Dataset columns (field name, type, observed length range):

query_id             string   length 1-6
query                string   length 2-185
positive_passages    list     length 1-121
negative_passages    list     length 15-100
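Each record below follows this schema: a query_id, a query string, and lists of positive and negative passages, where every passage is an object with docid, text and title fields. The lines that follow are a minimal, illustrative sketch of how such a dump can be loaded and iterated with the Hugging Face datasets library; the file name "retrieval_records.jsonl" is a placeholder, not a path taken from this document.

    # Minimal sketch, assuming the records are stored as JSON lines with the
    # schema listed above. "retrieval_records.jsonl" is a placeholder path.
    from datasets import load_dataset

    ds = load_dataset("json", data_files="retrieval_records.jsonl", split="train")

    for record in ds:
        qid = record["query_id"]                 # string, 1-6 characters
        query = record["query"]                  # string, 2-185 characters
        positives = record["positive_passages"]  # list of {"docid", "text", "title"} objects
        negatives = record["negative_passages"]  # list of {"docid", "text", "title"} objects
        print(qid, query, len(positives), len(negatives))
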
1840213
Tree Edit Distance Learning via Adaptive Symbol Embeddings
[ { "docid": "pos:1840213_0", "text": "This paper considers the problem of computing the editing distance between unordered, labeled trees. We give efficient polynomial-time algorithms for the case when one tree is a string or has a bounded number of leaves. By contrast, we show that the problem is NP -complete even for binary trees having a label alphabet of size two. keywords: Computational Complexity, Unordered trees, NP -completeness.", "title": "" }, { "docid": "pos:1840213_1", "text": "We propose a new learning method, \"Generalized Learning Vector Quantization (GLVQ),\" in which reference vectors are updated based on the steepest descent method in order to minimize the cost function . The cost function is determined so that the obtained learning rule satisfies the convergence condition. We prove that Kohonen's rule as used in LVQ does not satisfy the convergence condition and thus degrades recognition ability. Experimental results for printed Chinese character recognition reveal that GLVQ is superior to LVQ in recognition ability.", "title": "" } ]
[ { "docid": "neg:1840213_0", "text": "MAC address randomization is a common privacy protection measure deployed in major operating systems today. It is used to prevent user-tracking with probe requests that are transmitted during IEEE 802.11 network scans. We present an attack to defeat MAC address randomization through observation of the timings of the network scans with an off-the-shelf Wi-Fi interface. This attack relies on a signature based on inter-frame arrival times of probe requests, which is used to group together frames coming from the same device although they use distinct MAC addresses. We propose several distance metrics based on timing and use them together with an incremental learning algorithm in order to group frames. We show that these signatures are consistent over time and can be used as a pseudo-identifier to track devices. Our framework is able to correctly group frames using different MAC addresses but belonging to the same device in up to 75% of the cases. These results show that the timing of 802.11 probe frames can be abused to track individual devices and that address randomization alone is not always enough to protect users against tracking.", "title": "" }, { "docid": "neg:1840213_1", "text": "The contour tree is an abstraction of a scalar field that encodes the nesting relationships of isosurfaces. We show how to use the contour tree to represent individual contours of a scalar field, how to simplify both the contour tree and the topology of the scalar field, how to compute and store geometric properties for all possible contours in the contour tree, and how to use the simplified contour tree as an interface for exploratory visualization.", "title": "" }, { "docid": "neg:1840213_2", "text": "Scientific literature cites a wide range of values for caffeine content in food products. The authors suggest the following standard values for the United States: coffee (5 oz) 85 mg for ground roasted coffee, 60 mg for instant and 3 mg for decaffeinated; tea (5 oz): 30 mg for leaf/bag and 20 mg for instant; colas: 18 mg/6 oz serving; cocoa/hot chocolate: 4 mg/5 oz; chocolate milk: 4 mg/6 oz; chocolate candy: 1.5-6.0 mg/oz. Some products from the United Kingdom and Denmark have higher caffeine content. Caffeine consumption survey data are limited. Based on product usage and available consumption data, the authors suggest a mean daily caffeine intake for US consumers of 4 mg/kg. Among children younger than 18 years of age who are consumers of caffeine-containing foods, the mean daily caffeine intake is about 1 mg/kg. Both adults and children in Denmark and UK have higher levels of caffeine intake.", "title": "" }, { "docid": "neg:1840213_3", "text": "In this paper, we present for the first time the realization of a 77 GHz chip-to-rectangular waveguide transition realized in an embedded Wafer Level Ball Grid Array (eWLB) package. The chip is contacted with a coplanar waveguide (CPW). For the transformation of the transverse electromagnetic (TEM) mode of the CPW line to the transverse electric (TE) mode of the rectangular waveguide an insert is used in the eWLB package. This insert is based on radio-frequency (RF) printed circuit board (PCB) technology. Micro vias formed in the insert are used to realize the sidewalls of the rectangular waveguide structure in the fan-out area of the eWLB package. The redistribution layers (RDLs) on the top and bottom surface of the package form the top and bottom wall, respectively. 
We present two possible variants of transforming the TEM mode to the TE mode. The first variant uses a via realized in the rectangular waveguide structure. The second variant uses only the RDLs of the eWLB package for mode conversion. We present simulation and measurement results of both variants. We obtain an insertion loss of 1.5 dB and return loss better than 10 dB. The presented results show that this approach is an attractive candidate for future low loss and highly integrated RF systems.", "title": "" }, { "docid": "neg:1840213_4", "text": "A practical limitation of deep neural networks is their high degree of specialization to a single task and visual domain. Recently, inspired by the successes of transfer learning, several authors have proposed to learn instead universal feature extractors that, used as the first stage of any deep network, work well for several tasks and domains simultaneously. Nevertheless, such universal features are still somewhat inferior to specialized networks. To overcome this limitation, in this paper we propose to consider instead universal parametric families of neural networks, which still contain specialized problem-specific models, but differing only by a small number of parameters. We study different designs for such parametrizations, including series and parallel residual adapters, joint adapter compression, and parameter allocations, and empirically identify the ones that yield the highest compression. We show that, in order to maximize performance, it is necessary to adapt both shallow and deep layers of a deep network, but the required changes are very small. We also show that these universal parametrization are very effective for transfer learning, where they outperform traditional fine-tuning techniques.", "title": "" }, { "docid": "neg:1840213_5", "text": "Technology must work for human race and improve the way help reaches a person in distress in the shortest possible time. In a developing nation like India, with the advancement in the transportation technology and rise in the total number of vehicles, road accidents are increasing at an alarming rate. If an accident occurs, the victim's survival rate increases when you give immediate medical assistance. You can give medical assistance to an accident victim only when you know the exact location of the accident. This paper presents an inexpensive but intelligent framework that can identify and report an accident for two-wheelers. This paper targets two-wheelers because the mortality ratio is highest in two-wheeler accidents in India. This framework includes a microcontroller-based low-cost Accident Detection Unit (ADU) that contains a GPS positioning system and a GSM modem to sense and generate accidental events to a centralized server. The ADU calculates acceleration along with ground clearance of the vehicle to identify the accidental situation. On detecting an accident, ADU sends accident detection parameters, GPS coordinates, and the current time to the Accident Detection Server (ADS). ADS maintain information on the movement of the vehicle according to the historical data, current data, and the rules that you configure in the system. If an accident occurs, ADS notifies the emergency services and the preconfigured mobile numbers for the vehicle that contains this unit.", "title": "" }, { "docid": "neg:1840213_6", "text": "As the number of indoor location based services increase in the mobile market, service providers demanding more accurate indoor positioning information. 
To improve the location estimation accuracy, this paper presents an enhanced client centered location prediction scheme and filtering algorithm based on IEEE 802.11 ac standard for indoor positioning system. We proposed error minimization filtering algorithm for accurate location prediction and also introduce IEEE 802.11 ac mobile client centered location correction scheme without modification of preinstalled infrastructure Access Point. Performance evaluation based on MATLAB simulation highlight the enhanced location prediction performance. We observe a significant error reduction of angle estimation.", "title": "" }, { "docid": "neg:1840213_7", "text": "Software defect prediction, which predicts defective code regions, can help developers find bugs and prioritize their testing efforts. To build accurate prediction models, previous studies focus on manually designing features that encode the characteristics of programs and exploring different machine learning algorithms. Existing traditional features often fail to capture the semantic differences of programs, and such a capability is needed for building accurate prediction models.\n To bridge the gap between programs' semantics and defect prediction features, this paper proposes to leverage a powerful representation-learning algorithm, deep learning, to learn semantic representation of programs automatically from source code. Specifically, we leverage Deep Belief Network (DBN) to automatically learn semantic features from token vectors extracted from programs' Abstract Syntax Trees (ASTs).\n Our evaluation on ten open source projects shows that our automatically learned semantic features significantly improve both within-project defect prediction (WPDP) and cross-project defect prediction (CPDP) compared to traditional features. Our semantic features improve WPDP on average by 14.7% in precision, 11.5% in recall, and 14.2% in F1. For CPDP, our semantic features based approach outperforms the state-of-the-art technique TCA+ with traditional features by 8.9% in F1.", "title": "" }, { "docid": "neg:1840213_8", "text": "Iterative refinement reduces the roundoff errors in the computed solution to a system of linear equations. Only one step requires higher precision arithmetic. If sufficiently high precision is used, the final result is shown to be very accurate.", "title": "" }, { "docid": "neg:1840213_9", "text": "OBJECTIVE\nCompared to other eating disorders, anorexia nervosa (AN) has the highest rates of completed suicide whereas suicide attempt rates are similar or lower than in bulimia nervosa (BN). Attempted suicide is a key predictor of suicide, thus this mismatch is intriguing. We sought to explore whether the clinical characteristics of suicidal acts differ between suicide attempters with AN, BN or without an eating disorders (ED).\n\n\nMETHOD\nCase-control study in a cohort of suicide attempters (n = 1563). Forty-four patients with AN and 71 with BN were compared with 235 non-ED attempters matched for sex, age and education, using interview measures of suicidal intent and severity.\n\n\nRESULTS\nAN patients were more likely to have made a serious attempt (OR = 3.4, 95% CI 1.4-7.9), with a higher expectation of dying (OR = 3.7,95% CI 1.1-13.5), and an increased risk of severity (OR = 3.4,95% CI 1.2-9.6). BN patients did not differ from the control group. Clinical markers of the severity of ED were associated with the seriousness of the attempt.\n\n\nCONCLUSION\nThere are distinct features of suicide attempts in AN. 
This may explain the higher suicide rates in AN. Higher completed suicide rates in AN may be partially explained by AN patients' higher desire to die and their more severe and lethal attempts.", "title": "" }, { "docid": "neg:1840213_10", "text": "In this work, we provide a new formulation for Graph Convolutional Neural Networks (GCNNs) for link prediction on graph data that addresses common challenges for biomedical knowledge graphs (KGs). We introduce a regularized attention mechanism to GCNNs that not only improves performance on clean datasets, but also favorably accommodates noise in KGs, a pervasive issue in realworld applications. Further, we explore new visualization methods for interpretable modelling and to illustrate how the learned representation can be exploited to automate dataset denoising. The results are demonstrated on a synthetic dataset, the common benchmark dataset FB15k-237, and a large biomedical knowledge graph derived from a combination of noisy and clean data sources. Using these improvements, we visualize a learned model’s representation of the disease cystic fibrosis and demonstrate how to interrogate a neural network to show the potential of PPARG as a candidate therapeutic target for rheumatoid arthritis.", "title": "" }, { "docid": "neg:1840213_11", "text": "In multiple attribute decision analysis (MADA), one often needs to deal with both numerical data and qualitative information with uncertainty. It is essential to properly represent and use uncertain information to conduct rational decision analysis. Based on a multilevel evaluation framework, an evidential reasoning (ER) approach has been developed for supporting such decision analysis, the kernel of which is an ER algorithm developed on the basis of the framework and the evidence combination rule of the Dempster–Shafer (D–S) theory. The approach has been applied to engineering design selection, organizational self-assessment, safety and risk assessment, and supplier assessment. In this paper, the fundamental features of the ER approach are investigated. New schemes for weight normalization and basic probability assignments are proposed. The original ER approach is further developed to enhance the process of aggregating attributes with uncertainty. Utility intervals are proposed to describe the impact of ignorance on decision analysis. Several properties of the new ER approach are explored, which lay the theoretical foundation of the ER approach. A numerical example of a motorcycle evaluation problem is examined using the ER approach. Computation steps and analysis results are provided in order to demonstrate its implementation process.", "title": "" }, { "docid": "neg:1840213_12", "text": "AIMS AND OBJECTIVES\nTo examine the association between trait emotional intelligence and learning strategies and their influence on academic performance among first-year accelerated nursing students.\n\n\nDESIGN\nThe study used a prospective survey design.\n\n\nMETHODS\nA sample size of 81 students (100% response rate) who undertook the accelerated nursing course at a large university in Sydney participated in the study. Emotional intelligence was measured using the adapted version of the 144-item Trait Emotional Intelligence Questionnaire. Four subscales of the Motivated Strategies for Learning Questionnaire were used to measure extrinsic goal motivation, peer learning, help seeking and critical thinking among the students. 
The grade point average score obtained at the end of six months was used to measure academic achievement.\n\n\nRESULTS\nThe results demonstrated a statistically significant correlation between emotional intelligence scores and critical thinking (r = 0.41; p < 0.001), help seeking (r = 0.33; p < 0.003) and peer learning (r = 0.32; p < 0.004) but not with extrinsic goal orientation (r = -0.05; p < 0.677). Emotional intelligence emerged as a significant predictor of academic achievement (β = 0.25; p = 0.023).\n\n\nCONCLUSION\nIn addition to their learning styles, higher levels of awareness and understanding of their own emotions have a positive impact on students' academic achievement. Higher emotional intelligence may lead students to pursue their interests more vigorously and think more expansively about subjects of interest, which could be an explanatory factor for higher academic performance in this group of nursing students.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nThe concepts of emotional intelligence are central to clinical practice as nurses need to know how to deal with their own emotions as well as provide emotional support to patients and their families. It is therefore essential that these skills are developed among student nurses to enhance the quality of their clinical practice.", "title": "" }, { "docid": "neg:1840213_13", "text": "Many mature term-based or pattern-based approaches have been used in the field of information filtering to generate users' information needs from a collection of documents. A fundamental assumption for these approaches is that the documents in the collection are all about one topic. However, in reality users' interests can be diverse and the documents in the collection often involve multiple topics. Topic modelling, such as Latent Dirichlet Allocation (LDA), was proposed to generate statistical models to represent multiple topics in a collection of documents, and this has been widely utilized in the fields of machine learning and information retrieval, etc. But its effectiveness in information filtering has not been so well explored. Patterns are always thought to be more discriminative than single terms for describing documents. However, the enormous amount of discovered patterns hinder them from being effectively and efficiently used in real applications, therefore, selection of the most discriminative and representative patterns from the huge amount of discovered patterns becomes crucial. To deal with the above mentioned limitations and problems, in this paper, a novel information filtering model, Maximum matched Pattern-based Topic Model (MPBTM), is proposed. The main distinctive features of the proposed model include: (1) user information needs are generated in terms of multiple topics; (2) each topic is represented by patterns; (3) patterns are generated from topic models and are organized in terms of their statistical and taxonomic features; and (4) the most discriminative and representative patterns, called Maximum Matched Patterns, are proposed to estimate the document relevance to the user's information needs in order to filter out irrelevant documents. Extensive experiments are conducted to evaluate the effectiveness of the proposed model by using the TREC data collection Reuters Corpus Volume 1. 
The results show that the proposed model significantly outperforms both state-of-the-art term-based models and pattern-based models.", "title": "" }, { "docid": "neg:1840213_14", "text": "Cloud computing providers such as Amazon and Google have recently begun offering container-instances, which provide an efficient route to application deployment within a lightweight, isolated and well-defined execution environment. Cloud providers currently offer Container Service Platforms (CSPs), which orchestrate containerised applications. Existing CSP frameworks do not offer any form of intelligent resource scheduling: applications are usually scheduled individually, rather than taking a holistic view of all registered applications and available resources in the cloud. This can result in increased execution times for applications, resource wastage through underutilised container-instances, and a reduction in the number of applications that can be deployed, given the available resources. The research presented in this paper aims to extend existing systems by adding a cloud-based Container Management Service (CMS) framework that offers increased deployment density, scalability and resource efficiency. CMS provides additional functionalities for orchestrating containerised applications by joint optimisation of sets of containerised applications, and resource pool in multiple (geographical distributed) cloud regions. We evaluated CMS on a cloud-based CSP i.e., Amazon EC2 Container Management Service (ECS) and conducted extensive experiments using sets of CPU and Memory intensive containerised applications against the direct deployment strategy of Amazon ECS. The results show that CMS achieves up to 25% higher cluster utilisation, and up to 70% reduction in execution times.", "title": "" }, { "docid": "neg:1840213_15", "text": "Endometriosis is a benign and common disorder that is characterized by ectopic endometrium outside the uterus. Extrapelvic endometriosis, like of the vulva, is rarely seen. We report a case of a 47-year-old woman referred to our clinic due to complaints of a vulvar mass and periodic swelling of the mass at the time of menstruation. During surgery, the cyst ruptured and a chocolate-colored liquid escaped onto the surgical field. The cyst was extirpated totally. Hipstopathological examination showed findings compatible with endometriosis. She was asked to follow-up after three weeks. The patient had no complaints and the incision field was clear at the follow-up.", "title": "" }, { "docid": "neg:1840213_16", "text": "This paper describes a self-localization for indoor mobile robots based on integrating measurement values from multiple optical mouse sensors and a global camera. This paper consists of two parts. Firstly, we propose a dead-reckoning based on increments of the robot movements read directly from the floor using optical mouse sensors. Since the measurement values from multiple optical mouse sensors are compared to each other and only the reliable values are selected, accurate dead-reckoning can be realized compared with the conventional method based on increments of wheel rotations. 
Secondly, in order to realize robust localization, we propose a method of estimating position and orientation by integrating measured robot position (orientation information is not included) via global camera and dead-reckoning with the Kalman filter", "title": "" }, { "docid": "neg:1840213_17", "text": "In this paper, lossless and near-lossless compression algorithms for multichannel electroencephalogram (EEG) signals are presented based on image and volumetric coding. Multichannel EEG signals have significant correlation among spatially adjacent channels; moreover, EEG signals are also correlated across time. Suitable representations are proposed to utilize those correlations effectively. In particular, multichannel EEG is represented either in the form of image (matrix) or volumetric data (tensor), next a wavelet transform is applied to those EEG representations. The compression algorithms are designed following the principle of “lossy plus residual coding,” consisting of a wavelet-based lossy coding layer followed by arithmetic coding on the residual. Such approach guarantees a specifiable maximum error between original and reconstructed signals. The compression algorithms are applied to three different EEG datasets, each with different sampling rate and resolution. The proposed multichannel compression algorithms achieve attractive compression ratios compared to algorithms that compress individual channels separately.", "title": "" }, { "docid": "neg:1840213_18", "text": "Automatically filtering relevant information about a real-world incident from Social Web streams and making the information accessible and findable in the given context of the incident are non-trivial scientific challenges. In this paper, we engineer and evaluate solutions that analyze the semantics of Social Web data streams to solve these challenges. We introduce Twitcident, a framework and Web-based system for filtering, searching and analyzing information about real-world incidents or crises. Given an incident, our framework automatically starts tracking and filtering information that is relevant for the incident from Social Web streams and Twitter particularly. It enriches the semantics of streamed messages to profile incidents and to continuously improve and adapt the information filtering to the current temporal context. Faceted search and analytical tools allow people and emergency services to retrieve particular information fragments and overview and analyze the current situation as reported on the Social Web.\n We put our Twitcident system into practice by connecting it to emergency broadcasting services in the Netherlands to allow for the retrieval of relevant information from Twitter streams for any incident that is reported by those services. We conduct large-scale experiments in which we evaluate (i) strategies for filtering relevant information for a given incident and (ii) search strategies for finding particular information pieces. Our results prove that the semantic enrichment offered by our framework leads to major and significant improvements of both the filtering and the search performance. A demonstration is available via: http://wis.ewi.tudelft.nl/twitcident/", "title": "" }, { "docid": "neg:1840213_19", "text": "This article proposes an informational perspective on comparison consequences in social judgment. It is argued that to understand the variable consequences of comparison, one has to examine what target knowledge is activated during the comparison process. 
These informational underpinnings are conceptualized in a selective accessibility model that distinguishes 2 fundamental comparison processes. Similarity testing selectively makes accessible knowledge indicating target-standard similarity, whereas dissimilarity testing selectively makes accessible knowledge indicating target-standard dissimilarity. These respective subsets of target knowledge build the basis for subsequent target evaluations, so that similarity testing typically leads to assimilation whereas dissimilarity testing typically leads to contrast. The model is proposed as a unifying conceptual framework that integrates diverse findings on comparison consequences in social judgment.", "title": "" } ]
1840214
RFID technology: Beyond cash-based methods in vending machine
[ { "docid": "pos:1840214_0", "text": "Today, at the low end of the communication protocols we find the inter-integrated circuit (I2C) and the serial peripheral interface (SPI) protocols. Both protocols are well suited for communications between integrated circuits for slow communication with on-board peripherals. The two protocols coexist in modern digital electronics systems, and they probably will continue to compete in the future, as both I2C and SPI are actually quite complementary for this kind of communication.", "title": "" }, { "docid": "pos:1840214_1", "text": "The purpose of this paper is to develop a wireless system to detect and maintain the attendance of a student and locate a student. For, this the students ID (identification) card is tagged with an Radio-frequency identification (RFID) passive tag which is matched against the database and only finalized once his fingerprint is verified using the biometric fingerprint scanner. The guardian is intimated by a sms (short message service) sent using the GSM (Global System for Mobile Communications) Modem of the same that is the student has reached the university or not on a daily basis, in the present day every guardian is worried whether his child has reached safely or not. In every classroom, laboratory, libraries, staffrooms etc. a RFID transponder is installed through which we will be detecting the location of the student and staff. There will be a website through which the student, teacher and the guardians can view the status of attendance and location of a student at present in the campus. A person needs to be located can be done by two means that is via the website or by sending the roll number of the student as an sms to the GSM modem which will reply by taking the last location stored of the student in the database.", "title": "" }, { "docid": "pos:1840214_2", "text": "In this paper we propose an implementation technique for sequential circuit using single electron tunneling technology (SET-s) with the example of designing of a “coffee vending machine” with the goal of getting low power and faster operation. We implement the proposed design based on single electron encoded logic (SEEL).The circuit is tested and compared with the existing CMOS technology.", "title": "" } ]
[ { "docid": "neg:1840214_0", "text": "Inspired by the aerial maneuvering ability of lizards, we present the design and control of MSU (Michigan State University) tailbot - a miniature-tailed jumping robot. The robot can not only wheel on the ground, but also jump up to overcome obstacles. Moreover, once leaping into the air, it can control its body angle using an active tail to dynamically maneuver in midair for safe landings. We derive the midair dynamics equation and design controllers, such as a sliding mode controller, to stabilize the body at desired angles. To the best of our knowledge, this is the first miniature (maximum size 7.5 cm) and lightweight (26.5 g) robot that can wheel on the ground, jump to overcome obstacles, and maneuver in midair. Furthermore, tailbot is equipped with on-board energy, sensing, control, and wireless communication capabilities, enabling tetherless or autonomous operations. The robot in this paper exemplifies the integration of mechanical design, embedded system, and advanced control methods that will inspire the next-generation agile robots mimicking their biological counterparts. Moreover, it can serve as mobile sensor platforms for wireless sensor networks with many field applications.", "title": "" }, { "docid": "neg:1840214_1", "text": "To enable real-time, person-independent 3D registration from 2D video, we developed a 3D cascade regression approach in which facial landmarks remain invariant across pose over a range of approximately 60 degrees. From a single 2D image of a person's face, a dense 3D shape is registered in real time for each frame. The algorithm utilizes a fast cascade regression framework trained on high-resolution 3D face-scans of posed and spontaneous emotion expression. The algorithm first estimates the location of a dense set of markers and their visibility, then reconstructs face shapes by fitting a part-based 3D model. Because no assumptions are required about illumination or surface properties, the method can be applied to a wide range of imaging conditions that include 2D video and uncalibrated multi-view video. The method has been validated in a battery of experiments that evaluate its precision of 3D reconstruction and extension to multi-view reconstruction. Experimental findings strongly support the validity of real-time, 3D registration and reconstruction from 2D video. The software is available online at http://zface.org.", "title": "" }, { "docid": "neg:1840214_2", "text": "The twenty-first century global population will be increasingly urban-focusing the sustainability challenge on cities and raising new challenges to address urban resilience capacity. Landscape ecologists are poised to contribute to this challenge in a transdisciplinary mode in which science and research are integrated with planning policies and design applications. Five strategies to build resilience capacity and transdisciplinary collaboration are proposed: biodiversity; urban ecological networks and connectivity; multifunctionality; redundancy and modularization, adaptive design. Key research questions for landscape ecologists, planners and designers are posed to advance the development of knowledge in an adaptive mode.", "title": "" }, { "docid": "neg:1840214_3", "text": "Coercing new programmers to adopt disciplined development practices such as thorough unit testing is a challenging endeavor. Test-driven development (TDD) has been proposed as a solution to improve both software design and testing. 
Test-driven learning (TDL) has been proposed as a pedagogical approach for teaching TDD without imposing significant additional instruction time.\n This research evaluates the effects of students using a test-first (TDD) versus test-last approach in early programming courses, and considers the use of TDL on a limited basis in CS1 and CS2. Software testing, programmer productivity, programmer performance, and programmer opinions are compared between test-first and test-last programming groups. Results from this research indicate that a test-first approach can increase student testing and programmer performance, but that early programmers are very reluctant to adopt a test-first approach, even after having positive experiences using TDD. Further, this research demonstrates that TDL can be applied in CS1/2, but suggests that a more pervasive implementation of TDL may be necessary to motivate and establish disciplined testing practice among early programmers.", "title": "" }, { "docid": "neg:1840214_4", "text": "With the rapid growth of battery-powered portable electronics, an efficient power management solution is necessary for extending battery life. Generally, basic switching regulators, such as buck and boost converters, may not be capable of using the entire battery output voltage range (e.g., 2.5-4.7 V for Li-ion batteries) to provide a fixed output voltage (e.g., 3.3 V). In this paper, an average-current-mode noninverting buck-boost dc-dc converter is proposed. It is not only able to use the full output voltage range of a Li-ion battery, but it also features high power efficiency and excellent noise immunity. The die area of this chip is 2.14 × 1.92 mm2, fabricated by using TSMC 0.35 μm 2P4M 3.3 V/5 V mixed-signal polycide process. The input voltage of the converter may range from 2.3 to 5 V with its output voltage set to 3.3 V, and its switching frequency is 500 kHz. Moreover, it can provide up to 400-mA load current, and the maximal measured efficiency is 92.01%.", "title": "" }, { "docid": "neg:1840214_5", "text": "Creating rich representations of environments requires integration of multiple sensing modalities with complementary characteristics such as range and imaging sensors. To precisely combine multisensory information, the rigid transformation between different sensor coordinate systems (i.e., extrinsic parameters) must be estimated. The majority of existing extrinsic calibration techniques require one or multiple planar calibration patterns (such as checkerboards) to be observed simultaneously from the range and imaging sensors. The main limitation of these approaches is that they require modifying the scene with artificial targets. In this paper, we present a novel algorithm for extrinsically calibrating a range sensor with respect to an image sensor with no requirement of external artificial targets. The proposed method exploits natural linear features in the scene to precisely determine the rigid transformation between the coordinate frames. First, a set of 3D lines (plane intersection and boundary line segments) are extracted from the point cloud, and a set of 2D line segments are extracted from the image. Correspondences between the 3D and 2D line segments are used as inputs to an optimization problem which requires jointly estimating the relative translation and rotation between the coordinate frames. The proposed method is not limited to any particular types or configurations of sensors. 
To demonstrate robustness, efficiency and generality of the presented algorithm, we include results using various sensor configurations.", "title": "" }, { "docid": "neg:1840214_6", "text": "While the subject of cyberbullying of children and adolescents has begun to be addressed, less attention and research have focused on cyberbullying in the workplace. Male-dominated workplaces such as manufacturing settings are found to have an increased risk of workplace bullying, but the prevalence of cyberbullying in this sector is not known. This exploratory study investigated the prevalence and methods of face-to-face bullying and cyberbullying of males at work. One hundred three surveys (a modified version of the revised Negative Acts Questionnaire [NAQ-R]) were returned from randomly selected members of the Australian Manufacturing Workers' Union (AMWU). The results showed that 34% of respondents were bullied face-to-face, and 10.7% were cyberbullied. All victims of cyberbullying also experienced face-to-face bullying. The implications for organizations' \"duty of care\" in regard to this new form of bullying are indicated.", "title": "" }, { "docid": "neg:1840214_7", "text": "1. THE GALLERY This paper describes the design and implementation of a virtual art gallery on the Web. The main objective of the virtual art gallery is to provide a navigational environment for the Networked Digital Library of Theses and Dissertations (NDLTD see http://www.ndltd.org and Fox et al., 1996), where users can browse and search electronic theses and dissertations (ETDs) indexed by the pictures--images and figures--that the ETDs contain. The galleries in the system and the ETD collection are simultaneously organized using a college/department taxonomic hierarchy. The system architecture has the general characteristics of the client-server model. Users access the system by using a Web browser to follow a link to one of the named art galleries. This link has the form of a CGI query which is passed by the Web server to a CGI program that handles it as a data request by accessing the database server. The query results are used to generate a view of the art gallery (as a VRML or HTML file) and are sent back to the user’s browser. We employ several technologies for improving the system’s efficiency and flexibility. We define a Simple Template Language (STL) to specify the format of VRML or HTML templates. Documents are generated by merging pre-designed VRML or HTML files and the data coming from the Database. Therefore, the generated virtual galleries always contain the most recent database information. We also define a Data Service Protocol (DSP), which normalizes the request from the user to communicate it with the Data Server and ultimately the image database.", "title": "" }, { "docid": "neg:1840214_8", "text": "SUMMARY\nSOAP2 is a significantly improved version of the short oligonucleotide alignment program that both reduces computer memory usage and increases alignment speed at an unprecedented rate. We used a Burrows Wheeler Transformation (BWT) compression index to substitute the seed strategy for indexing the reference sequence in the main memory. We tested it on the whole human genome and found that this new algorithm reduced memory usage from 14.7 to 5.4 GB and improved alignment speed by 20-30 times. SOAP2 is compatible with both single- and paired-end reads. Additionally, this tool now supports multiple text and compressed file formats. 
A consensus builder has also been developed for consensus assembly and SNP detection from alignment of short reads on a reference genome.\n\n\nAVAILABILITY\nhttp://soap.genomics.org.cn.", "title": "" }, { "docid": "neg:1840214_9", "text": "Aspect-based opinion mining from online reviews has attracted a lot of attention recently. The main goal of all of the proposed methods is extracting aspects and/or estimating aspect ratings. Recent works, which are often based on Latent Dirichlet Allocation (LDA), consider both tasks simultaneously. These models are normally trained at the item level, i.e., a model is learned for each item separately. Learning a model per item is fine when the item has been reviewed extensively and has enough training data. However, in real-life data sets such as those from Epinions.com and Amazon.com more than 90% of items have less than 10 reviews, so-called cold start items. State-of-the-art LDA models for aspect-based opinion mining are trained at the item level and therefore perform poorly for cold start items due to the lack of sufficient training data. In this paper, we propose a probabilistic graphical model based on LDA, called Factorized LDA (FLDA), to address the cold start problem. The underlying assumption of FLDA is that aspects and ratings of a review are influenced not only by the item but also by the reviewer. It further assumes that both items and reviewers can be modeled by a set of latent factors which represent their aspect and rating distributions. Different from state-of-the-art LDA models, FLDA is trained at the category level and learns the latent factors using the reviews of all the items of a category, in particular the non cold start items, and uses them as prior for cold start items. Our experiments on three real-life data sets demonstrate the improved effectiveness of the FLDA model in terms of likelihood of the held-out test set. We also evaluate the accuracy of FLDA based on two application-oriented measures.", "title": "" }, { "docid": "neg:1840214_10", "text": "Finding semantically rich and computer-understandable representations for textual dialogues, utterances and words is crucial for dialogue systems (or conversational agents), as their performance mostly depends on understanding the context of conversations. In recent research approaches, responses have been generated utilizing a decoder architecture, given the distributed vector representation (embedding) of the current conversation. In this paper, the utilization of embeddings for answer retrieval is explored by using Locality-Sensitive Hashing Forest (LSH Forest), an Approximate Nearest Neighbor (ANN) model, to find similar conversations in a corpus and rank possible candidates. Experimental results on the well-known Ubuntu Corpus (in English) and a customer service chat dataset (in Dutch) show that, in combination with a candidate selection method, retrieval-based approaches outperform generative ones and reveal promising future research directions towards the usability of such a system.", "title": "" }, { "docid": "neg:1840214_11", "text": "Increasing emphasis has been placed on the use of effect size reporting in the analysis of social science data. Nonetheless, the use of effect size reporting remains inconsistent, and interpretation of effect size estimates continues to be confused. Researchers are presented with numerous effect sizes estimate options, not all of which are appropriate for every research question. 
Clinicians also may have little guidance in the interpretation of effect sizes relevant for clinical practice. The current article provides a primer of effect size estimates for the social sciences. Common effect sizes estimates, their use, and interpretations are presented as a guide for researchers.", "title": "" }, { "docid": "neg:1840214_12", "text": "With the new generation of full low floor passenger trains, the constraints of weight and size on the traction transformer are becoming stronger. The ultimate target weight for the transformer is 1 kg/kVA. The reliability and the efficiency are also becoming more important. To address these issues, a multilevel topology using medium frequency transformers has been developed. It permits to reduce the weight and the size of the system and improves the global life cycle cost of the vehicle. The proposed multilevel converter consists of sixteen bidirectional direct current converters (cycloconverters) connected in series to the catenary 15 kV, 16.7 Hz through a choke inductor. The cycloconverters are connected to sixteen medium frequency transformers (400 Hz) that are fed by sixteen four-quadrant converters connected in parallel to a 1.8 kV DC link with a 2f filter. The control, the command and the hardware of the prototype are described in detail.", "title": "" }, { "docid": "neg:1840214_13", "text": "The Luneburg lens is an aberration-free lens that focuses light from all directions equally well. We fabricated and tested a Luneburg lens in silicon photonics. Such fully-integrated lenses may become the building blocks of compact Fourier optics on chips. Furthermore, our fabrication technique is sufficiently versatile for making perfect imaging devices on silicon platforms.", "title": "" }, { "docid": "neg:1840214_14", "text": "It is common practice for developers of user-facing software to transform a mock-up of a graphical user interface (GUI) into code. This process takes place both at an application’s inception and in an evolutionary context as GUI changes keep pace with evolving features. Unfortunately, this practice is challenging and time-consuming. In this paper, we present an approach that automates this process by enabling accurate prototyping of GUIs via three tasks: detection, classification, and assembly. First, logical components of a GUI are detected from a mock-up artifact using either computer vision techniques or mock-up metadata. Then, software repository mining, automated dynamic analysis, and deep convolutional neural networks are utilized to accurately classify GUI-components into domain-specific types (e.g., toggle-button). Finally, a data-driven, K-nearest-neighbors algorithm generates a suitable hierarchical GUI structure from which a prototype application can be automatically assembled. We implemented this approach for Android in a system called REDRAW. Our evaluation illustrates that REDRAW achieves an average GUI-component classification accuracy of 91% and assembles prototype applications that closely mirror target mock-ups in terms of visual affinity while exhibiting reasonable code structure. Interviews with industrial practitioners illustrate ReDraw’s potential to improve real development workflows.", "title": "" }, { "docid": "neg:1840214_15", "text": "Developmental programming resulting from maternal malnutrition can lead to an increased risk of metabolic disorders such as obesity, insulin resistance, type 2 diabetes and cardiovascular disorders in the offspring in later life. 
Furthermore, many conditions linked with developmental programming are also known to be associated with the aging process. This review summarizes the available evidence about the molecular mechanisms underlying these effects, with the potential to identify novel areas of therapeutic intervention. This could also lead to the discovery of new treatment options for improved patient outcomes.", "title": "" }, { "docid": "neg:1840214_16", "text": "The authors realize a 50% length reduction of short-slot couplers in a post-wall dielectric substrate by two techniques. One is to introduce hollow rectangular holes near the side walls of the coupled region. The difference of phase constant between the TE10 and TE20 propagating modes increases and the required length to realize a desired dividing ratio is reduced. Another is to remove two reflection-suppressing posts in the coupled region. The length of the coupled region is determined to cancel the reflections at both ends of the coupled region. The total length of a 4-way Butler matrix can be reduced to 48% in comparison with the conventional one and the couplers still maintain good dividing characteristics; the dividing ratio of the hybrid is less than 0.1 dB and the isolations of the couplers are more than 20 dB. key words: short-slot coupler, length reduction, Butler matrix, post-wall waveguide, dielectric substrate, rectangular hole", "title": "" }, { "docid": "neg:1840214_17", "text": "This report describes the difficulties of training neural networks and in particular deep neural networks. It then provides a literature review of training methods for deep neural networks, with a focus on pre-training. It focuses on Deep Belief Networks composed of Restricted Boltzmann Machines and Stacked Autoencoders and provides an outreach on further and alternative approaches. It also includes related practical recommendations from the literature on training them. In the second part, initial experiments using some of the covered methods are performed on two databases. In particular, experiments are performed on the MNIST hand-written digit dataset and on facial emotion data from a Kaggle competition. The results are discussed in the context of results reported in other research papers. An error rate lower than the best contribution to the Kaggle competition is achieved using an optimized Stacked Autoencoder.", "title": "" }, { "docid": "neg:1840214_18", "text": "The fastest tools for network reachability queries use adhoc algorithms to compute all packets from a source S that can reach a destination D. This paper examines whether network reachability can be solved efficiently using existing verification tools. While most verification tools only compute reachability (“Can S reach D?”), we efficiently generalize them to compute all reachable packets. Using new and old benchmarks, we compare model checkers, SAT solvers and various Datalog implementations. The only existing verification method that worked competitively on all benchmarks in seconds was Datalog with a new composite Filter-Project operator and a Difference of Cubes representation. While Datalog is slightly slower than the Hassel C tool, it is far more flexible. We also present new results that more precisely characterize the computational complexity of network verification. 
This paper also provides a gentle introduction to program verification for the networking community.", "title": "" }, { "docid": "neg:1840214_19", "text": "Recurrent Neural Networks (RNNs) are powerful models for sequential data that have the potential to learn long-term dependencies. However, they are computationally expensive to train and difficult to parallelize. Recent work has shown that normalizing intermediate representations of neural networks can significantly improve convergence rates in feed-forward neural networks [1]. In particular, batch normalization, which uses mini-batch statistics to standardize features, was shown to significantly reduce training time. In this paper, we investigate how batch normalization can be applied to RNNs. We show for both a speech recognition task and language modeling that the way we apply batch normalization leads to a faster convergence of the training criterion but doesn't seem to improve the generalization performance.", "title": "" } ]
1840215
Deep Learning for Lip Reading using Audio-Visual Information for Urdu Language
[ { "docid": "pos:1840215_0", "text": "Lipreading is the task of decoding text from the movement of a speaker’s mouth. Traditional approaches separated the problem into two stages: designing or learning visual features, and prediction. More recent deep lipreading approaches are end-to-end trainable (Wand et al., 2016; Chung & Zisserman, 2016a). All existing works, however, perform only word classification, not sentence-level sequence prediction. Studies have shown that human lipreading performance increases for longer words (Easton & Basala, 1982), indicating the importance of features capturing temporal context in an ambiguous communication channel. Motivated by this observation, we present LipNet, a model that maps a variable-length sequence of video frames to text, making use of spatiotemporal convolutions, an LSTM recurrent network, and the connectionist temporal classification loss, trained entirely end-to-end. To the best of our knowledge, LipNet is the first lipreading model to operate at sentence-level, using a single end-to-end speaker-independent deep model to simultaneously learn spatiotemporal visual features and a sequence model. On the GRID corpus, LipNet achieves 93.4% accuracy, outperforming experienced human lipreaders and the previous 79.6% state-of-the-art accuracy.", "title": "" } ]
[ { "docid": "neg:1840215_0", "text": "Massive Open Online Courses or MOOCs, both in their major approaches xMOOC and cMOOC, are attracting a very interesting debate about their influence in the higher education future. MOOC have both great defenders and detractors. As an emerging trend, MOOCs have a hype repercussion in technological and pedagogical areas, but they should demonstrate their real value in specific implementation and within institutional strategies. Independently, MOOCs have different issues such as high dropout rates and low number of cooperative activities among participants. This paper presents an adaptive proposal to be applied in MOOC definition and development, with a special attention to cMOOC, that may be useful to tackle the mentioned MOOC problems.", "title": "" }, { "docid": "neg:1840215_1", "text": "Substantial evidence suggests that mind-wandering typically occurs at a significant cost to performance. Mind-wandering–related deficits in performance have been observed in many contexts, most notably reading, tests of sustained attention, and tests of aptitude. Mind-wandering has been shown to negatively impact reading comprehension and model building, impair the ability to withhold automatized responses, and disrupt performance on tests of working memory and intelligence. These empirically identified costs of mind-wandering have led to the suggestion that mind-wandering may represent a pure failure of cognitive control and thus pose little benefit. However, emerging evidence suggests that the role of mind-wandering is not entirely pernicious. Recent studies have shown that mind-wandering may play a crucial role in both autobiographical planning and creative problem solving, thus providing at least two possible adaptive functions of the phenomenon. This article reviews these observed costs and possible functions of mind-wandering and identifies important avenues of future inquiry.", "title": "" }, { "docid": "neg:1840215_2", "text": "The coverage of a test suite is often used as a proxy for its ability to detect faults. However, previous studies that investigated the correlation between code coverage and test suite effectiveness have failed to reach a consensus about the nature and strength of the relationship between these test suite characteristics. Moreover, many of the studies were done with small or synthetic programs, making it unclear whether their results generalize to larger programs, and some of the studies did not account for the confounding influence of test suite size. In addition, most of the studies were done with adequate suites, which are are rare in practice, so the results may not generalize to typical test suites. \n We have extended these studies by evaluating the relationship between test suite size, coverage, and effectiveness for large Java programs. Our study is the largest to date in the literature: we generated 31,000 test suites for five systems consisting of up to 724,000 lines of source code. We measured the statement coverage, decision coverage, and modified condition coverage of these suites and used mutation testing to evaluate their fault detection effectiveness. \n We found that there is a low to moderate correlation between coverage and effectiveness when the number of test cases in the suite is controlled for. In addition, we found that stronger forms of coverage do not provide greater insight into the effectiveness of the suite. 
Our results suggest that coverage, while useful for identifying under-tested parts of a program, should not be used as a quality target because it is not a good indicator of test suite effectiveness.", "title": "" }, { "docid": "neg:1840215_3", "text": "Study Design Systematic review with meta-analysis. Background The addition of hip strengthening to knee strengthening for persons with patellofemoral pain has the potential to optimize treatment effects. There is a need to systematically review and pool the current evidence in this area. Objective To examine the efficacy of hip strengthening, associated or not with knee strengthening, to increase strength, reduce pain, and improve activity in individuals with patellofemoral pain. Methods A systematic review of randomized and/or controlled trials was performed. Participants in the reviewed studies were individuals with patellofemoral pain, and the experimental intervention was hip and knee strengthening. Outcome data related to muscle strength, pain, and activity were extracted from the eligible trials and combined in a meta-analysis. Results The review included 14 trials involving 673 participants. Random-effects meta-analyses revealed that hip and knee strengthening decreased pain (mean difference, -3.3; 95% confidence interval [CI]: -5.6, -1.1) and improved activity (standardized mean difference, 1.4; 95% CI: 0.03, 2.8) compared to no training/placebo. In addition, hip and knee strengthening was superior to knee strengthening alone for decreasing pain (mean difference, -1.5; 95% CI: -2.3, -0.8) and improving activity (standardized mean difference, 0.7; 95% CI: 0.2, 1.3). Results were maintained beyond the intervention period. Meta-analyses showed no significant changes in strength for any of the interventions. Conclusion Hip and knee strengthening is effective and superior to knee strengthening alone for decreasing pain and improving activity in persons with patellofemoral pain; however, these outcomes were achieved without a concurrent change in strength. Level of Evidence Therapy, level 1a-. J Orthop Sports Phys Ther 2018;48(1):19-31. Epub 15 Oct 2017. doi:10.2519/jospt.2018.7365.", "title": "" }, { "docid": "neg:1840215_4", "text": "We present a parser that relies primarily on extracting information directly from surface spans rather than on propagating information through enriched grammar structure. For example, instead of creating separate grammar symbols to mark the definiteness of an NP, our parser might instead capture the same information from the first word of the NP. Moving context out of the grammar and onto surface features can greatly simplify the structural component of the parser: because so many deep syntactic cues have surface reflexes, our system can still parse accurately with context-free backbones as minimal as Xbar grammars. Keeping the structural backbone simple and moving features to the surface also allows easy adaptation to new languages and even to new tasks. On the SPMRL 2013 multilingual constituency parsing shared task (Seddah et al., 2013), our system outperforms the top single parser system of Björkelund et al. (2013) on a range of languages. In addition, despite being designed for syntactic analysis, our system also achieves stateof-the-art numbers on the structural sentiment task of Socher et al. (2013). 
Finally, we show that, in both syntactic parsing and sentiment analysis, many broad linguistic trends can be captured via surface features.", "title": "" }, { "docid": "neg:1840215_5", "text": "The introduction of wearable video cameras (e.g., GoPro) in the consumer market has promoted video life-logging, motivating users to generate large amounts of video data. This increasing flow of first-person video has led to a growing need for automatic video summarization adapted to the characteristics and applications of egocentric video. With this paper, we provide the first comprehensive survey of the techniques used specifically to summarize egocentric videos. We present a framework for first-person view summarization and compare the segmentation methods and selection algorithms used by the related work in the literature. Next, we describe the existing egocentric video datasets suitable for summarization and, then, the various evaluation methods. Finally, we analyze the challenges and opportunities in the field and propose new lines of research.", "title": "" }, { "docid": "neg:1840215_6", "text": "Recently, vision-based Advanced Driver Assist Systems have gained broad interest. In this work, we investigate free-space detection, for which we propose to employ a Fully Convolutional Network (FCN). We show that this FCN can be trained in a self-supervised manner and achieve similar results compared to training on manually annotated data, thereby reducing the need for large manually annotated training sets. To this end, our self-supervised training relies on a stereo-vision disparity system, to automatically generate (weak) training labels for the color-based FCN. Additionally, our self-supervised training facilitates online training of the FCN instead of offline. Consequently, given that the applied FCN is relatively small, the free-space analysis becomes highly adaptive to any traffic scene that the vehicle encounters. We have validated our algorithm using publicly available data and on a new challenging benchmark dataset that is released with this paper. Experiments show that the online training boosts performance with 5% when compared to offline training, both for Fmax and AP .", "title": "" }, { "docid": "neg:1840215_7", "text": "This paper presents the results of a systematic review of existing literature on the integration of agile software development with user-centered design approaches. It shows that a common process model underlies such approaches and discusses which artifacts are used to support the collaboration between designers and developers.", "title": "" }, { "docid": "neg:1840215_8", "text": "A novel method of map matching using the Global Positioning System (GPS) has been developed for civilian use, which uses digital mapping data to infer the <100 metres systematic position errors which result largely from “selective availability” (S/A) imposed by the U.S. military. The system tracks a vehicle on all possible roads (road centre-lines) in a computed error region, then uses a method of rapidly detecting inappropriate road centre-lines from the set of all those possible. This is called the Road Reduction Filter (RRF) algorithm. Point positioning is computed using C/A code pseudorange measurements direct from a GPS receiver. The least squares estimation is performed in the software developed for the experiment described in this paper. 
Virtual differential GPS (VDGPS) corrections are computed and used from a vehicle’s previous positions, thus providing an autonomous alternative to DGPS for in-car navigation and fleet management. Height aiding is used to augment the solution and reduce the number of satellites required for a position solution. Ordnance Survey (OS) digital map data was used for the experiment, i.e. OSCAR 1m resolution road centre-line geometry and Land Form PANORAMA 1:50,000, 50m-grid digital terrain model (DTM). Testing of the algorithm is reported and results are analysed. Vehicle positions provided by RRF are compared with the “true” position determined using high precision (cm) GPS carrier phase techniques. It is shown that height aiding using a DTM and the RRF significantly improve the accuracy of position provided by inexpensive single frequency GPS receivers. INTRODUCTION The accurate location of a vehicle on a highway network model is fundamental to any in-car-navigation system, personal navigation assistant, fleet management system, National Mayday System (Carstensen, 1998) and many other applications that provide a current vehicle location, a digital map and perhaps directions or route guidance. A great", "title": "" }, { "docid": "neg:1840215_9", "text": "The design of the Smart Grid requires solving a complex problem of combined sensing, communications and control and, thus, the problem of choosing a networking technology cannot be addressed without also taking into consideration requirements related to sensor networking and distributed control. These requirements are today still somewhat undefined so that it is not possible yet to give quantitative guidelines on how to choose one communication technology over the other. In this paper, we make a first qualitative attempt to better understand the role that Power Line Communications (PLCs) can have in the Smart Grid. Furthermore, we here report recent results on the electrical and topological properties of the power distribution network. The topological characterization of the power grid is not only important because it allows us to model the grid as an information source, but also because the grid becomes the actual physical information delivery infrastructure when PLCs are used.", "title": "" }, { "docid": "neg:1840215_10", "text": "In this research work a neural network based technique to be applied on condition monitoring and diagnosis of rotating machines equipped with hydrostatic self levitating bearing system is presented. Based on fluid measured data, such pressures and temperature, vibration analysis based diagnosis is being carried out by determining the vibration characteristics of the rotating machines on the basis of signal processing tasks. Required signals are achieved by conversion of measured data (fluid temperature and pressures) into virtual data (vibration magnitudes) by means of neural network functional approximation techniques.", "title": "" }, { "docid": "neg:1840215_11", "text": "One of the common modalities for observing mental activity is electroencephalogram (EEG) signals. However, EEG recording is highly susceptible to various sources of noise and to inter subject differences. In order to solve these problems we present a deep recurrent neural network (RNN) architecture to learn robust features and predict the levels of cognitive load from EEG recordings. Using a deep learning approach, we first transform the EEG time series into a sequence of multispectral images which carries spatial information. 
Next, we train our recurrent hybrid network to learn robust representations from the sequence of frames. The proposed approach preserves spectral, spatial and temporal structures and extracts features which are less sensitive to variations along each dimension. Our results demonstrate cognitive memory load prediction across four different levels with an overall accuracy of 92.5% during the memory task execution and reduce classification error to 7.61% in comparison to other state-of-art techniques.", "title": "" }, { "docid": "neg:1840215_12", "text": "In this work, we provide the first construction of Attribute-Based Encryption (ABE) for general circuits. Our construction is based on the existence of multilinear maps. We prove selective security of our scheme in the standard model under the natural multilinear generalization of the BDDH assumption. Our scheme achieves both Key-Policy and Ciphertext-Policy variants of ABE. Our scheme and its proof of security directly translate to the recent multilinear map framework of Garg, Gentry, and Halevi.", "title": "" }, { "docid": "neg:1840215_13", "text": "AIM\nThe purpose of the current study was to examine the effect of Astaxanthin (Asx) supplementation on muscle enzymes as indirect markers of muscle damage, oxidative stress markers and antioxidant response in elite young soccer players.\n\n\nMETHODS\nThirty-two male elite soccer players were randomly assigned in a double-blind fashion to Asx and placebo (P) group. After the 90 days of supplementation, the athletes performed a 2 hour acute exercise bout. Blood samples were obtained before and after 90 days of supplementation and after the exercise at the end of observational period for analysis of thiobarbituric acid-reacting substances (TBARS), advanced oxidation protein products (AOPP), superoxide anion (O2•¯), total antioxidative status (TAS), sulphydril groups (SH), superoxide-dismutase (SOD), serum creatine kinase (CK) and aspartate aminotransferase (AST).\n\n\nRESULTS\nTBARS and AOPP levels did not change throughout the study. Regular training significantly increased O2•¯ levels (main training effect, P<0.01). O2•¯ concentrations increased after the soccer exercise (main exercise effect, P<0.01), but these changes reached statistical significance only in the P group (exercise x supplementation effect, P<0.05). TAS levels decreased significantly post- exercise only in P group (P<0.01). Both Asx and P groups experienced increase in total SH groups content (by 21% and 9%, respectively) and supplementation effect was marginally significant (P=0.08). Basal SOD activity significantly decreased both in P and in Asx group by the end of the study (main training effect, P<0.01). All participants showed a significant decrease in basal CK and AST activities after 90 days (main training effect, P<0.01 and P<0.001, respectively). CK and AST activities in serum significantly increased as result of soccer exercise (main exercise effect, P<0.001 and P<0.01, respectively). Postexercise CK and AST levels were significantly lower in Asx group compared to P group (P<0.05)\n\n\nCONCLUSION\nThe results of the present study suggest that soccer training and soccer exercise are associated with excessive production of free radicals and oxidative stress, which might diminish antioxidant system efficiency. 
Supplementation with Asx could prevent exercise induced free radical production and depletion of non-enzymatic antioxidant defense in young soccer players.", "title": "" }, { "docid": "neg:1840215_14", "text": "Brain computer interface (BCI) is an assistive technology, which decodes neurophysiological signals generated by the human brain and translates them into control signals to control external devices, e.g., wheelchairs. One problem challenging noninvasive BCI technologies is the limited control dimensions from decoding movements of, mainly, large body parts, e.g., upper and lower limbs. It has been reported that complicated dexterous functions, i.e., finger movements, can be decoded in electrocorticography (ECoG) signals, while it remains unclear whether noninvasive electroencephalography (EEG) signals also have sufficient information to decode the same type of movements. Phenomena of broadband power increase and low-frequency-band power decrease were observed in EEG in the present study, when EEG power spectra were decomposed by a principal component analysis (PCA). These movement-related spectral structures and their changes caused by finger movements in EEG are consistent with observations in previous ECoG study, as well as the results from ECoG data in the present study. The average decoding accuracy of 77.11% over all subjects was obtained in classifying each pair of fingers from one hand using movement-related spectral changes as features to be decoded using a support vector machine (SVM) classifier. The average decoding accuracy in three epilepsy patients using ECoG data was 91.28% with the similarly obtained features and same classifier. Both decoding accuracies of EEG and ECoG are significantly higher than the empirical guessing level (51.26%) in all subjects (p<0.05). The present study suggests the similar movement-related spectral changes in EEG as in ECoG, and demonstrates the feasibility of discriminating finger movements from one hand using EEG. These findings are promising to facilitate the development of BCIs with rich control signals using noninvasive technologies.", "title": "" }, { "docid": "neg:1840215_15", "text": "Schelling (1969, 1971a,b, 1978) considered a simple proximity model of segregation where individual agents only care about the types of people living in their own local geographical neighborhood, the spatial structure being represented by oneor two-dimensional lattices. In this paper, we argue that segregation might occur not only in the geographical space, but also in social environments. Furthermore, recent empirical studies have documented that social interaction structures are well-described by small-world networks. We generalize Schelling’s model by allowing agents to interact in small-world networks instead of regular lattices. We study two alternative dynamic models where agents can decide to move either arbitrarily far away (global model) or are bound to choose an alternative location in their social neighborhood (local model). Our main result is that the system attains levels of segregation that are in line with those reached in the lattice-based spatial proximity model. Thus, Schelling’s original results seem to be robust to the structural properties of the network.", "title": "" }, { "docid": "neg:1840215_16", "text": "In this paper we address the problem of multivariate outlier detection using the (unsupervised) self-organizing map (SOM) algorithm introduced by Kohonen. 
We examine a number of techniques, based on summary statistics and graphics derived from the trained SOM, and conclude that they work well in cooperation with each other. Useful tools include the median interneuron distance matrix and the projection of the trained map (via Sammon’s mapping). SOM quantization errors provide an important complementary source of information for certain type of outlying behavior. Empirical results are reported on both artificial and real data.", "title": "" }, { "docid": "neg:1840215_17", "text": "Current approaches to conservation and natural-resource management often focus on single objectives, resulting in many unintended consequences. These outcomes often affect society through unaccounted-for ecosystem services. A major challenge in moving to a more ecosystem-based approach to management that would avoid such societal damages is the creation of practical tools that bring a scientifically sound, production function-based approach to natural-resource decision making. A new set of computer-based models is presented, the Integrated Valuation of Ecosystem Services and Tradeoffs tool (InVEST) that has been designed to inform such decisions. Several of the key features of these models are discussed, including the ability to visualize relationships among multiple ecosystem services and biodiversity, the ability to focus on ecosystem services rather than biophysical processes, the ability to project service levels and values in space, sensitivity to manager-designed scenarios, and flexibility to deal with data and knowledge limitations. Sample outputs of InVEST are shown for two case applications; the Willamette Basin in Oregon and the Amazon Basin. Future challenges relating to the incorporation of social data, the projection of social distributional effects, and the design of effective policy mechanisms are discussed.", "title": "" }, { "docid": "neg:1840215_18", "text": "The problem of planning safe tra-jectories for computer controlled manipulators with two movable links and multiple degrees of freedom is analyzed, and a solution to the problem proposed. The key features of the solution are: 1. the identification of trajectory primitives and a hierarchy of abstraction spaces that permit simple manip-ulator models, 2. the characterization of empty space by approximating it with easily describable entities called charts-the approximation is dynamic and can be selective, 3. a scheme for planning motions close to obstacles that is computationally viable, and that suggests how proximity sensors might be used to do the planning, and 4. the use of hierarchical decomposition to reduce the complexity of the planning problem. 1. INTRODUCTION the 2D and 3D solution noted, it is easy to visualize the solution for the 3D manipulator. Section 2 of this paper presents an example, and Section 3 a statement and analysis of the problem. Sections 4 and 5 present the solution. Section 6 summarizes the key ideas in the solution and indicates areas for future work. 2. AN EXAMPLE This section describes an example (Figure 2.1) of the collision detection and avoidance problem for a two-dimensional manipulator. The example highlights features of the problem and its solution. 2.1 The Problem The manipulator has two links and three degrees of freedom. The larger link, called the boom, slides back and forth and can rotate about the origin. The smaller link, called the forearm, has a rotational degree of freedom about the tip of the boom. The tip of the forearm is called the hand. 
S and G are the initial and final configurations of the manipulator. Any real manipulator's links will have physical dimensions. The line segment representation of the link is an abstraction; the physical dimensions can be accounted for and how this is done is described later. The problem of planning safe trajectories for computer controlled manipulators with two movable links and multiple degrees of freedom is analyzed, and a solution to the problem is presented. The trajectory planning system is initialized with a description of the part of the environment that the manipulator is to maneuver in. When given the goal position and orientation of the hand, the system plans a complete trajectory that will safely maneuver the manipulator into the goal configuration. The executive system in charge of operating the hardware uses this trajectory to physically move the manipulator. …", "title": "" } ]
1840216
Sentiment Strength Prediction Using Auxiliary Features
[ { "docid": "pos:1840216_0", "text": "In this paper, we investigate the utility of linguistic features for detecting the sentiment of Twitter messages. We evaluate the usefulness of existing lexical resources as well as features that capture information about the informal and creative language used in microblogging. We take a supervised approach to the problem, but leverage existing hashtags in the Twitter data for building training data.", "title": "" }, { "docid": "pos:1840216_1", "text": "We investigate a technique to adapt unsupervised word embeddings to specific applications, when only small and noisy labeled datasets are available. Current methods use pre-trained embeddings to initialize model parameters, and then use the labeled data to tailor them for the intended task. However, this approach is prone to overfitting when the training is performed with scarce and noisy data. To overcome this issue, we use the supervised data to find an embedding subspace that fits the task complexity. All the word representations are adapted through a projection into this task-specific subspace, even if they do not occur on the labeled dataset. This approach was recently used in the SemEval 2015 Twitter sentiment analysis challenge, attaining state-of-the-art results. Here we show results improving those of the challenge, as well as additional experiments in a Twitter Part-Of-Speech tagging task.", "title": "" }, { "docid": "pos:1840216_2", "text": "This paper presents a new semi-supervised framework with convolutional neural networks (CNNs) for text categorization. Unlike the previous approaches that rely on word embeddings, our method learns embeddings of small text regions from unlabeled data for integration into a supervised CNN. The proposed scheme for embedding learning is based on the idea of two-view semi-supervised learning, which is intended to be useful for the task of interest even though the training is done on unlabeled data. Our models achieve better results than previous approaches on sentiment classification and topic classification tasks.", "title": "" } ]
[ { "docid": "neg:1840216_0", "text": "A discrete-time analysis of the orthogonal frequency division multiplex/offset QAM (OFDM/OQAM) multicarrier modulation technique, leading to a modulated transmultiplexer, is presented. The conditions of discrete orthogonality are established with respect to the polyphase components of the OFDM/OQAM prototype filter, which is assumed to be symmetrical and with arbitrary length. Fast implementation schemes of the OFDM/OQAM modulator and demodulator are provided, which are based on the inverse fast Fourier transform. Non-orthogonal prototypes create intersymbol and interchannel interferences (ISI and ICI) that, in the case of a distortion-free transmission, are expressed by a closed-form expression. A large set of design examples is presented for OFDM/OQAM systems with a number of subcarriers going from four up to 2048, which also allows a comparison between different approaches to get well-localized prototypes.", "title": "" }, { "docid": "neg:1840216_1", "text": "Defects in safety critical processes can lead to accidents that result in harm to people or damage to property. Therefore, it is important to find ways to detect and remove defects from such processes. Earlier work has shown that Fault Tree Analysis (FTA) [3] can be effective in detecting safety critical process defects. Unfortunately, it is difficult to build a comprehensive set of Fault Trees for a complex process, especially if this process is not completely welldefined. The Little-JIL process definition language has been shown to be effective for defining complex processes clearly and precisely at whatever level of granularity is desired [1]. In this work, we present an algorithm for generating Fault Trees from Little-JIL process definitions. We demonstrate the value of this work by showing how FTA can identify safety defects in the process from which the Fault Trees were automatically derived.", "title": "" }, { "docid": "neg:1840216_2", "text": "Social media hype has created a lot of speculation among educators on how these media can be used to support learning, but there have been rather few studies so far. Our explorative interview study contributes by critically exploring how campus students perceive using social media to support their studies and the perceived benefits and limitations compared with other means. Although the vast majority of the respondents use social media frequently, a “digital dissonance” can be noted, because few of them feel that they use such media to support their studies. The interviewees mainly put forth e-mail and instant messaging, which are used among students to ask questions, coordinate group work and share files. Some of them mention using Wikipedia and YouTube for retrieving content and Facebook to initiate contact with course peers. Students regard social media as one of three key means of the educational experience, alongside face-to-face meetings and using the learning management systems, and are mainly used for brief questions and answers, and to coordinate group work. In conclusion, we argue that teaching strategy plays a key role in supporting students in moving from using social media to support coordination and information retrieval to also using such media for collaborative learning, when appropriate.", "title": "" }, { "docid": "neg:1840216_3", "text": "Data warehouses are used to store large amounts of data. This data is often used for On-Line Analytical Processing (OLAP). Short response times are essential for on-line decision support. 
Common approaches to reach this goal in read-mostly environments are the precomputation of materialized views and the use of index structures. In this paper, a framework is presented to evaluate different index structures analytically depending on nine parameters for the use in a data warehouse environment. The framework is applied to four different index structures to evaluate which structure works best for range queries. We show that all parameters influence the performance. Additionally, we show why bitmap index structures use modern disks better than traditional tree structures and why bitmaps will supplant the tree based index structures in the future.", "title": "" }, { "docid": "neg:1840216_4", "text": "Multipath TCP (MPTCP) suffers from the degradation of goodput in the presence of diverse network conditions on the available subflows. The goodput can even be worse than that of one regular TCP, undermining the advantage gained by using multipath transfer. In this work, we propose a new multipath TCP protocol, namely NC-MPTCP, which introduces network coding (NC) to some but not all subflows traveling from source to destination. At the core of our scheme is the mixed use of regular and NC subflows. Thus, the regular subflows deliver original data while the NC subflows deliver linear combinations of the original data. The idea is to take advantage of the redundant NC data to compensate for the lost or delayed data in order to avoid receive buffer becoming full. We design a packet scheduling algorithm and a redundancy estimation algorithm to allocate data among different subflows in order to optimize the overall goodput. We also give a guideline on how to choose the NC subflows among the available subflows. We evaluate the performance of NC-MPTCP through a NS-3 network simulator. The experiments show that NC-MPTCP achieves higher goodput compared to MPTCP in the presence of different subflow qualities. And in the worst case, the performance of NC-MPTCP is close to that of one regular TCP.", "title": "" }, { "docid": "neg:1840216_5", "text": "Distributed word embeddings have shown superior performances in numerous Natural Language Processing (NLP) tasks. However, their performances vary significantly across different tasks, implying that the word embeddings learnt by those methods capture complementary aspects of lexical semantics. Therefore, we believe that it is important to combine the existing word embeddings to produce more accurate and complete meta-embeddings of words. We model the meta-embedding learning problem as an autoencoding problem, where we would like to learn a meta-embedding space that can accurately reconstruct all source embeddings simultaneously. Thereby, the meta-embedding space is enforced to capture complementary information in different source embeddings via a coherent common embedding space. We propose three flavours of autoencoded meta-embeddings motivated by different requirements that must be satisfied by a meta-embedding. Our experimental results on a series of benchmark evaluations show that the proposed autoencoded meta-embeddings outperform the existing state-of-the-art metaembeddings in multiple tasks.", "title": "" }, { "docid": "neg:1840216_6", "text": "Two issues are increasingly of interests in the scientific literature regarding virtual reality induced unwanted side effects: (a) the latent structure of the Simulator Sickness Questionnaire (SSQ), and (b) the overlap between anxiety symptoms and unwanted side effects. 
Study 1 was conducted with a sample of 517 participants. A confirmatory factor analysis clearly supported a two-factor model composed of nausea and oculomotor symptoms. To tease-out symptoms of anxiety from negative side effects of immersion, Study 2 was conducted with 47 participants who were stressed without being immersed in virtual reality. Five of the 16 side effects correlated significantly with anxiety. In a third study conducted with 72 participants, the post-immersion scores of the SSQ and the State Anxiety scale were subjected to a factor analysis to detect items side effects that would load on the anxiety factor. The results converged with those of Study 2 and revealed that general discomfort and difficulty concentrating loaded significantly on the anxiety factor. The overall results support the notion that side effects consists mostly of a nausea and an oculomotor latent structure and that (only) a few items may reflect anxiety more than unwanted negative side effects.", "title": "" }, { "docid": "neg:1840216_7", "text": "Malicious payload injection attacks have been a serious threat to software for decades. Unfortunately, protection against these attacks remains challenging due to the ever increasing diversity and sophistication of payload injection and triggering mechanisms used by adversaries. In this paper, we develop A2C, a system that provides general protection against payload injection attacks. A2C is based on the observation that payloads are highly fragile and thus any mutation would likely break their functionalities. Therefore, A2C mutates inputs from untrusted sources. Malicious payloads that reside in these inputs are hence mutated and broken. To assure that the program continues to function correctly when benign inputs are provided, A2C divides the state space into exploitable and post-exploitable sub-spaces, where the latter is much larger than the former, and decodes the mutated values only when they are transmitted from the former to the latter. A2C does not rely on any knowledge of malicious payloads or their injection and triggering mechanisms. Hence, its protection is general. We evaluate A2C with 30 realworld applications, including apache on a real-world work-load, and our results show that A2C effectively prevents a variety of payload injection attacks on these programs with reasonably low overhead (6.94%).", "title": "" }, { "docid": "neg:1840216_8", "text": "In an experiment on Airbnb, we find that applications from guests with distinctively African-American names are 16% less likely to be accepted relative to identical guests with distinctively White names. Discrimination occurs among landlords of all sizes, including small landlords sharing the property and larger landlords with multiple properties. It is most pronounced among hosts who have never had an African-American guest, suggesting only a subset of hosts discriminate. While rental markets have achieved significant reductions in discrimination in recent decades, our results suggest that Airbnb’s current design choices facilitate discrimination and raise the possibility of erasing some of these civil rights gains.", "title": "" }, { "docid": "neg:1840216_9", "text": "Data integration is a key issue for any integrated set of software tools where each tool has its own data structures (at least on the conceptual level), but where we have many interdependencies between these private data structures. 
A typical CASE environment, for instance, offers tools for the manipulation of requirements and software design documents and provides more or less sophisticated assistance for keeping these documents in a consistent state. Up to now almost all of these data consistency observing or preserving integration tools are hand-crafted due to the lack of generic implementation frameworks and the absence of adequate specification formalisms. Triple graph grammars, a proper superset of pair grammars, are intended to fill this gap and to support the specification of interdependencies between graph-like data structures on a very high level. Furthermore, they form a solid fundament of a new machinery for the production of batch-oriented as well as incrementally working data integration tools.", "title": "" }, { "docid": "neg:1840216_10", "text": "The Paper presents the outlines of the Field Programmable Gate Array (FPGA) implementation of Real Time speech enhancement by Spectral Subtraction of acoustic noise using Dynamic Moving Average Method. It describes an stand alone algorithm for Speech Enhancement and presents a architecture for the implementation. The traditional Spectral Subtraction method can only suppress stationary acoustic noise from speech by subtracting the spectral noise bias calculated during non-speech activity, while adding the unique option of dynamic moving averaging to it, it can now periodically upgrade the estimation and cope up with changes in noise level. Signal to Noise Ratio (SNR) has been tested at different noisy environment and the improvement in SNR certifies the effectiveness of the algorithm. The FPGA implementation presented in this paper, works on streaming speech signals and can be used in factories, bus terminals, Cellular Phones, or in outdoor conferences where a large number of people have gathered. The Table in the Experimental Result section consolidates our claim of optimum resouce usage.", "title": "" }, { "docid": "neg:1840216_11", "text": "Although mind wandering occupies a large proportion of our waking life, its neural basis and relation to ongoing behavior remain controversial. We report an fMRI study that used experience sampling to provide an online measure of mind wandering during a concurrent task. Analyses focused on the interval of time immediately preceding experience sampling probes demonstrate activation of default network regions during mind wandering, a finding consistent with theoretical accounts of default network functions. Activation in medial prefrontal default network regions was observed both in association with subjective self-reports of mind wandering and an independent behavioral measure (performance errors on the concurrent task). In addition to default network activation, mind wandering was associated with executive network recruitment, a finding predicted by behavioral theories of off-task thought and its relation to executive resources. Finally, neural recruitment in both default and executive network regions was strongest when subjects were unaware of their own mind wandering, suggesting that mind wandering is most pronounced when it lacks meta-awareness. The observed parallel recruitment of executive and default network regions--two brain systems that so far have been assumed to work in opposition--suggests that mind wandering may evoke a unique mental state that may allow otherwise opposing networks to work in cooperation. 
The ability of this study to reveal a number of crucial aspects of the neural recruitment associated with mind wandering underscores the value of combining subjective self-reports with online measures of brain function for advancing our understanding of the neurophenomenology of subjective experience.", "title": "" }, { "docid": "neg:1840216_12", "text": "DNA microarray is an efficient new technology that allows to analyze, at the same time, the expression level of millions of genes. The gene expression level indicates the synthesis of different messenger ribonucleic acid (mRNA) molecule in a cell. Using this gene expression level, it is possible to diagnose diseases, identify tumors, select the best treatment to resist illness, detect mutations among other processes. In order to achieve that purpose, several computational techniques such as pattern classification approaches can be applied. The classification problem consists in identifying different classes or groups associated with a particular disease (e.g., various types of cancer, in terms of the gene expression level). However, the enormous quantity of genes and the few samples available, make difficult the processes of learning and recognition of any classification technique. Artificial neural networks (ANN) are computational models in artificial intelligence used for classifying, predicting and approximating functions. Among the most popular ones, we could mention the multilayer perceptron (MLP), the radial basis function neural network (RBF) and support vector machine (SVM). The aim of this research is to propose a methodology for classifying DNA microarray. The proposed method performs a feature selection process based on a swarm intelligence algorithm to find a subset of genes that best describe a disease. After that, different ANN are trained using the subset of genes. Finally, four different datasets were used to validate the accuracy of the proposal and test the relevance of genes to correctly classify the samples of the disease. © 2015 Published by Elsevier B.V. 1. Introduction DNA microarray is an essential technique in molecular biology that allows, at the same time, to know the expression level of millions of genes. The DNA microarray consists in immobilizing known deoxyribonucleic acid (DNA) molecule layout in a glass container and then this information with other genetic information are hybridized. This process is the base to identify, classify or predict diseases such as different kind of cancer [1–4]. The process to obtain a DNA microarray is based on the combination of a healthy DNA reference with a testing DNA. Using fluorophores and a laser it is possible to generate a color spot matrix and obtain quantitative values that represent the expression level of each gene [5]. This expression level is like a signature useful to diagnose different diseases.
Furthermore, it can be used to identify genes that modify their genetic expression when a medical treatment is applied, identify tumors and genes that make regulation genetic networks, detect mutations among other applications [6]. Computational techniques combined with DNA microarrays can generate efficient results. The classification of DNA microarrays can be divided into three stages: gene finding, class discovery, and class prediction [7,8]. The DNA microarray samples have millions of genes and selecting the best genes set in such a way that get a trustworthy classification is a difficult task. Nonetheless, the evolutionary and bio-inspired algorithms, such as genetic algorithm (GA) [9], particle swarm optimization (PSO) [10], bacterial foraging algorithm (BFA) [11] and fish school search (FSS) [12], are excellent options to solve this problem. However, the performance of these algorithms depends of the fitness function, the parameters of the algorithm, the search space complexity, convergence, etc. In general, the performance of these algorithms is very similar among them, but depends of adjusting carefully their parameters. Based on that, the criterion that we used to select the algorithm for finding the set of most relevant genes was in term of the number of", "title": "" }, { "docid": "neg:1840216_13", "text": "With the increasing adoption of cloud computing, a growing number of users outsource their datasets to cloud. To preserve privacy, the datasets are usually encrypted before outsourcing. However, the common practice of encryption makes the effective utilization of the data difficult. For example, it is difficult to search the given keywords in encrypted datasets. Many schemes are proposed to make encrypted data searchable based on keywords. However, keyword-based search schemes ignore the semantic representation information of users’ retrieval, and cannot completely meet with users search intention. Therefore, how to design a content-based search scheme and make semantic search more effective and context-aware is a difficult challenge. In this paper, we propose ECSED, a novel semantic search scheme based on the concept hierarchy and the semantic relationship between concepts in the encrypted datasets. ECSED uses two cloud servers. One is used to store the outsourced datasets and return the ranked results to data users. The other one is used to compute the similarity scores between the documents and the query and send the scores to the first server. To further improve the search efficiency, we utilize a tree-based index structure to organize all the document index vectors. We employ the multi-keyword ranked search over encrypted cloud data as our basic frame to propose two secure schemes. The experiment results based on the real world datasets show that the scheme is more efficient than previous schemes. We also prove that our schemes are secure under the known ciphertext model and the known background model.", "title": "" }, { "docid": "neg:1840216_14", "text": "2007S. Robson Walton Chair in Accounting, University of Arkansas 2007-2014; 2015-2016 Accounting Department Chair, University of Arkansas 2014Distinguished Professor, University of Arkansas 2005-2014 Professor, University of Arkansas 2005-2008 Ralph L.
McQueen Chair in Accounting, University of Arkansas 2002-2005 Associate Professor, University of Kansas 1997-2002 Assistant Professor, University of Kansas", "title": "" }, { "docid": "neg:1840216_15", "text": "In this paper, we present the vision for an open, urban-scale wireless networking testbed, called CitySense, with the goal of supporting the development and evaluation of novel wireless systems that span an entire city. CitySense is currently under development and will consist of about 100 Linux-based embedded PCs outfitted with dual 802.11a/b/g radios and various sensors, mounted on buildings and streetlights across the city of Cambridge. CitySense takes its cue from citywide urban mesh networking projects, but will differ substantially in that nodes will be directly programmable by end users. The goal of CitySense is explicitly not to provide public Internet access, but rather to serve as a new kind of experimental apparatus for urban-scale distributed systems and networking research efforts. In this paper we motivate the need for CitySense and its potential to support a host of new research and application developments. We also outline the various engineering challenges of deploying such a testbed as well as the research challenges that we face when building and supporting such a system.", "title": "" }, { "docid": "neg:1840216_16", "text": "Humanitarian operations comprise a wide variety of activities. These activities differ in temporal and spatial scope, as well as objectives, target population and with respect to the delivered goods and services. Despite a notable variety of agendas of the humanitarian actors, the requirements on the supply chain and supporting logistics activities remain similar to a large extent. This motivates the development of a suitably generic reference model for supply chain processes in the context of humanitarian operations. Reference models have been used in commercial environments for a range of purposes, such as analysis of structural, functional, and behavioural properties of supply chains. Our process reference model aims to support humanitarian organisations when designing appropriately adapted supply chain processes to support their operations, visualising their processes, measuring their performance and thus, improving communication and coordination of organisations. A top-down approach is followed in which modular process elements are developed sequentially and relevant performance measures are identified. This contribution is conceptual in nature and intends to lay the foundation for future research.", "title": "" }, { "docid": "neg:1840216_17", "text": "Modern digital still cameras sample the color spectrum using a color filter array coated to the CCD array such that each pixel samples only one color channel. The result is a mosaic of color samples which is used to reconstruct the full color image by taking the information of the pixels’ neighborhood. This process is called demosaicking. While standard literature evaluates the performance of these reconstruction algorithms by comparison of a ground-truth image with a reconstructed Bayer pattern image in terms of grayscale comparison, this work gives an evaluation concept to asses the geometrical accuracy of the resulting color images. Only if no geometrical distortions are created during the demosaicking process, it is allowed to use such images for metric calculations, e.g. 
3D reconstruction or arbitrary metrical photogrammetric processing.", "title": "" }, { "docid": "neg:1840216_18", "text": "We present a new kind of network perimeter monitoring strategy, which focuses on recognizing the infection and coordination dialog that occurs during a successful malware infection. BotHunter is an application designed to track the two-way communication flows between internal assets and external entities, developing an evidence trail of data exchanges that match a state-based infection sequence model. BotHunter consists of a correlation engine that is driven by three malware-focused network packet sensors, each charged with detecting specific stages of the malware infection process, including inbound scanning, exploit usage, egg downloading, outbound bot coordination dialog, and outbound attack propagation. The BotHunter correlator then ties together the dialog trail of inbound intrusion alarms with those outbound communication patterns that are highly indicative of successful local host infection. When a sequence of evidence is found to match BotHunter’s infection dialog model, a consolidated report is produced to capture all the relevant events and event sources that played a role during the infection process. We refer to this analytical strategy of matching the dialog flows between internal assets and the broader Internet as dialog-based correlation, and contrast this strategy to other intrusion detection and alert correlation methods. We present our experimental results using BotHunter in both virtual and live testing environments, and discuss our Internet release of the BotHunter prototype. BotHunter is made available both for operational use and to help stimulate research in understanding the life cycle of malware infections.", "title": "" }, { "docid": "neg:1840216_19", "text": "Direct volume rendering (DVR) is of increasing diagnostic value in the analysis of data sets captured using the latest medical imaging modalities. The deployment of DVR in everyday clinical work, however, has so far been limited. One contributing factor is that current transfer function (TF) models can encode only a small fraction of the user's domain knowledge. In this paper, we use histograms of local neighborhoods to capture tissue characteristics. This allows domain knowledge on spatial relations in the data set to be integrated into the TF. As a first example, we introduce partial range histograms in an automatic tissue detection scheme and present its effectiveness in a clinical evaluation. We then use local histogram analysis to perform a classification where the tissue-type certainty is treated as a second TF dimension. The result is an enhanced rendering where tissues with overlapping intensity ranges can be discerned without requiring the user to explicitly define a complex, multidimensional TF", "title": "" } ]
1840217
Credit Card Fraud Detection using Big Data Analytics: Use of PSOAANN based One-Class Classification
[ { "docid": "pos:1840217_0", "text": "Particle Swarm Optimization is a popular heuristic search algorithm which is inspired by the social learning of birds or fishes. It is a swarm intelligence technique for optimization developed by Eberhart and Kennedy [1] in 1995. Inertia weight is an important parameter in PSO, which significantly affects the convergence and exploration-exploitation trade-off in PSO process. Since inception of Inertia Weight in PSO, a large number of variations of Inertia Weight strategy have been proposed. In order to propose one or more than one Inertia Weight strategies which are efficient than others, this paper studies 15 relatively recent and popular Inertia Weight strategies and compares their performance on 05 optimization test problems.", "title": "" }, { "docid": "pos:1840217_1", "text": "Nonlinear principal component analysis is a novel technique for multivariate data analysis, similar to the well-known method of principal component analysis. NLPCA, like PCA, is used to identify and remove correlations among problem variables as an aid to dimensionality reduction, visualization, and exploratory data analysis. While PCA identifies only linear correlations between variables, NLPCA uncovers both linear and nonlinear correlations, without restriction on the character of the nonlinearities present in the data. NLPCA operates by training a feedforward neural network to perform the identity mapping, where the network inputs are reproduced at the output layer. The network contains an internal “bottleneck” layer (containing fewer nodes than input or output layers), which forces the network to develop a compact representation of the input data, and two additional hidden layers. The NLPCA method is demonstrated using time-dependent, simulated batch reaction data. Results show that NLPCA successfully reduces dimensionality and produces a feature space map resembling the actual distribution of the underlying system parameters.", "title": "" } ]
[ { "docid": "neg:1840217_0", "text": "Physical activity (PA) during physical education is important for health purposes and for developing physical fitness and movement skills. To examine PA levels and how PA was influenced by environmental and instructor-related characteristics, we assessed children’s activity during 368 lessons taught by 105 physical education specialists in 42 randomly selected schools in Hong Kong. Trained observers used SOFIT in randomly selected classes, grades 4–6, during three climatic seasons. Results indicated children’s PA levels met the U.S. Healthy People 2010 objective of 50% engagement time and were higher than comparable U.S. populations. Multiple regression analyses revealed that temperature, teacher behavior, and two lesson characteristics (subject matter and mode of delivery) were significantly associated with the PA levels. Most of these factors are modifiable, and changes could improve the quantity and intensity of children’s PA.", "title": "" }, { "docid": "neg:1840217_1", "text": "Back Side Illumination (BSI) CMOS image sensors with two-layer photo detectors (2LPDs) have been fabricated and evaluated. The test pixel array has green pixels (2.2um x 2.2um) and a magenta pixel (2.2um x 4.4um). The green pixel has a single-layer photo detector (1LPD). The magenta pixel has a 2LPD and a vertical charge transfer (VCT) path to contact a back side photo detector. The 2LPD and the VCT were implemented by high-energy ion implantation from the circuit side. Measured spectral response curves from the 2LPDs fitted well with those estimated based on lightabsorption theory for Silicon detectors. Our measurement results show that the keys to realize the 2LPD in BSI are; (1) the reduction of crosstalk to the VCT from adjacent pixels and (2) controlling the backside photo detector thickness variance to reduce color signal variations.", "title": "" }, { "docid": "neg:1840217_2", "text": "The MOND limit is shown to follow from a requirement of space-time scale invariance of the equations of motion for nonrelativistic, purely gravitational systems; i.e., invariance of the equations of motion under (t, r) → (λt, λr) in the limit a 0 → ∞. It is suggested that this should replace the definition of the MOND limit based on the asymptotic behavior of a Newtonian-MOND interpolating function. In this way, the salient, deep-MOND results–asymptotically flat rotation curves, the mass-rotational-speed relation (baryonic Tully-Fisher relation), the Faber-Jackson relation, etc.–follow from a symmetry principle. For example, asymptotic flatness of rotation curves reflects the fact that radii change under scaling, while velocities do not. I then comment on the interpretation of the deep-MOND limit as one of \" zero mass \" : Rest masses, whose presence obstructs scaling symmetry, become negligible compared to the \" phantom \" , dynamical masses–those that some would attribute to dark matter. Unlike the former masses, the latter transform in a way that is consistent with the symmetry. Finally, I discuss the putative MOND-cosmology connection, in particular the possibility that MOND-especially the deep-MOND limit– is related to the asymptotic de Sitter geometry of our universe. 
I point out, in this connection, the possible relevance of a (classical) de Sitter-conformal-field-theory (dS/CFT) correspondence.", "title": "" }, { "docid": "neg:1840217_3", "text": "A system that allows museums to build and manage Virtual and Augmented Reality exhibitions based on 3D models of artifacts is presented. Dynamic content creation based on pre-designed visualization templates allows content designers to create virtual exhibitions very efficiently. Virtual Reality exhibitions can be presented both inside museums, e.g. on touch-screen displays installed inside galleries and, at the same time, on the Internet. Additionally, the presentation based on Augmented Reality technologies allows museum visitors to interact with the content in an intuitive and exciting manner.", "title": "" }, { "docid": "neg:1840217_4", "text": "Existing techniques for interactive rendering of deformable translucent objects can accurately compute diffuse but not directional subsurface scattering effects. It is currently a common practice to gain efficiency by storing maps of transmitted irradiance. This is, however, not efficient if we need to store elements of irradiance from specific directions. To include changes in subsurface scattering due to changes in the direction of the incident light, we instead sample incident radiance and store scattered radiosity. This enables us to accommodate not only the common distance-based analytical models for subsurface scattering but also directional models. In addition, our method enables easy extraction of virtual point lights for transporting emergent light to the rest of the scene. Our method requires neither preprocessing nor texture parameterization of the translucent objects. To build our maps of scattered radiosity, we progressively render the model from different directions using an importance sampling pattern based on the optical properties of the material. We obtain interactive frame rates, our subsurface scattering results are close to ground truth, and our technique is the first to include interactive transport of emergent light from deformable translucent objects.", "title": "" }, { "docid": "neg:1840217_5", "text": "Gender is one of the most common attributes used to describe an individual. It is used in multiple domains such as human computer interaction, marketing, security, and demographic reports. Research has been performed to automate the task of gender recognition in constrained environment using face images, however, limited attention has been given to gender classification in unconstrained scenarios. This work attempts to address the challenging problem of gender classification in multi-spectral low resolution face images. We propose a robust Class Representative Autoencoder model, termed as AutoGen for the same. The proposed model aims to minimize the intra-class variations while maximizing the inter-class variations for the learned feature representations. Results on visible as well as near infrared spectrum data for different resolutions and multiple databases depict the efficacy of the proposed model. Comparative results with existing approaches and two commercial off-the-shelf systems further motivate the use of class representative features for classification.", "title": "" }, { "docid": "neg:1840217_6", "text": "A limiting factor for the application of IDA methods in many domains is the incompleteness of data repositories. Many records have fields that are not filled in, especially, when data entry is manual. 
In addition, a significant fraction of the entries can be erroneous and there may be no alternative but to discard these records. But every cell in a database is not an independent datum. Statistical relationships will constrain and, often determine, missing values. Data imputation, the filling in of missing values for partially missing data, can thus be an invaluable first step in many IDA projects. New imputation methods that can handle the large-scale problems and large-scale sparsity of industrial databases are needed. To illustrate the incomplete database problem, we analyze one database with instrumentation maintenance and test records for an industrial process. Despite regulatory requirements for process data collection, this database is less than 50% complete. Next, we discuss possible solutions to the missing data problem. Several approaches to imputation are noted and classified into two categories: data-driven and model-based. We then describe two machine-learning-based approaches that we have worked with. These build upon well-known algorithms: AutoClass and C4.5. Several experiments are designed, all using the maintenance database as a common test-bed but with various data splits and algorithmic variations. Results are generally positive with up to 80% accuracies of imputation. We conclude the paper by outlining some considerations in selecting imputation methods, and by discussing applications of data imputation for intelligent data analysis.", "title": "" }, { "docid": "neg:1840217_7", "text": "Autism spectrum disorder (ASD) is a wide-ranging collection of developmental diseases with varying symptoms and degrees of disability. Currently, ASD is diagnosed mainly with psychometric tools, often unable to provide an early and reliable diagnosis. Recently, biochemical methods are being explored as a means to meet the latter need. For example, an increased predisposition to ASD has been associated with abnormalities of metabolites in folate-dependent one carbon metabolism (FOCM) and transsulfuration (TS). Multiple metabolites in the FOCM/TS pathways have been measured, and statistical analysis tools employed to identify certain metabolites that are closely related to ASD. The prime difficulty in such biochemical studies comes from (i) inefficient determination of which metabolites are most important and (ii) understanding how these metabolites are collectively related to ASD. This paper presents a new method based on scores produced in Support Vector Machine (SVM) modeling combined with High Dimensional Model Representation (HDMR) sensitivity analysis. The new method effectively and efficiently identifies the key causative metabolites in FOCM/TS pathways, ranks their importance, and discovers their independent and correlative action patterns upon ASD. Such information is valuable not only for providing a foundation for a pathological interpretation but also for potentially providing an early, reliable diagnosis ideally leading to a subsequent comprehensive treatment of ASD. With only tens of SVM model runs, the new method can identify the combinations of the most important metabolites in the FOCM/TS pathways that lead to ASD. Previous efforts to find these metabolites required hundreds of thousands of model runs with the same data.", "title": "" }, { "docid": "neg:1840217_8", "text": "Measuring “how much the human is in the interaction” - the level of engagement - is instrumental in building effective interactive robots. 
Engagement, however, is a complex, multi-faceted cognitive mechanism that is only indirectly observable. This article formalizes with-me-ness as one of such indirect measures. With-me-ness, a concept borrowed from the field of Computer-Supported Collaborative Learning, measures in a well-defined way to what extent the human is with the robot over the course of an interactive task. As such, it is a meaningful precursor of engagement. We expose in this paper the full methodology, from real-time estimation of the human's focus of attention (relying on a novel, open-source, vision-based head pose estimator), to on-line computation of with-me-ness. We report as well on the experimental validation of this approach, using a naturalistic setup involving children during a complex robot-teaching task.", "title": "" }, { "docid": "neg:1840217_9", "text": "OBJECTIVE\nTo investigate a modulation of the N170 face-sensitive component related to the perception of other-race (OR) and same-race (SR) faces, as well as differences in face and non-face object processing, by combining different methods of event-related potential (ERP) signal analysis.\n\n\nMETHODS\nSixty-two channel ERPs were recorded in 12 Caucasian subjects presented with Caucasian and Asian faces along with non-face objects. Surface data were submitted to classical waveforms and ERP map topography analysis. Underlying brain sources were estimated with two inverse solutions (BESA and LORETA).\n\n\nRESULTS\nThe N170 face component was identical for both race faces. This component and its topography revealed a face specific pattern regardless of race. However, in this time period OR faces evoked significantly stronger medial occipital activity than SR faces. Moreover, in terms of maps, at around 170 ms face-specific activity significantly preceded non-face object activity by 25 ms. These ERP maps were followed by similar activation patterns across conditions around 190-300 ms, most likely reflecting the activation of visually derived semantic information.\n\n\nCONCLUSIONS\nThe N170 was not sensitive to the race of the faces. However, a possible pre-attentive process associated to the relatively stronger unfamiliarity for OR faces was found in medial occipital area. Moreover, our data provide further information on the time-course of face and non-face object processing.", "title": "" }, { "docid": "neg:1840217_10", "text": "Because of the precise temporal resolution of electrophysiological recordings, the event-related potential (ERP) technique has proven particularly valuable for testing theories of perception and attention. Here, I provide a brief tutorial on the ERP technique for consumers of such research and those considering the use of human electrophysiology in their own work. My discussion begins with the basics regarding what brain activity ERPs measure and why they are well suited to reveal critical aspects of perceptual processing, attentional selection, and cognition, which are unobservable with behavioral methods alone. 
I then review a number of important methodological issues and often-forgotten facts that should be considered when evaluating or planning ERP experiments.", "title": "" }, { "docid": "neg:1840217_11", "text": "The problem of achieving COnlUnCtlve goals has been central to domain-independent planning research, the nonhnear constraint-posting approach has been most successful Previous planners of this type have been comphcated, heurtstw, and ill-defined 1 have combmed and dtstdled the state of the art into a simple, precise, Implemented algorithm (TWEAK) which I have proved correct and complete 1 analyze previous work on domam-mdependent conlunctwe plannmg; tn retrospect tt becomes clear that all conluncttve planners, hnear and nonhnear, work the same way The efficiency and correctness of these planners depends on the traditional add/ delete-hst representation for actions, which drastically limits their usefulness I present theorems that suggest that efficient general purpose planning with more expressive action representations ts impossible, and suggest ways to avoid this problem", "title": "" }, { "docid": "neg:1840217_12", "text": "Two biological control agents, Bacillus subtilis AP-01 (Larminar(™)) and Trichoderma harzianum AP-001 (Trisan(™)) alone or/in combination were investigated in controlling three tobacco diseases, including bacterial wilt (Ralstonia solanacearum), damping-off (Pythium aphanidermatum), and frogeye leaf spot (Cercospora nicotiana). Tests were performed in greenhouse by soil sterilization prior to inoculation of the pathogens. Bacterial-wilt and damping off pathogens were drenched first and followed with the biological control agents and for comparison purposes, two chemical fungicides. But for frogeye leaf spot, which is an airborne fungus, a spraying procedure for every treatment including a chemical fungicide was applied instead of drenching. Results showed that neither B. subtilis AP-01 nor T harzianum AP-001 alone could control the bacterial wilt, but when combined, their controlling capabilities were as effective as a chemical treatment. These results were also similar for damping-off disease when used in combination. In addition, the combined B. subtilis AP-01 and T. harzianum AP-001 resulted in a good frogeye leaf spot control, which was not significantly different from the chemical treatment.", "title": "" }, { "docid": "neg:1840217_13", "text": "Today’s huge volumes of data, heterogeneous information and communication technologies, and borderless cyberinfrastructures create new challenges for security experts and law enforcement agencies investigating cybercrimes. The future of digital forensics is explored, with an emphasis on these challenges and the advancements needed to effectively protect modern societies and pursue cybercriminals.", "title": "" }, { "docid": "neg:1840217_14", "text": "The design of bioinspired systems for chemical sensing is an engaging line of research in machine olfaction. Developments in this line could increase the lifetime and sensitivity of artificial chemo-sensory systems. Such approach is based on the sensory systems known in live organisms, and the resulting developed artificial systems are targeted to reproduce the biological mechanisms to some extent. 
Sniffing behaviour, sampling odours actively, has been studied recently in neuroscience, and it has been suggested that the respiration frequency is an important parameter of the olfactory system, since the odour perception, especially in complex scenarios such as novel odourants exploration, depends on both the stimulus identity and the sampling method. In this work we propose a chemical sensing system based on an array of 16 metal-oxide gas sensors that we combined with an external mechanical ventilator to simulate the biological respiration cycle. The tested gas classes formed a relatively broad combination of two analytes, acetone and ethanol, in binary mixtures. Two sets of lowfrequency and high-frequency features were extracted from the acquired signals to show that the high-frequency features contain information related to the gas class. In addition, such information is available at early stages of the measurement, which could make the technique ∗Corresponding author. Email address: andrey.ziyatdinov@upc.edu (Andrey Ziyatdinov) Preprint submitted to Sensors and Actuators B: Chemical August 15, 2014 suitable in early detection scenarios. The full data set is made publicly available to the community.", "title": "" }, { "docid": "neg:1840217_15", "text": "The handshake gesture is an important part of the social etiquette in many cultures. It lies at the core of many human interactions, either in formal or informal settings: exchanging greetings, offering congratulations, and finalizing a deal are all activities that typically either start or finish with a handshake. The automated detection of a handshake can enable wide range of pervasive computing scanarios; in particular, different types of information can be exchanged and processed among the handshaking persons, depending on the physical/logical contexts where they are located and on their mutual acquaintance. This paper proposes a novel handshake detection system based on body sensor networks consisting of a resource-constrained wrist-wearable sensor node and a more capable base station. The system uses an effective collaboration technique among body sensor networks of the handshaking persons which minimizes errors associated with the application of classification algorithms and improves the overall accuracy in terms of the number of false positives and false negatives.", "title": "" }, { "docid": "neg:1840217_16", "text": "The main challenge of online multi-object tracking is to reliably associate object trajectories with detections in each video frame based on their tracking history. In this work, we propose the Recurrent Autoregressive Network (RAN), a temporal generative modeling framework to characterize the appearance and motion dynamics of multiple objects over time. The RAN couples an external memory and an internal memory. The external memory explicitly stores previous inputs of each trajectory in a time window, while the internal memory learns to summarize long-term tracking history and associate detections by processing the external memory. We conduct experiments on the MOT 2015 and 2016 datasets to demonstrate the robustness of our tracking method in highly crowded and occluded scenes. Our method achieves top-ranked results on the two benchmarks.", "title": "" }, { "docid": "neg:1840217_17", "text": "Corporates are entering the brave new world of the internet and digitization without much regard for the fine print of a growing regulation regime. 
More traditional outsourcing arrangements are already falling foul of the regulators as rules and supervision intensifies. Furthermore, ‘shadow IT’ is proliferating as the attractions of SaaS, mobile, cloud services, social media, and endless new ‘apps’ drive usage outside corporate IT. Initial cost-benefit analyses of the Cloud make such arrangements look immediately attractive but losing control of architecture, security, applications and deployment can have far reaching and damaging regulatory consequences. From research in financial services, this paper details the increasing body of regulations, their inherent risks for businesses and how the dangers can be pre-empted and managed. We then delineate a model for managing these risks specifically focused on investigating, strategizing and governing outsourcing arrangements and related regulatory obligations.", "title": "" }, { "docid": "neg:1840217_18", "text": "Approximate nonnegative matrix factorization is an emerging technique with a wide spectrum of potential applications in data analysis. Currently, the most-used algorithms for this problem are those proposed by Lee and Seung [7]. In this paper we present a variation of one of the Lee-Seung algorithms with a notably improved performance. We also show that algorithms of this type do not necessarily converge to local minima.", "title": "" }, { "docid": "neg:1840217_19", "text": "Orofacial clefts are common birth defects and can occur as isolated, nonsyndromic events or as part of Mendelian syndromes. There is substantial phenotypic diversity in individuals with these birth defects and their family members: from subclinical phenotypes to associated syndromic features that is mirrored by the many genes that contribute to the etiology of these disorders. Identification of these genes and loci has been the result of decades of research using multiple genetic approaches. Significant progress has been made recently due to advances in sequencing and genotyping technologies, primarily through the use of whole exome sequencing and genome-wide association studies. Future progress will hinge on identifying functional variants, investigation of pathway and other interactions, and inclusion of phenotypic and ethnic diversity in studies.", "title": "" } ]
1840218
Retrieving Similar Styles to Parse Clothing
[ { "docid": "pos:1840218_0", "text": "Researches have verified that clothing provides information about the identity of the individual. To extract features from the clothing, the clothing region first must be localized or segmented in the image. At the same time, given multiple images of the same person wearing the same clothing, we expect to improve the effectiveness of clothing segmentation. Therefore, the identity recognition and clothing segmentation problems are inter-twined; a good solution for one aides in the solution for the other. We build on this idea by analyzing the mutual information between pixel locations near the face and the identity of the person to learn a global clothing mask. We segment the clothing region in each image using graph cuts based on a clothing model learned from one or multiple images believed to be the same person wearing the same clothing. We use facial features and clothing features to recognize individuals in other images. The results show that clothing segmentation provides a significant improvement in recognition accuracy for large image collections, and useful clothing masks are simultaneously produced. A further significant contribution is that we introduce a publicly available consumer image collection where each individual is identified. We hope this dataset allows the vision community to more easily compare results for tasks related to recognizing people in consumer image collections.", "title": "" } ]
[ { "docid": "neg:1840218_0", "text": "Considering the increasingly complex media landscape and diversity of use, it is important to establish a common ground for identifying and describing the variety of ways in which people use new media technologies. Characterising the nature of media-user behaviour and distinctive user types is challenging and the literature offers little guidance in this regard. Hence, the present research aims to classify diverse user behaviours into meaningful categories of user types, according to the frequency of use, variety of use and content preferences. To reach a common framework, a review of the relevant research was conducted. An overview and meta-analysis of the literature (22 studies) regarding user typology was established and analysed with reference to (1) method, (2) theory, (3) media platform, (4) context and year, and (5) user types. Based on this examination, a unified Media-User Typology (MUT) is suggested. This initial MUT goes beyond the current research literature, by unifying all the existing and various user type models. A common MUT model can help the Human–Computer Interaction community to better understand both the typical users and the diversification of media-usage patterns more qualitatively. Developers of media systems can match the users’ preferences more precisely based on an MUT, in addition to identifying the target groups in the developing process. Finally, an MUT will allow a more nuanced approach when investigating the association between media usage and social implications such as the digital divide. 2010 Elsevier Ltd. All rights reserved. 1 Difficulties in understanding media-usage behaviour have also arisen because of", "title": "" }, { "docid": "neg:1840218_1", "text": "Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function (“avoiding side effects” and “avoiding reward hacking”), an objective function that is too expensive to evaluate frequently (“scalable supervision”), or undesirable behavior during the learning process (“safe exploration” and “distributional shift”). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking", "title": "" }, { "docid": "neg:1840218_2", "text": "Fingerprinting of mobile device apps is currently an attractive and affordable data gathering technique. Even in the presence of encryption, it is possible to fingerprint a user's app by means of packet-level traffic analysis in which side-channel information is used to determine specific patterns in packets. Knowing the specific apps utilized by smartphone users is a serious privacy concern. In this study, we address the issue of defending against statistical traffic analysis of Android apps. First, we present a methodology for the identification of mobile apps using traffic analysis. 
Further, we propose confusion models in which we obfuscate packet lengths information leaked by mobile traffic, and we shape one class of app traffic to obscure its class features with minimum overhead. We assess the efficiency of our model using different apps and against a recently published approach for mobile apps classification. We focus on making it hard for intruders to differentiate between the altered app traffic and the actual one using statistical analysis. Additionally, we study the tradeoff between shaping cost and traffic privacy protection, specifically the needed overhead and realization feasibility. We were able to attain 91.1% classification accuracy. Using our obfuscation technique, we were able to reduce this accuracy to 15.78%.", "title": "" }, { "docid": "neg:1840218_3", "text": "We propose noWorkflow, a tool that transparently captures provenance of scripts and enables reproducibility. Unlike existing approaches, noWorkflow is non-intrusive and does not require users to change the way they work – users need not wrap their experiments in scientific workflow systems, install version control systems, or instrument their scripts. The tool leverages Software Engineering techniques, such as abstract syntax tree analysis, reflection, and profiling, to collect different types of provenance, including detailed information about the underlying libraries. We describe how noWorkflow captures multiple kinds of provenance and the different classes of analyses it supports: graph-based visualization; differencing over provenance trails; and inference queries.", "title": "" }, { "docid": "neg:1840218_4", "text": "This paper presents a novel method for discovering causal relations between events encoded in text. In order to determine if two events from the same sentence are in a causal relation or not, we first build a graph representation of the sentence that encodes lexical, syntactic, and semantic information. In a second step, we automatically extract multiple graph patterns (or subgraphs) from such graph representations and sort them according to their relevance in determining the causality between two events from the same sentence. Finally, in order to decide if these events are causal or not, we train a binary classifier based on what graph patterns can be mapped to the graph representation associated with the two events. Our experimental results show that capturing the feature dependencies of causal event relations using a graph representation significantly outperforms an existing method that uses a flat representation of features.", "title": "" }, { "docid": "neg:1840218_5", "text": "The research work deals with an approach to perform texture and morphological based retrieval on a corpus of food grain images. The work has been carried out using Image Warping and Image analysis approach. The method has been employed to normalize food grain images and hence eliminating the effects of orientation using image warping technique with proper scaling. The images have been properly enhanced to reduce noise and blurring in image. Finally image has segmented applying proper segmentation methods so that edges may be detected effectively and thus rectification of the image has been done. The approach has been tested on sufficient number of food grain images of rice based on intensity, position and orientation. 
A digital image analysis algorithm based on color, morphological and textural features was developed to identify the six varieties rice seeds which are widely planted in Chhattisgarh region. Nine color and nine morphological and textural features were used for discriminant analysis. A back propagation neural network-based classifier was developed to identify the unknown grain types. The color and textural features were presented to the neural network for training purposes. The trained network was then used to identify the unknown grain types.", "title": "" }, { "docid": "neg:1840218_6", "text": "Bitcoin is a popular cryptocurrency that records all transactions in a distributed append-only public ledger called blockchain. The security of Bitcoin heavily relies on the incentive-compatible proof-of-work (PoW) based distributed consensus protocol, which is run by the network nodes called miners. In exchange for the incentive, the miners are expected to maintain the blockchain honestly. Since its launch in 2009, Bitcoin economy has grown at an enormous rate, and it is now worth about 150 billions of dollars. This exponential growth in the market value of bitcoins motivate adversaries to exploit weaknesses for profit, and researchers to discover new vulnerabilities in the system, propose countermeasures, and predict upcoming trends. In this paper, we present a systematic survey that covers the security and privacy aspects of Bitcoin. We start by giving an overview of the Bitcoin system and its major components along with their functionality and interactions within the system. We review the existing vulnerabilities in Bitcoin and its major underlying technologies such as blockchain and PoW-based consensus protocol. These vulnerabilities lead to the execution of various security threats to the standard functionality of Bitcoin. We then investigate the feasibility and robustness of the state-of-the-art security solutions. Additionally, we discuss the current anonymity considerations in Bitcoin and the privacy-related threats to Bitcoin users along with the analysis of the existing privacy-preserving solutions. Finally, we summarize the critical open challenges, and we suggest directions for future research towards provisioning stringent security and privacy solutions for Bitcoin.", "title": "" }, { "docid": "neg:1840218_7", "text": "In this paper, we propose a new algorithm to compute intrinsic means of organ shapes from 3D medical images. More specifically, we explore the feasibility of Karcher means in the framework of the large deformations by diffeomorphisms (LDDMM). This setting preserves the topology of the averaged shapes and has interesting properties to quantitatively describe their anatomical variability. Estimating Karcher means requires to perform multiple registrations between the averaged template image and the set of reference 3D images. Here, we use a recent algorithm based on an optimal control method to satisfy the geodesicity of the deformations at any step of each registration. We also combine this algorithm with organ specific metrics. We demonstrate the efficiency of our methodology with experimental results on different groups of anatomical 3D images. We also extensively discuss the convergence of our method and the bias due to the initial guess. A direct perspective of this work is the computation of 3D+time atlases.", "title": "" }, { "docid": "neg:1840218_8", "text": "We suggest analyzing neural networks through the prism of space constraints. 
We observe that most training algorithms applied in practice use bounded memory, which enables us to use a new notion introduced in the study of spacetime tradeoffs that we call mixing complexity. This notion was devised in order to measure the (in)ability to learn using a bounded-memory algorithm. In this paper we describe how we use mixing complexity to obtain new results on what can and cannot be learned using neural networks.", "title": "" }, { "docid": "neg:1840218_9", "text": "Extracting facial feature is a key step in facial expression recognition (FER). Inaccurate feature extraction very often results in erroneous categorizing of facial expressions. Especially in robotic application, environmental factors such as illumination variation may cause FER system to extract feature inaccurately. In this paper, we propose a robust facial feature point extraction method to recognize facial expression in various lighting conditions. Before extracting facial features, a face is localized and segmented from a digitized image frame. Face preprocessing stage consists of face normalization and feature region localization steps to extract facial features efficiently. As regions of interest corresponding to relevant features are determined, Gabor jets are applied based on Gabor wavelet transformation to extract the facial points. Gabor jets are more invariable and reliable than gray-level values, which suffer from ambiguity as well as illumination variation while representing local features. Each feature point can be matched by a phase-sensitivity similarity function in the relevant regions of interest. Finally, the feature values are evaluated from the geometric displacement of facial points. After tested using the AR face database and the database built in our lab, average facial expression recognition rates of 84.1% and 81.3% are obtained respectively.", "title": "" }, { "docid": "neg:1840218_10", "text": "Climate-smart agriculture is one of the techniques that maximizes agricultural outputs through proper management of inputs based on climatological conditions. Real-time weather monitoring system is an important tool to monitor the climatic conditions of a farm because many of the farms related problems can be solved by better understanding of the surrounding weather conditions. There are various designs of weather monitoring stations based on different technological modules. However, different monitoring technologies provide different data sets, thus creating vagueness in accuracy of the weather parameters measured. In this paper, a weather station was designed and deployed in an Edamame farm, and its meteorological data are compared with the commercial Davis Vantage Pro2 installed at the same farm. The results show that the lab-made weather monitoring system is equivalently efficient to measure various weather parameters. Therefore, the designed system welcomes low-income farmers to integrate it into their climate-smart farming practice.", "title": "" }, { "docid": "neg:1840218_11", "text": "This article addresses the problem of understanding mathematics described in natural language. Research in this area dates back to early 1960s. Several systems have so far been proposed to involve machines to solve mathematical problems of various domains like algebra, geometry, physics, mechanics, etc. This correspondence provides a state of the art technical review of these systems and approaches proposed by different research groups. 
A unified architecture that has been used in most of these approaches is identified and differences among the systems are highlighted. Significant achievements of each method are pointed out. Major strengths and weaknesses of the approaches are also discussed. Finally, present efforts and future trends in this research area are presented.", "title": "" }, { "docid": "neg:1840218_12", "text": "This survey paper categorises, compares, and summarises from almost all published technical and review articles in automated fraud detection within the last 10 years. It defines the professional fraudster, formalises the main types and subtypes of known fraud, and presents the nature of data evidence collected within affected industries. Within the business context of mining the data to achieve higher cost savings, this research presents methods and techniques together with their problems. Compared to all related reviews on fraud detection, this survey covers much more technical articles and is the only one, to the best of our knowledge, which proposes alternative data and solutions from related domains.", "title": "" }, { "docid": "neg:1840218_13", "text": "The human visual system is the most complex pattern recognition device known. In ways that are yet to be fully understood, the visual cortex arrives at a simple and unambiguous interpretation of data from the retinal image that is useful for the decisions and actions of everyday life. Recent advances in Bayesian models of computer vision and in the measurement and modeling of natural image statistics are providing the tools to test and constrain theories of human object perception. In turn, these theories are having an impact on the interpretation of cortical function.", "title": "" }, { "docid": "neg:1840218_14", "text": "OBJECTIVE\nThe incidence of neuroleptic malignant syndrome (NMS) is not known, but the frequency of its occurrence with conventional antipsychotic agents has been reported to vary from 0.02% to 2.44%.\n\n\nDATA SOURCES\nMEDLINE search conducted in January 2003 and review of references within the retrieved articles.\n\n\nDATA SYNTHESIS\nOur MEDLINE research yielded 68 cases (21 females and 47 males) of NMS associated with atypical antipsychotic drugs (clozapine, N = 21; risperidone, N = 23; olanzapine, N = 19; and quetiapine, N = 5). The fact that 21 cases of NMS with clozapine were found indicates that low occurrence of extrapyramidal symptoms (EPS) and low EPS-inducing potential do not prevent the occurrence of NMS and D(2) dopamine receptor blocking potential does not have direct correlation with the occurrence of NMS. One of the cardinal features of NMS is an increasing manifestation of EPS, and the conventional antipsychotic drugs are known to produce EPS in 95% or more of NMS cases. With atypical antipsychotic drugs, the incidence of EPS during NMS is of a similar magnitude.\n\n\nCONCLUSIONS\nFor NMS associated with atypical antipsychotic drugs, the mortality rate was lower than that with conventional antipsychotic drugs. However, the mortality rate may simply be a reflection of physicians' awareness and ensuing early treatment.", "title": "" }, { "docid": "neg:1840218_15", "text": "Type 2 diabetes is rapidly increasing in prevalence worldwide, in concert with epidemics of obesity and sedentary behavior that are themselves tracking economic development. Within this broad pattern, susceptibility to diabetes varies substantially in association with ethnicity and nutritional exposures through the life-course. 
An evolutionary perspective may help understand why humans are so prone to this condition in modern environments, and why this risk is unequally distributed. A simple conceptual model treats diabetes risk as the function of two interacting traits, namely ‘metabolic capacity’ which promotes glucose homeostasis, and ‘metabolic load’ which challenges glucose homoeostasis. This conceptual model helps understand how long-term and more recent trends in body composition can be considered to have shaped variability in diabetes risk. Hominin evolution appears to have continued a broader trend evident in primates, towards lower levels of muscularity. In addition, hominins developed higher levels of body fatness, especially in females in relative terms. These traits most likely evolved as part of a broader reorganization of human life history traits in response to growing levels of ecological instability, enabling both survival during tough periods and reproduction during bountiful periods. Since the emergence of Homo sapiens, populations have diverged in body composition in association with geographical setting and local ecological stresses. These long-term trends in both metabolic capacity and adiposity help explain the overall susceptibility of humans to diabetes in ways that are similar to, and exacerbated by, the effects of nutritional exposures during the life-course.", "title": "" }, { "docid": "neg:1840218_16", "text": "Scientific computing often requires the availability of a massive number of computers for performing large scale experiments. Traditionally, these needs have been addressed by using high-performance computing solutions and installed facilities such as clusters and super computers, which are difficult to setup, maintain, and operate. Cloud computing provides scientists with a completely new model of utilizing the computing infrastructure. Compute resources, storage resources, as well as applications, can be dynamically provisioned (and integrated within the existing infrastructure) on a pay per use basis. These resources can be released when they are no more needed. Such services are often offered within the context of a Service Level Agreement (SLA), which ensure the desired Quality of Service (QoS). Aneka, an enterprise Cloud computing solution, harnesses the power of compute resources by relying on private and public Clouds and delivers to users the desired QoS. Its flexible and service based infrastructure supports multiple programming paradigms that make Aneka address a variety of different scenarios: from finance applications to computational science. As examples of scientific computing in the Cloud, we present a preliminary case study on using Aneka for the classification of gene expression data and the execution of fMRI brain imaging workflow.", "title": "" }, { "docid": "neg:1840218_17", "text": "We describe a framework for cooperative control of a group of nonholonomic mobile robots that allows us to build complex systems from simple controllers and estimators. The resultant modular approach is attractive because of the potential for reusability. Our approach to composition also guarantees stability and convergence in a wide range of tasks. There are two key features in our approach: 1) a paradigm for switching between simple decentralized controllers that allows for changes in formation; 2) the use of information from a single type of sensor, an omnidirectional camera, for all our controllers. 
We describe estimators that abstract the sensory information at different levels, enabling both decentralized and centralized cooperative control. Our results include numerical simulations and experiments using a testbed consisting of three nonholonomic robots.", "title": "" }, { "docid": "neg:1840218_18", "text": "Recently, applying the novel data mining techniques for financial time-series forecasting has received much research attention. However, most researches are for the US and European markets, with only a few for Asian markets. This research applies Support-Vector Machines (SVMs) and Back Propagation (BP) neural networks for six Asian stock markets and our experimental results showed the superiority of both models, compared to the early researches.", "title": "" }, { "docid": "neg:1840218_19", "text": "The ecosystem of open source software (OSS) has been growing considerably in size. In addition, code clones - code fragments that are copied and pasted within or between software systems - are also proliferating. Although code cloning may expedite the process of software development, it often critically affects the security of software because vulnerabilities and bugs can easily be propagated through code clones. These vulnerable code clones are increasing in conjunction with the growth of OSS, potentially contaminating many systems. Although researchers have attempted to detect code clones for decades, most of these attempts fail to scale to the size of the ever-growing OSS code base. The lack of scalability prevents software developers from readily managing code clones and associated vulnerabilities. Moreover, most existing clone detection techniques focus overly on merely detecting clones and this impairs their ability to accurately find \"vulnerable\" clones. In this paper, we propose VUDDY, an approach for the scalable detection of vulnerable code clones, which is capable of detecting security vulnerabilities in large software programs efficiently and accurately. Its extreme scalability is achieved by leveraging function-level granularity and a length-filtering technique that reduces the number of signature comparisons. This efficient design enables VUDDY to preprocess a billion lines of code in 14 hour and 17 minutes, after which it requires a few seconds to identify code clones. In addition, we designed a security-aware abstraction technique that renders VUDDY resilient to common modifications in cloned code, while preserving the vulnerable conditions even after the abstraction is applied. This extends the scope of VUDDY to identifying variants of known vulnerabilities, with high accuracy. In this study, we describe its principles and evaluate its efficacy and effectiveness by comparing it with existing mechanisms and presenting the vulnerabilities it detected. VUDDY outperformed four state-of-the-art code clone detection techniques in terms of both scalability and accuracy, and proved its effectiveness by detecting zero-day vulnerabilities in widely used software systems, such as Apache HTTPD and Ubuntu OS Distribution.", "title": "" } ]
1840219
A survey of Context-aware Recommender System and services
[ { "docid": "pos:1840219_0", "text": "Contextual factors can greatly influence the users' preferences in listening to music. Although it is hard to capture these factors directly, it is possible to see their effects on the sequence of songs liked by the user in his/her current interaction with the system. In this paper, we present a context-aware music recommender system which infers contextual information based on the most recent sequence of songs liked by the user. Our approach mines the top frequent tags for songs from social tagging Web sites and uses topic modeling to determine a set of latent topics for each song, representing different contexts. Using a database of human-compiled playlists, each playlist is mapped into a sequence of topics and frequent sequential patterns are discovered among these topics. These patterns represent frequent sequences of transitions between the latent topics representing contexts. Given a sequence of songs in a user's current interaction, the discovered patterns are used to predict the next topic in the playlist. The predicted topics are then used to post-filter the initial ranking produced by a traditional recommendation algorithm. Our experimental evaluation suggests that our system can help produce better recommendations in comparison to a conventional recommender system based on collaborative or content-based filtering. Furthermore, the topic modeling approach proposed here is also useful in providing better insight into the underlying reasons for song selection and in applications such as playlist construction and context prediction.", "title": "" }, { "docid": "pos:1840219_1", "text": "Increasingly manufacturers of smartphone devices are utilising a diverse range of sensors. This innovation has enabled developers to accurately determine a user's current context. In recent years there has also been a renewed requirement to use more types of context and reduce the current over-reliance on location as a context. Location based systems have enjoyed great success and this context is very important for mobile devices. However, using additional context data such as weather, time, social media sentiment and user preferences can provide a more accurate model of the user's current context. One area that has been significantly improved by the increased use of context in mobile applications is tourism. Traditionally tour guide applications rely heavily on location and essentially ignore other types of context. This has led to problems of inappropriate suggestions, due to inadequate content filtering and tourists experiencing information overload. These problems can be mitigated if appropriate personalisation and content filtering is performed. The intelligent decision making that this paper proposes with regard to the development of the VISIT [17] system, is a hybrid based recommendation approach made up of collaborative filtering, content based recommendation and demographic profiling. Intelligent reasoning will then be performed as part of this hybrid system to determine the weight/importance of each different context type.", "title": "" } ]
[ { "docid": "neg:1840219_0", "text": "Introduction Blepharophimosis, ptosis and epicanthus inversus (BPES) is an autosomal dominant condition caused by mutations in the FOXL2 gene, located on chromosome 3q23 (Beysen et al., 2005). It has frequently been reported in association with 3q chromosomal deletions and rearrangements (Franceschini et al., 1983; Al-Awadi et al., 1986; Alvarado et al., 1987; Jewett et al., 1993; Costa et al., 1998). In a review of the phenotype associated with interstitial deletions of the long arm of chromosome 3, Alvarado et al. (1987) commented on brachydactyly and abnormalities of the hands and feet. A syndrome characterized by growth and developmental delay, microcephaly, ptosis, blepharophimosis, large abnormally shaped ears, and a prominent nasal tip was suggested to be associated with deletion of the 3q2 region.", "title": "" }, { "docid": "neg:1840219_1", "text": "Very recently, some studies on neural dependency parsers have shown advantage over the traditional ones on a wide variety of languages. However, for graphbased neural dependency parsing systems, they either count on the long-term memory and attention mechanism to implicitly capture the high-order features or give up the global exhaustive inference algorithms in order to harness the features over a rich history of parsing decisions. The former might miss out the important features for specific headword predictions without the help of the explicit structural information, and the latter may suffer from the error propagation as false early structural constraints are used to create features when making future predictions. We explore the feasibility of explicitly taking high-order features into account while remaining the main advantage of global inference and learning for graph-based parsing. The proposed parser first forms an initial parse tree by head-modifier predictions based on the first-order factorization. High-order features (such as grandparent, sibling, and uncle) then can be defined over the initial tree, and used to refine the parse tree in an iterative fashion. Experimental results showed that our model (called INDP) archived competitive performance to existing benchmark parsers on both English and Chinese datasets.", "title": "" }, { "docid": "neg:1840219_2", "text": "A fundamental question in human memory is how the brain represents sensory-specific information during the process of retrieval. One hypothesis is that regions of sensory cortex are reactivated during retrieval of sensory-specific information (1). Here we report findings from a study in which subjects learned a set of picture and sound items and were then given a recall test during which they vividly remembered the items while imaged by using event-related functional MRI. Regions of visual and auditory cortex were activated differentially during retrieval of pictures and sounds, respectively. Furthermore, the regions activated during the recall test comprised a subset of those activated during a separate perception task in which subjects actually viewed pictures and heard sounds. Regions activated during the recall test were found to be represented more in late than in early visual and auditory cortex. 
Therefore, results indicate that retrieval of vivid visual and auditory information can be associated with a reactivation of some of the same sensory regions that were activated during perception of those items.", "title": "" }, { "docid": "neg:1840219_3", "text": "We present Residual Policy Learning (RPL): a simple method for improving nondifferentiable policies using model-free deep reinforcement learning. RPL thrives in complex robotic manipulation tasks where good but imperfect controllers are available. In these tasks, reinforcement learning from scratch remains data-inefficient or intractable, but learning a residual on top of the initial controller can yield substantial improvement. We study RPL in five challenging MuJoCo tasks involving partial observability, sensor noise, model misspecification, and controller miscalibration. By combining learning with control algorithms, RPL can perform long-horizon, sparse-reward tasks for which reinforcement learning alone fails. Moreover, we find that RPL consistently and substantially improves on the initial controllers. We argue that RPL is a promising approach for combining the complementary strengths of deep reinforcement learning and robotic control, pushing the boundaries of what either can achieve independently.", "title": "" }, { "docid": "neg:1840219_4", "text": "In this paper, we forecast the reading of an air quality monitoring station over the next 48 hours, using a data-driven method that considers current meteorological data, weather forecasts, and air quality data of the station and that of other stations within a few hundred kilometers. Our predictive model is comprised of four major components: 1) a linear regression-based temporal predictor to model the local factors of air quality, 2) a neural network-based spatial predictor to model global factors, 3) a dynamic aggregator combining the predictions of the spatial and temporal predictors according to meteorological data, and 4) an inflection predictor to capture sudden changes in air quality. We evaluate our model with data from 43 cities in China, surpassing the results of multiple baseline methods. We have deployed a system with the Chinese Ministry of Environmental Protection, providing 48-hour fine-grained air quality forecasts for four major Chinese cities every hour. The forecast function is also enabled on Microsoft Bing Map and MS cloud platform Azure. Our technology is general and can be applied globally for other cities.", "title": "" }, { "docid": "neg:1840219_5", "text": "Text mining has been defined as “the discovery by computer of new, previously unknown information, by automatically extracting information from different written resources” [6]. Many other industries and areas can also benefit from the text mining tools that are being developed by a number of companies. This paper provides an overview of the text mining tools and technologies that are being developed and is intended to be a guide for organizations who are looking for the most appropriate text mining techniques for their situation. This paper also concentrates to design text and data mining tool to extract the valuable information from curriculum vitae according to concerned requirements. The tool clusters the curriculum vitae into several segments which will help the public and private concerns for their recruitment. Rule based approach is used to develop the algorithm for mining and also it is implemented to extract the valuable information from the curriculum vitae on the web. 
Analysis of Curriculum vitae is until now, a costly and manual activity. It is subject to all typical variations and limitations in its quality, depending of who is doing it. Automating this analysis using algorithms might deliver much more consistency and preciseness to support the human experts. The experiments involve cooperation with many people having their CV online, as well as several recruiters etc. The algorithms must be developed and improved for processing of existing sets of semi-structured documents information retrieval under uncertainity about quality of the sources.", "title": "" }, { "docid": "neg:1840219_6", "text": "Research in learning analytics and educational data mining has recently become prominent in the fields of computer science and education. Most scholars in the field emphasize student learning and student data analytics; however, it is also important to focus on teaching analytics and teacher preparation because of their key roles in student learning, especially in K-12 learning environments. Nonverbal communication strategies play an important role in successful interpersonal communication of teachers with their students. In order to assist novice or practicing teachers with exhibiting open and affirmative nonverbal cues in their classrooms, we have designed a multimodal teaching platform with provisions for online feedback. We used an interactive teaching rehearsal software, TeachLivE, as our basic research environment. TeachLivE employs a digital puppetry paradigm as its core technology. Individuals walk into this virtual environment and interact with virtual students displayed on a large screen. They can practice classroom management, pedagogy and content delivery skills with a teaching plan in the TeachLivE environment. We have designed an experiment to evaluate the impact of an online nonverbal feedback application. In this experiment, different types of multimodal data have been collected during two experimental settings. These data include talk-time and nonverbal behaviors of the virtual students, captured in log files; talk time and full body tracking data of the participant; and video recording of the virtual classroom with the participant. 34 student teachers participated in this 30-minute experiment. In each of the settings, the participants were provided with teaching plans from which they taught. All the participants took part in both of the experimental settings. In order to have a balanced experiment design, half of the participants received nonverbal online feedback in their first session and the other half received this feedback in the second session. A visual indication was used for feedback each time the participant exhibited a closed, defensive posture. Based on recorded full-body tracking data, we observed that only those who received feedback in their first session demonstrated a significant number of open postures in the session containing no feedback. However, the post-questionnaire information indicated that all participants were more mindful of their body postures while teaching after they had participated in the study.", "title": "" }, { "docid": "neg:1840219_7", "text": "A novel proximity-coupled probe-fed stacked patch antenna is proposed for Global Navigation Satellite Systems (GNSS) applications. The antenna has been designed to operate for the satellite navigation frequencies in L-band including GPS, GLONASS, Galileo, and Compass (1164-1239 MHz and 1559-1610 MHz). 
A key feature of our design is the proximity-coupled probe feeds to increase impedance bandwidth and the integrated 90deg broadband balun to improve polarization purity. The final antenna exhibits broad pattern coverage, high gain at the low angles (more than -5 dBi), and VSWR <1.5 for all the operating bands. The design procedures and employed tuning techniques to achieve the desired performance are presented.", "title": "" }, { "docid": "neg:1840219_8", "text": "Biopharmaceutical companies attempting to increase productivity through novel discovery technologies have fallen short of achieving the desired results. Repositioning existing drugs for new indications could deliver the productivity increases that the industry needs while shifting the locus of production to biotechnology companies. More and more companies are scanning the existing pharmacopoeia for repositioning candidates, and the number of repositioning success stories is increasing.", "title": "" }, { "docid": "neg:1840219_9", "text": "For applications in navigation and robotics, estimating the 3D pose of objects is as important as detection. Many approaches to pose estimation rely on detecting or tracking parts or keypoints [11, 21]. In this paper we build on a recent state-of-the-art convolutional network for sliding-window detection [10] to provide detection and rough pose estimation in a single shot, without intermediate stages of detecting parts or initial bounding boxes. While not the first system to treat pose estimation as a categorization problem, this is the first attempt to combine detection and pose estimation at the same level using a deep learning approach. The key to the architecture is a deep convolutional network where scores for the presence of an object category, the offset for its location, and the approximate pose are all estimated on a regular grid of locations in the image. The resulting system is as accurate as recent work on pose estimation (42.4% 8 View mAVP on Pascal 3D+ [21] ) and significantly faster (46 frames per second (FPS) on a TITAN X GPU). This approach to detection and rough pose estimation is fast and accurate enough to be widely applied as a pre-processing step for tasks including high-accuracy pose estimation, object tracking and localization, and vSLAM.", "title": "" }, { "docid": "neg:1840219_10", "text": "Compression enables us to shift resource bottlenecks between IO and CPU. In modern datacenters, where energy efficiency is a growing concern, the benefits of using compression have not been completely exploited. As MapReduce represents a common computation framework for Internet datacenters, we develop a decision algorithm that helps MapReduce users identify when and where to use compression. For some jobs, using compression gives energy savings of up to 60%. We believe our findings will provide signficant impact on improving datacenter energy efficiency.", "title": "" }, { "docid": "neg:1840219_11", "text": "The paper proposes a high torque density design for variable flux machines with Alnico magnets. The proposed design uses tangentially magnetized magnets in order to achieve high air gap flux density and to avoid demagnetization by the armature field. Barriers are also inserted in the rotor to limit the armature flux and to allow the machine to utilize both reluctance and magnet torque components. An analytical procedure is first applied to obtain the initial machine design parameters. 
Then several modifications are applied to the stator and rotor designs through finite element simulations (FEA) in order to improve the machine efficiency and torque density.", "title": "" }, { "docid": "neg:1840219_12", "text": "We survey the field of quantum information theory. In particular, we discuss the fundamentals of the field, source coding, quantum error-correcting codes, capacities of quantum channels, measures of entanglement, and quantum cryptography.", "title": "" }, { "docid": "neg:1840219_13", "text": "INTRODUCTION. When deciding about the cause underlying serially presented events, patients with delusions utilise fewer events than controls, showing a \"Jumping-to-Conclusions\" bias. This has been widely hypothesised to be because patients expect to incur higher costs if they sample more information. This hypothesis is, however, unconfirmed. METHODS. The hypothesis was tested by analysing patient and control data using two models. The models provided explicit, quantitative variables characterising decision making. One model was based on calculating the potential costs of making a decision; the other compared a measure of certainty to a fixed threshold. RESULTS. Differences between paranoid participants and controls were found, but not in the way that was previously hypothesised. A greater \"noise\" in decision making (relative to the effective motivation to get the task right), rather than greater perceived costs, best accounted for group differences. Paranoid participants also deviated from ideal Bayesian reasoning more than healthy controls. CONCLUSIONS. The Jumping-to-Conclusions Bias is unlikely to be due to an overestimation of the cost of gathering more information. The analytic approach we used, involving a Bayesian model to estimate the parameters characterising different participant populations, is well suited to testing hypotheses regarding \"hidden\" variables underpinning observed behaviours.", "title": "" }, { "docid": "neg:1840219_14", "text": "We present DIALDroid, a scalable and accurate tool for analyzing inter-app Inter-Component Communication (ICC) among Android apps, which outperforms current stateof-the-art ICC analysis tools. Using DIALDroid, we performed the first large-scale detection of collusive and vulnerable apps based on inter-app ICC data flows among 110,150 real-world apps and identified key security insights.", "title": "" }, { "docid": "neg:1840219_15", "text": "We present a telerobotics research platform that provides complete access to all levels of control via open-source electronics and software. The electronics employs an FPGA to enable a centralized computation and distributed I/O architecture in which all control computations are implemented in a familiar development environment (Linux PC) and low-latency I/O is performed over an IEEE-1394a (FireWire) bus at speeds up to 400 Mbits/sec. The mechanical components are obtained from retired first-generation da Vinci ® Surgical Systems. This system is currently installed at 11 research institutions, with additional installations underway, thereby creating a research community around a common open-source hardware and software platform.", "title": "" }, { "docid": "neg:1840219_16", "text": "Time-of-flight (TOF) measurement capability promises to improve PET image quality. We characterized the physical and clinical PET performance of the first Biograph mCT TOF PET/CT scanner (Siemens Medical Solutions USA, Inc.) in comparison with its predecessor, the Biograph TruePoint TrueV. 
In particular, we defined the improvements with TOF. The physical performance was evaluated according to the National Electrical Manufacturers Association (NEMA) NU 2-2007 standard with additional measurements to specifically address the TOF capability. Patient data were analyzed to obtain the clinical performance of the scanner. As expected for the same size crystal detectors, a similar spatial resolution was measured on the mCT as on the TruePoint TrueV. The mCT demonstrated modestly higher sensitivity (increase by 19.7 ± 2.8%) and peak noise equivalent count rate (NECR) (increase by 15.5 ± 5.7%) with similar scatter fractions. The energy, time and spatial resolutions for a varying single count rate of up to 55 Mcps resulted in 11.5 ± 0.2% (FWHM), 527.5 ± 4.9 ps (FWHM) and 4.1 ± 0.0 mm (FWHM), respectively. With the addition of TOF, the mCT also produced substantially higher image contrast recovery and signal-to-noise ratios in a clinically-relevant phantom geometry. The benefits of TOF were clearly demonstrated in representative patient images.", "title": "" }, { "docid": "neg:1840219_17", "text": "BACKGROUND\nReducing delays for patients who are safe to be discharged is important for minimising complications, managing costs and improving quality. Barriers to discharge include placement, multispecialty coordination of care and ineffective communication. There are a few recent studies that describe barriers from the perspective of all members of the multidisciplinary team.\n\n\nSTUDY OBJECTIVE\nTo identify the barriers to discharge for patients from our medicine service who had a discharge delay of over 24 hours.\n\n\nMETHODOLOGY\nWe developed and implemented a biweekly survey that was reviewed with attending physicians on each of the five medicine services to identify patients with an unnecessary delay. Separately, we conducted interviews with staff members involved in the discharge process to identify common barriers they observed on the wards.\n\n\nRESULTS\nOver the study period from 28 October to 22 November 2013, out of 259 total discharges, 87 patients had a delay of over 24 hours (33.6%) and experienced a total of 181 barriers. The top barriers from the survey included patient readiness, prolonged wait times for procedures or results, consult recommendations and facility placement. A total of 20 interviews were conducted, from which the top barriers included communication both between staff members and with the patient, timely notification of discharge and lack of discharge standardisation.\n\n\nCONCLUSIONS\nThere are a number of frequent barriers to discharge encountered in our hospital that may be avoidable with planning, effective communication methods, more timely preparation and tools to standardise the discharge process.", "title": "" }, { "docid": "neg:1840219_18", "text": "A large number of papers are appearing in the biomedical engineering literature that describe the use of machine learning techniques to develop classifiers for detection or diagnosis of disease. However, the usefulness of this approach in developing clinically validated diagnostic techniques so far has been limited and the methods are prone to overfitting and other problems which may not be immediately apparent to the investigators. This commentary is intended to help sensitize investigators as well as readers and reviewers of papers to some potential pitfalls in the development of classifiers, and suggests steps that researchers can take to help avoid these problems. 
Building classifiers should be viewed not simply as an add-on statistical analysis, but as part and parcel of the experimental process. Validation of classifiers for diagnostic applications should be considered as part of a much larger process of establishing the clinical validity of the diagnostic technique.", "title": "" }, { "docid": "neg:1840219_19", "text": "Pevzner and Sze [23] considered a precise version of the motif discovery problem and simultaneously issued an algorithmic challenge: find a motif M of length 15, where each planted instance differs from M in 4 positions. Whereas previous algorithms all failed to solve this (15,4)-motif problem, Pevzner and Sze introduced algorithms that succeeded. However, their algorithms failed to solve the considerably more difficult (14,4)-, (16,5)-, and (18,6)-motif problems.\nWe introduce a novel motif discovery algorithm based on the use of random projections of the input's substrings. Experiments on simulated data demonstrate that this algorithm performs better than existing algorithms and, in particular, typically solves the difficult (14,4)-, (16,5)-, and (18,6)-motif problems quite efficiently. A probabilistic estimate shows that the small values of d for which the algorithm fails to recover the planted (l, d)-motif are in all likelihood inherently impossible to solve. We also present experimental results on realistic biological data by identifying ribosome binding sites in prokaryotes as well as a number of known transcriptional regulatory motifs in eukaryotes.", "title": "" } ]
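The motif-discovery passage above (neg:1840219_19) rests on one concrete trick: hash every length-l window of the input by the letters at k randomly chosen positions, so that occurrences of a planted motif tend to land in the same bucket even when they disagree elsewhere. A minimal Python sketch of just that bucketing step follows; the parameter values, the toy sequences, and the function name are illustrative assumptions, and a full solver would run many projection rounds and then refine the promising buckets.

```python
import random
from collections import defaultdict

def projection_buckets(sequences, l=15, k=7, seed=0):
    """One round of the random-projection idea for planted (l, d)-motif search.

    Each length-l window is hashed by the letters at k randomly chosen
    positions; windows that share a planted motif tend to collide in the
    same bucket even when they differ at the remaining positions. This
    sketch shows only the bucketing step of a single projection round.
    """
    rng = random.Random(seed)
    positions = sorted(rng.sample(range(l), k))
    buckets = defaultdict(list)
    for seq_id, seq in enumerate(sequences):
        for start in range(len(seq) - l + 1):
            window = seq[start:start + l]
            key = "".join(window[p] for p in positions)
            buckets[key].append((seq_id, start))
    return buckets

# Two toy sequences sharing the 11-mer TTGACCGTAAG at different offsets
seqs = ["ACGTACGTTTGACCGTAAGTCCA", "TTGACCGTAAGACGTACGTACGT"]
big = max(projection_buckets(seqs, l=11, k=7).values(), key=len)
print(big)  # largest bucket; the shared 11-mer guarantees a collision across sequences
```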
1840220
A review of stator fault monitoring techniques of induction motors
[ { "docid": "pos:1840220_0", "text": "The study gives a synopsis over condition monitoring methods both as a diagnostic tool and as a technique for failure identification in high voltage induction motors in industry. New running experience data for 483 motor units with 6135 unit years are registered and processed statistically, to reveal the connection between motor data, protection and condition monitoring methods, maintenance philosophy and different types of failures. The different types of failures are further analyzed to failure-initiators, -contributors and -underlying causes. The results have been compared with those of a previous survey, IEEE Report of Large Motor Reliability Survey of Industrial and Commercial Installations, 1985. In the present survey the motors are in the range of 100 to 1300 kW, 47% of them between 100 and 500 kW.", "title": "" } ]
[ { "docid": "neg:1840220_0", "text": "Today, computer vision systems are tested by their accuracy in detecting and localizing instances of objects. As an alternative, and motivated by the ability of humans to provide far richer descriptions and even tell a story about an image, we construct a \"visual Turing test\": an operator-assisted device that produces a stochastic sequence of binary questions from a given test image. The query engine proposes a question; the operator either provides the correct answer or rejects the question as ambiguous; the engine proposes the next question (\"just-in-time truthing\"). The test is then administered to the computer-vision system, one question at a time. After the system's answer is recorded, the system is provided the correct answer and the next question. Parsing is trivial and deterministic; the system being tested requires no natural language processing. The query engine employs statistical constraints, learned from a training set, to produce questions with essentially unpredictable answers-the answer to a question, given the history of questions and their correct answers, is nearly equally likely to be positive or negative. In this sense, the test is only about vision. The system is designed to produce streams of questions that follow natural story lines, from the instantiation of a unique object, through an exploration of its properties, and on to its relationships with other uniquely instantiated objects.", "title": "" }, { "docid": "neg:1840220_1", "text": "The Islamic State of Iraq and ash-Sham (ISIS) continues to use social media as an essential element of its campaign to motivate support. On Twitter, ISIS' unique ability to leverage unaffiliated sympathizers that simply retweet propaganda has been identified as a primary mechanism in their success in motivating both recruitment and \"lone wolf\" attacks. The present work explores a large community of Twitter users whose activity supports ISIS propaganda diffusion in varying degrees. Within this ISIS supporting community, we observe a diverse range of actor types, including fighters, propagandists, recruiters, religious scholars, and unaffiliated sympathizers. The interaction between these users offers unique insight into the people and narratives critical to ISIS' sustainment. In their entirety, we refer to this diverse set of users as an online extremist community or OEC. We present Iterative Vertex Clustering and Classification (IVCC), a scalable analytic approach for OEC detection in annotated heterogeneous networks, and provide an illustrative case study of an online community of over 22,000 Twitter users whose online behavior directly advocates support for ISIS or contibutes to the group's propaganda dissemination through retweets.", "title": "" }, { "docid": "neg:1840220_2", "text": "This paper describes some of the possibilities of artificial neural networks that open up after solving the problem of catastrophic forgetting. A simple model and reinforcement learning applications of existing methods are also proposed", "title": "" }, { "docid": "neg:1840220_3", "text": "We present a new replay-based method of continual classification learning that we term \"conditional replay\" which generates samples and labels together by sampling from a distribution conditioned on the class. We compare conditional replay to another replay-based continual learning paradigm (which we term \"marginal replay\") that generates samples independently of their class and assigns labels in a separate step. 
The main improvement in conditional replay is that labels for generated samples need not be inferred, which reduces the margin for error in complex continual classification learning tasks. We demonstrate the effectiveness of this approach using novel and standard benchmarks constructed from MNIST and Fashion MNIST data, and compare to the regularization-based EWC method (Kirkpatrick et al., 2016; Shin et al., 2017).", "title": "" }, { "docid": "neg:1840220_4", "text": "This paper explores the phenomena of the emergence of the use of artificial intelligence in teaching and learning in higher education. It investigates educational implications of emerging technologies on the way students learn and how institutions teach and evolve. Recent technological advancements and the increasing speed of adopting new technologies in higher education are explored in order to predict the future nature of higher education in a world where artificial intelligence is part of the fabric of our universities. We pinpoint some challenges for institutions of higher education and student learning in the adoption of these technologies for teaching, learning, student support, and administration and explore further directions for research.", "title": "" }, { "docid": "neg:1840220_5", "text": "x Acknowledgements xii Chapter", "title": "" }, { "docid": "neg:1840220_6", "text": "In the present study, we tested the hypothesis that a carbohydrate-protein (CHO-Pro) supplement would be more effective in the replenishment of muscle glycogen after exercise compared with a carbohydrate supplement of equal carbohydrate content (LCHO) or caloric equivalency (HCHO). After 2.5 +/- 0.1 h of intense cycling to deplete the muscle glycogen stores, subjects (n = 7) received, using a rank-ordered design, a CHO-Pro (80 g CHO, 28 g Pro, 6 g fat), LCHO (80 g CHO, 6 g fat), or HCHO (108 g CHO, 6 g fat) supplement immediately after exercise (10 min) and 2 h postexercise. Before exercise and during 4 h of recovery, muscle glycogen of the vastus lateralis was determined periodically by nuclear magnetic resonance spectroscopy. Exercise significantly reduced the muscle glycogen stores (final concentrations: 40.9 +/- 5.9 mmol/l CHO-Pro, 41.9 +/- 5.7 mmol/l HCHO, 40.7 +/- 5.0 mmol/l LCHO). After 240 min of recovery, muscle glycogen was significantly greater for the CHO-Pro treatment (88.8 +/- 4.4 mmol/l) when compared with the LCHO (70.0 +/- 4.0 mmol/l; P = 0.004) and HCHO (75.5 +/- 2.8 mmol/l; P = 0.013) treatments. Glycogen storage did not differ significantly between the LCHO and HCHO treatments. There were no significant differences in the plasma insulin responses among treatments, although plasma glucose was significantly lower during the CHO-Pro treatment. These results suggest that a CHO-Pro supplement is more effective for the rapid replenishment of muscle glycogen after exercise than a CHO supplement of equal CHO or caloric content.", "title": "" }, { "docid": "neg:1840220_7", "text": "In recent years, the IoT (Internet of Things) has attracted widespread attention as a new technology architecture that integrates many information technologies. However, IoT is a complex system. If there is no intensive study from the system point of view, it will be very hard to understand the key technologies of IoT. In order to better study IoT and its technologies, this paper first proposes an extended six-layer architecture of IoT based on a network hierarchical structure. 
Then it discusses the key technologies involved in every layer of IoT, such as RFID, WSN, the Internet, SOA, cloud computing, Web Services, etc. On this basis, an automatic recognition system is designed by integrating RFID and WSN, and a strategy is also put forward for integrating RFID and Web Services. These results demonstrate the feasibility of technology integration in IoT.", "title": "" }, { "docid": "neg:1840220_8", "text": "“Microbiology Topics” discusses various topics in microbiology of practical use in validation and compliance. We intend this column to be a useful resource for daily work applications. Reader comments, questions, and suggestions are needed to help us fulfill our objective for this column. Please send your comments and suggestions to column coordinator Scott Sutton at scott.sutton@microbiol.org or journal managing editor Susan Haigney at shaigney@advanstar.com.", "title": "" }, { "docid": "neg:1840220_9", "text": "A fault-tolerant (FT) control approach for four-wheel independently-driven (4WID) electric vehicles is presented. An adaptive control based passive fault-tolerant controller is designed to ensure the system stability when an in-wheel motor/motor driver fault happens. As an over-actuated system, it is challenging to isolate the faulty wheel and accurately estimate the control gain of the faulty in-wheel motor for 4WID electric vehicles. An active fault diagnosis approach is thus proposed to isolate and evaluate the fault. Based on the estimated control gain of the faulty in-wheel motor, the control efforts of all the four wheels are redistributed to relieve the torque demand on the faulty wheel. Simulations using a high-fidelity, CarSim, full-vehicle model show the effectiveness of the proposed in-wheel motor/motor driver fault diagnosis and fault-tolerant control approach.", "title": "" }, { "docid": "neg:1840220_10", "text": "Metastasis leads to poor prognosis in colorectal cancer patients, and there is a growing need for new therapeutic targets. TMEM16A (ANO1, DOG1 or TAOS2) has recently been identified as a calcium-activated chloride channel (CaCC) and is reported to be overexpressed in several malignancies; however, its expression and function in colorectal cancer (CRC) remains unclear. In this study, we found expression of TMEM16A mRNA and protein in high-metastatic-potential SW620, HCT116 and LS174T cells, but not in primary HCT8 and SW480 cells, using RT-PCR, western blotting and immunofluorescence labeling. Patch-clamp recordings detected CaCC currents regulated by intracellular Ca(2+) and voltage in SW620 cells. Knockdown of TMEM16A by short hairpin RNAs (shRNA) resulted in the suppression of growth, migration and invasion of SW620 cells as detected by MTT, wound-healing and transwell assays. Mechanistically, TMEM16A depletion was accompanied by the dysregulation of phospho-MEK, phospho-ERK1/2 and cyclin D1 expression. Flow cytometry analysis showed that SW620 cells were inhibited from the G1 to S phase of the cell cycle in the TMEM16A shRNA group compared with the control group. 
In conclusion, our results indicate that TMEM16A CaCC is involved in growth, migration and invasion of metastatic CRC cells and provide evidence for TMEM16A as a potential drug target for treating metastatic colorectal carcinoma.", "title": "" }, { "docid": "neg:1840220_11", "text": "This paper provides a comprehensive survey of the technical achievements in the research area of Image Retrieval , especially Content-Based Image Retrieval, an area so active and prosperous in the past few years. The survey includes 100+ papers covering the research aspects of image feature representation and extraction, multi-dimensional indexing, and system design, three of the fundamental bases of Content-Based Image Retrieval. Furthermore, based on the state-of-the-art technology available now and the demand from real-world applications, open research issues are identiied, and future promising research directions are suggested.", "title": "" }, { "docid": "neg:1840220_12", "text": "In this paper, we propose a new descriptor for texture classification that is robust to image blurring. The descriptor utilizes phase information computed locally in a window for every image position. The phases of the four low-frequency coefficients are decorrelated and uniformly quantized in an eight-dimensional space. A histogram of the resulting code words is created and used as a feature in texture classification. Ideally, the low-frequency phase components are shown to be invariant to centrally symmetric blur. Although this ideal invariance is not completely achieved due to the finite window size, the method is still highly insensitive to blur. Because only phase information is used, the method is also invariant to uniform illumination changes. According to our experiments, the classification accuracy of blurred texture images is much higher with the new method than with the well-known LBP or Gabor filter bank methods. Interestingly, it is also slightly better for textures that are not blurred.", "title": "" }, { "docid": "neg:1840220_13", "text": "The emerging ability to comply with caregivers' dictates and to monitor one's own behavior accordingly signifies a major growth of early childhood. However, scant attention has been paid to the developmental course of self-initiated regulation of behavior. This article summarizes the literature devoted to early forms of control and highlights the different philosophical orientations in the literature. Then, focusing on the period from early infancy to the beginning of the preschool years, the author proposes an ontogenetic perspective tracing the kinds of modulation or control the child is capable of along the way. The developmental sequence of monitoring behaviors that is proposed calls attention to contributions made by the growth of cognitive skills. The role of mediators (e.g., caregivers) is also discussed.", "title": "" }, { "docid": "neg:1840220_14", "text": "We present a single-chip fully compliant Bluetooth radio fabricated in a digital 130-nm CMOS process. The transceiver is architectured from the ground up to be compatible with digital deep-submicron CMOS processes and be readily integrated with a digital baseband and application processor. The conventional RF frequency synthesizer architecture, based on the voltage-controlled oscillator and the phase/frequency detector and charge-pump combination, has been replaced with a digitally controlled oscillator and a time-to-digital converter, respectively. 
The transmitter architecture takes advantage of the wideband frequency modulation capability of the all-digital phase-locked loop with built-in automatic compensation to ensure modulation accuracy. The receiver employs a discrete-time architecture in which the RF signal is directly sampled and processed using analog and digital signal processing techniques. The complete chip also integrates power management functions and a digital baseband processor. Application of the presented ideas has resulted in significant area and power savings while producing structures that are amenable to migration to more advanced deep-submicron processes, as they become available. The entire IC occupies 10 mm² and consumes 28 mA during transmit and 41 mA during receive at 1.5-V supply.", "title": "" }, { "docid": "neg:1840220_15", "text": "The functions of rewards are based primarily on their effects on behavior and are less directly governed by the physics and chemistry of input events as in sensory systems. Therefore, the investigation of neural mechanisms underlying reward functions requires behavioral theories that can conceptualize the different effects of rewards on behavior. The scientific investigation of behavioral processes by animal learning theory and economic utility theory has produced a theoretical framework that can help to elucidate the neural correlates for reward functions in learning, goal-directed approach behavior, and decision making under uncertainty. Individual neurons can be studied in the reward systems of the brain, including dopamine neurons, orbitofrontal cortex, and striatum. The neural activity can be related to basic theoretical terms of reward and uncertainty, such as contiguity, contingency, prediction error, magnitude, probability, expected value, and variance.", "title": "" }, { "docid": "neg:1840220_16", "text": "Previous approaches for scene text detection have already achieved promising performances across various benchmarks. However, they usually fall short when dealing with challenging scenarios, even when equipped with deep neural network models, because the overall performance is determined by the interplay of multiple stages and components in the pipelines. In this work, we propose a simple yet powerful pipeline that yields fast and accurate text detection in natural scenes. The pipeline directly predicts words or text lines of arbitrary orientations and quadrilateral shapes in full images, eliminating unnecessary intermediate steps (e.g., candidate aggregation and word partitioning), with a single neural network. The simplicity of our pipeline allows concentrating efforts on designing loss functions and neural network architecture. Experiments on standard datasets including ICDAR 2015, COCO-Text and MSRA-TD500 demonstrate that the proposed algorithm significantly outperforms state-of-the-art methods in terms of both accuracy and efficiency. On the ICDAR 2015 dataset, the proposed algorithm achieves an F-score of 0.7820 at 13.2fps at 720p resolution.", "title": "" }, { "docid": "neg:1840220_17", "text": "Global clock distribution for multi-GHz microprocessors has become increasingly difficult and time-consuming to design. As the frequency of the global clock continues to increase, the timing uncertainty introduced by the clock network − the skew and jitter − must reduce proportional to the clock period. 
However, the clock skew and jitter for conventional, buffered H-trees are proportional to latency, which has increased for recent generations of microprocessors. A global clock network that uses standing waves and coupled oscillators has the potential to significantly reduce both skew and jitter. Standing waves have the unique property that phase does not depend on position, meaning that there is ideally no skew. They have previously been used for board-level clock distribution, on coaxial cables, and on superconductive wires but have never been implemented on-chip due to the large losses of on-chip interconnects. Networks of coupled oscillators have a phase-averaging effect that reduces both skew and jitter. However, none of the previous implementations of coupled-oscillator clock networks use standing waves and some require considerable circuitry to couple the oscillators. In this thesis, a global clock network that incorporates standing waves and coupled oscillators to distribute a high-frequency clock signal with low skew and low jitter is", "title": "" }, { "docid": "neg:1840220_18", "text": "Prediction without justification has limited utility. Much of the success of neural models can be attributed to their ability to learn rich, dense and expressive representations. While these representations capture the underlying complexity and latent trends in the data, they are far from being interpretable. We propose a novel variant of denoising k-sparse autoencoders that generates highly efficient and interpretable distributed word representations (word embeddings), beginning with existing word representations from state-of-the-art methods like GloVe and word2vec. Through large scale human evaluation, we report that our resulting word embedddings are much more interpretable than the original GloVe and word2vec embeddings. Moreover, our embeddings outperform existing popular word embeddings on a diverse suite of benchmark downstream tasks.", "title": "" }, { "docid": "neg:1840220_19", "text": "Community detection is an important task in network analysis. A community (also referred to as a cluster) is a set of cohesive vertices that have more connections inside the set than outside. In many social and information networks, these communities naturally overlap. For instance, in a social network, each vertex in a graph corresponds to an individual who usually participates in multiple communities. In this paper, we propose an efficient overlapping community detection algorithm using a seed expansion approach. The key idea of our algorithm is to find good seeds, and then greedily expand these seeds based on a community metric. Within this seed expansion method, we investigate the problem of how to determine good seed nodes in a graph. In particular, we develop new seeding strategies for a personalized PageRank clustering scheme that optimizes the conductance community score. An important step in our method is the neighborhood inflation step where seeds are modified to represent their entire vertex neighborhood. Experimental results show that our seed expansion algorithm outperforms other state-of-the-art overlapping community detection methods in terms of producing cohesive clusters and identifying ground-truth communities. We also show that our new seeding strategies are better than existing strategies, and are thus effective in finding good overlapping communities in real-world networks.", "title": "" } ]
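Of the passages above, the blur-insensitive texture descriptor (neg:1840220_12) is the one whose mechanics a few lines of code make concrete: local Fourier coefficients at four low frequencies, sign-quantized into an 8-bit code per pixel, then summarized as a 256-bin histogram. The Python sketch below is a simplified rendering of that idea, assuming a particular window size and frequency layout; it omits the decorrelation step the passage mentions and is not the published implementation.

```python
import numpy as np
from scipy.signal import convolve2d

def _complex_filter(img, kern):
    # Complex 2-D filtering done as two real convolutions
    return (convolve2d(img, kern.real, mode="same")
            + 1j * convolve2d(img, kern.imag, mode="same"))

def lpq_histogram(image, win=7):
    """Simplified local-phase-quantization style descriptor (no decorrelation)."""
    alpha = 1.0 / win                       # lowest non-zero frequency
    x = np.arange(win) - (win - 1) / 2.0
    w0 = np.ones_like(x) + 0j               # zero frequency
    w1 = np.exp(-2j * np.pi * alpha * x)    # frequency alpha
    # Four low-frequency points: (alpha, 0), (0, alpha), (alpha, alpha), (alpha, -alpha)
    kernels = [np.outer(w0, w1), np.outer(w1, w0),
               np.outer(w1, w1), np.outer(w1, np.conj(w1))]
    img = np.asarray(image, dtype=float)
    code = np.zeros(img.shape, dtype=int)
    for i, kern in enumerate(kernels):
        resp = _complex_filter(img, kern)
        code |= (resp.real > 0).astype(int) << (2 * i)       # sign of real part
        code |= (resp.imag > 0).astype(int) << (2 * i + 1)   # sign of imaginary part
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()

h = lpq_histogram(np.random.default_rng(0).random((64, 64)))
print(h.shape, round(h.sum(), 6))   # (256,) 1.0
```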
1840221
Supporting Drivers in Keeping Safe Speed and Safe Distance: The SASPENCE Subproject Within the European Framework Programme 6 Integrating Project PReVENT
[ { "docid": "pos:1840221_0", "text": "To improve the safety and comfort of a human-machine system, the machine needs to ‘know,’ in a real time manner, the human operator in the system. The machine’s assistance to the human can be fine tuned if the machine is able to sense the human’s state and intent. Related to this point, this paper discusses issues of human trust in automation, automation surprises, responsibility and authority. Examples are given of a driver assistance system for advanced automobile.", "title": "" }, { "docid": "pos:1840221_1", "text": "Development of advanced driver assistance systems with vehicle hardware-in-the-loop simulations , \" (Received 00 Month 200x; In final form 00 Month 200x) This paper presents a new method for the design and validation of advanced driver assistance systems (ADASs). With vehicle hardware-in-the-loop (VEHIL) simulations the development process, and more specifically the validation phase, of intelligent vehicles is carried out safer, cheaper, and more manageable. In the VEHIL laboratory a full-scale ADAS-equipped vehicle is set up in a hardware-in-the-loop simulation environment, where a chassis dynamometer is used to emulate the road interaction and robot vehicles to represent other traffic. In this controlled environment the performance and dependability of an ADAS is tested to great accuracy and reliability. The working principle and the added value of VEHIL are demonstrated with test results of an adaptive cruise control and a forward collision warning system. Based on the 'V' diagram, the position of VEHIL in the development process of ADASs is illustrated.", "title": "" } ]
[ { "docid": "neg:1840221_0", "text": "Augmented reality technologies allow people to view and interact with virtual objects that appear alongside physical objects in the real world. For augmented reality applications to be effective, users must be able to accurately perceive the intended real world location of virtual objects. However, when creating augmented reality applications, developers are faced with a variety of design decisions that may affect user perceptions regarding the real world depth of virtual objects. In this paper, we conducted two experiments using a perceptual matching task to understand how shading, cast shadows, aerial perspective, texture, dimensionality (i.e., 2D vs. 3D shapes) and billboarding affected participant perceptions of virtual object depth relative to real world targets. The results of these studies quantify trade-offs across virtual object designs to inform the development of applications that take advantage of users' visual abilities to better blend the physical and virtual world.", "title": "" }, { "docid": "neg:1840221_1", "text": "As more and more data is provided in RDF format, storing huge amounts of RDF data and efficiently processing queries on such data is becoming increasingly important. The first part of the lecture will introduce state-of-the-art techniques for scalably storing and querying RDF with relational systems, including alternatives for storing RDF, efficient index structures, and query optimization techniques. As centralized RDF repositories have limitations in scalability and failure tolerance, decentralized architectures have been proposed. The second part of the lecture will highlight system architectures and strategies for distributed RDF processing. We cover search engines as well as federated query processing, highlight differences to classic federated database systems, and discuss efficient techniques for distributed query processing in general and for RDF data in particular. Moreover, for the last part of this chapter, we argue that extracting knowledge from the Web is an excellent showcase – and potentially one of the biggest challenges – for the scalable management of uncertain data we have seen so far. The third part of the lecture is thus intended to provide a close-up on current approaches and platforms to make reasoning (e.g., in the form of probabilistic inference) with uncertain RDF data scalable to billions of triples. 1 RDF in centralized relational databases The increasing availability and use of RDF-based information in the last decade has led to an increasing need for systems that can store RDF and, more importantly, efficiencly evaluate complex queries over large bodies of RDF data. The database community has developed a large number of systems to satisfy this need, partly reusing and adapting well-established techniques from relational databases [122]. The majority of these systems can be grouped into one of the following three classes: 1. Triple stores that store RDF triples in a single relational table, usually with additional indexes and statistics, 2. vertically partitioned tables that maintain one table for each property, and 3. Schema-specific solutions that store RDF in a number of property tables where several properties are jointly represented. 
In the following sections, we will describe each of these classes in detail, focusing on two important aspects of these systems: storage and indexing, i.e., how are RDF triples mapped to relational tables and which additional support structures are created; and query processing, i.e., how SPARQL queries are mapped to SQL, which additional operators are introduced, and how efficient execution plans for queries are determined. In addition to these purely relational solutions, a number of specialized RDF systems has been proposed that built on nonrelational technologies, we will briefly discuss some of these systems. Note that we will focus on SPARQL processing, which is not aware of underlying RDF/S or OWL schema and cannot exploit any information about subclasses; this is usually done in an additional layer on top. We will explain especially the different storage variants with the running example from Figure 1, some simple RDF facts from a university scenario. Here, each line corresponds to a fact (triple, statement), with a subject (usually a resource), a property (or predicate), and an object (which can be a resource or a constant). Even though resources are represented by URIs in RDF, we use string constants here for simplicity. A collection of RDF facts can also be represented as a graph. Here, resources (and constants) are nodes, and for each fact <s,p,o>, an edge from s to o is added with label p. Figure 2 shows the graph representation for the RDF example from Figure 1. <Katja,teaches,Databases> <Katja,works_for,MPI Informatics> <Katja,PhD_from,TU Ilmenau> <Martin,teaches,Databases> <Martin,works_for,MPI Informatics> <Martin,PhD_from,Saarland University> <Ralf,teaches,Information Retrieval> <Ralf,PhD_from,Saarland University> <Ralf,works_for,Saarland University> <Saarland University,located_in,Germany> <MPI Informatics,located_in,Germany> Fig. 1. Running example for RDF data", "title": "" }, { "docid": "neg:1840221_2", "text": "Existing neural machine translation systems do not explicitly model what has been translated and what has not during the decoding phase. To address this problem, we propose a novel mechanism that separates the source information into two parts: translated Past contents and untranslated Future contents, which are modeled by two additional recurrent layers. The Past and Future contents are fed to both the attention model and the decoder states, which provides Neural Machine Translation (NMT) systems with the knowledge of translated and untranslated contents. Experimental results show that the proposed approach significantly improves the performance in Chinese-English, German-English, and English-German translation tasks. Specifically, the proposed model outperforms the conventional coverage model in terms of both the translation quality and the alignment error rate.", "title": "" }, { "docid": "neg:1840221_3", "text": "It is relatively stress-free to write about computer games as nothing too much has been said yet, and almost anything goes. The situation is pretty much the same when it comes to writing about games and gaming in general. The sad fact with alarming cumulative consequences is that they are undertheorized; there are Huizinga, Caillois and Ehrmann of course, and libraries full of board game studies,in addition to game theory and bits and pieces of philosophy—most notably those of Wittgenstein— but they won’t get us very far with computer games. 
So if there already is or soon will be a legitimate field for computer game studies, this field is also very open to intrusions and colonisations from the already organized scholarly tribes. Resisting and beating them is the goal of our first survival game in this paper, as what these emerging studies need is independence, or at least relative independence.", "title": "" }, { "docid": "neg:1840221_4", "text": "State-of-the-art methods for relation classification are mostly based on statistical machine learning, and the performance heavily depends on the quality of the extracted features. The extracted features are often derived from the output of pre-existing NLP systems, which lead to the error propagation of existing tools and hinder the performance of the system. In this paper, we exploit a convolutional Deep Neural Network (DNN) to extract lexical and sentence level features. Our method takes all the word tokens as input without complicated pre-processing. First, all the word tokens are transformed to vectors by looking up word embeddings1. Then, lexical level features are extracted according to the given nouns. Meanwhile, sentence level features are learned using a convolutional approach. These two level features are concatenated as the final extracted feature vector. Finally, the features are feed into a softmax classifier to predict the relationship between two marked nouns. Experimental results show that our approach significantly outperforms the state-of-the-art methods.", "title": "" }, { "docid": "neg:1840221_5", "text": "Faster R-CNN achieves state-of-the-art performance on generic object detection. However, a simple application of this method to a large vehicle dataset performs unimpressively. In this paper, we take a closer look at this approach as it applies to vehicle detection. We conduct a wide range of experiments and provide a comprehensive analysis of the underlying structure of this model. We show that through suitable parameter tuning and algorithmic modification, we can significantly improve the performance of Faster R-CNN on vehicle detection and achieve competitive results on the KITTI vehicle dataset. We believe our studies are instructive for other researchers investigating the application of Faster R-CNN to their problems and datasets.", "title": "" }, { "docid": "neg:1840221_6", "text": "To assess whether the passive leg raising test can help in predicting fluid responsiveness. Nonsystematic review of the literature. Passive leg raising has been used as an endogenous fluid challenge and tested for predicting the hemodynamic response to fluid in patients with acute circulatory failure. This is now easy to perform at the bedside using methods that allow a real time measurement of systolic blood flow. A passive leg raising induced increase in descending aortic blood flow of at least 10% or in echocardiographic subaortic flow of at least 12% has been shown to predict fluid responsiveness. Importantly, this prediction remains very valuable in patients with cardiac arrhythmias or spontaneous breathing activity. Passive leg raising allows reliable prediction of fluid responsiveness even in patients with spontaneous breathing activity or arrhythmias. 
This test may come to be used increasingly at the bedside since it is easy to perform and effective, provided that its effects are assessed by a real-time measurement of cardiac output.", "title": "" }, { "docid": "neg:1840221_7", "text": "Rice is one of the most cultivated cereals in Asian countries and Vietnam in particular. Good seed germination is important for rice seed quality, which impacts rice production and crop yield. Currently, seed germination evaluation is carried out manually by experienced persons. This is a tedious and time-consuming task. In this paper, we present a system for automatic evaluation of rice seed germination rate based on advanced techniques in computer vision and machine learning. We propose to use U-Net - a convolutional neural network - for segmentation and separation of rice seeds. Further processing such as computing distance transform and thresholding will be applied on the segmented regions for rice seed detection. Finally, ResNet is utilized to classify segmented rice seed regions into two classes: germinated and non-germinated seeds. Our contributions in this paper are three-fold. Firstly, we propose a framework which confirms that convolutional neural networks are better than traditional methods for both segmentation and classification tasks (with F1-scores of 93.38% and 95.66% respectively). Secondly, we successfully deploy the automatic tool in a real application for estimating rice germination rate. Finally, we introduce a new dataset of 1276 images of rice seeds from 7 to 8 seed varieties germinated during 6 to 10 days. This dataset is publicly available for research purposes.", "title": "" }, { "docid": "neg:1840221_8", "text": "Integrated Design of Agile Missile Guidance and Autopilot Systems, by P. K. Menon and E. J. Ohlmeyer. Recent threat assessments by the US Navy have indicated the need for improving the accuracy of defensive missiles. This objective can only be achieved by enhancing the performance of the missile subsystems and by finding methods to exploit the synergism existing between subsystems. As a first step towards the development of integrated design methodologies, this paper develops a technique for integrated design of missile guidance and autopilot systems. The traditional approach for the design of guidance and autopilot systems has been to design these subsystems separately and then to integrate them together before verifying their performance. Such an approach does not exploit any beneficial relationships between these and other subsystems. The application of the feedback linearization technique for integrated guidance-autopilot system design is discussed. Numerical results using a six degree-of-freedom missile simulation are given. Integrated guidance-autopilot systems are expected to result in significant improvements in missile performance, leading to lower weight and enhanced lethality. Both of these factors will lead to a more effective, lower-cost weapon system. Integrated system design methods developed under the present research effort also have extensive applications in high performance aircraft autopilot and guidance systems.", "title": "" }, { "docid": "neg:1840221_9", "text": "A polynomial filtered Davidson-type algorithm is proposed for symmetric eigenproblems, in which the correction-equation of the Davidson approach is replaced by a polynomial filtering step. The new approach has better global convergence and robustness properties when compared with standard Davidson-type methods. 
The typical filter used in this paper is based on Chebyshev polynomials. The goal of the polynomial filter is to amplify components of the desired eigenvectors in the subspace, which has the effect of reducing both the number of steps required for convergence and the cost in orthogonalizations and restarts. Numerical results are presented to show the effectiveness of the proposed approach.", "title": "" }, { "docid": "neg:1840221_10", "text": "Accurate network traffic identification plays important roles in many areas such as traffic engineering, QoS and intrusion detection etc. The emergence of many new encrypted applications which use dynamic port numbers and masquerading techniques causes the most challenging problem in network traffic identification field. One of the challenging issues for existing traffic identification methods is that they can’t classify online encrypted traffic. To overcome the drawback of the previous identification scheme and to meet the requirements of the encrypted network activities, our work mainly focuses on how to build an online Internet traffic identification based on flow information. We propose real-time encrypted traffic identification based on flow statistical characteristics using machine learning in this paper. We evaluate the effectiveness of our proposed method through the experiments on different real traffic traces. By experiment results and analysis, this method can classify online encrypted network traffic with high accuracy and robustness.", "title": "" }, { "docid": "neg:1840221_11", "text": "Human brain imaging studies have shown that greater amygdala activation to emotional relative to neutral events leads to enhanced episodic memory. Other studies have shown that fearful faces also elicit greater amygdala activation relative to neutral faces. To the extent that amygdala recruitment is sufficient to enhance recollection, these separate lines of evidence predict that recognition memory should be greater for fearful relative to neutral faces. Experiment 1 demonstrated enhanced memory for emotionally negative relative to neutral scenes; however, fearful faces were not subject to enhanced recognition across a variety of delays (15 min to 2 wk). Experiment 2 demonstrated that enhanced delayed recognition for emotional scenes was associated with increased sympathetic autonomic arousal, indexed by the galvanic skin response, relative to fearful faces. These results suggest that while amygdala activation may be necessary, it alone is insufficient to enhance episodic memory formation. It is proposed that a sufficient level of systemic arousal is required to alter memory consolidation resulting in enhanced recollection of emotional events.", "title": "" }, { "docid": "neg:1840221_12", "text": "Bipedal robots are currently either slow, energetically inefficient and/or require a lot of control to maintain their stability. This paper introduces the FastRunner, a bipedal robot based on a new leg architecture. Simulation results of a Planar FastRunner demonstrate that legged robots can run fast, be energy efficient and inherently stable. 
The simulated FastRunner has a cost of transport of 1.4 and requires only a local feedback of the hip position to reach 35.4 kph from stop in simulation.", "title": "" }, { "docid": "neg:1840221_13", "text": "Many popular problems in robotics and computer vision including various types of simultaneous localization and mapping (SLAM) or bundle adjustment (BA) can be phrased as least squares optimization of an error function that can be represented by a graph. This paper describes the general structure of such problems and presents g2o, an open-source C++ framework for optimizing graph-based nonlinear error functions. Our system has been designed to be easily extensible to a wide range of problems and a new problem typically can be specified in a few lines of code. The current implementation provides solutions to several variants of SLAM and BA. We provide evaluations on a wide range of real-world and simulated datasets. The results demonstrate that while being general g2o offers a performance comparable to implementations of state-of-the-art approaches for the specific problems.", "title": "" }, { "docid": "neg:1840221_14", "text": "We formulate the problem of 3D human pose estimation and tracking as one of inference in a graphical model. Unlike traditional kinematic tree representations, our model of the body is a collection of loosely-connected body-parts. In particular, we model the body using an undirected graphical model in which nodes correspond to parts and edges to kinematic, penetration, and temporal constraints imposed by the joints and the world. These constraints are encoded using pair-wise statistical distributions, that are learned from motion-capture training data. Human pose and motion estimation is formulated as inference in this graphical model and is solved using Particle Message Passing (PaMPas). PaMPas is a form of non-parametric belief propagation that uses a variation of particle filtering that can be applied over a general graphical model with loops. The loose-limbed model and decentralized graph structure allow us to incorporate information from “bottom-up” visual cues, such as limb and head detectors, into the inference process. These detectors enable automatic initialization and aid recovery from transient tracking failures. We illustrate the method by automatically tracking people in multi-view imagery using a set of calibrated cameras and present quantitative evaluation using the HumanEva dataset.", "title": "" }, { "docid": "neg:1840221_15", "text": "Research in biomaterials and biomechanics has fueled a large part of the significant revolution associated with osseointegrated implants. Additional key areas that may become even more important--such as guided tissue regeneration, growth factors, and tissue engineering--could not be included in this review because of space limitations. All of this work will no doubt continue unabated; indeed, it is probably even accelerating as more clinical applications are found for implant technology and related therapies. An excellent overall summary of oral biology and dental implants recently appeared in a dedicated issue of Advances in Dental Research. Many advances have been made in the understanding of events at the interface between bone and implants and in developing methods for controlling these events. However, several important questions still remain. What is the relationship between tissue structure, matrix composition, and biomechanical properties of the interface? 
Do surface modifications alter the interfacial tissue structure and composition and the rate at which it forms? If surface modifications change the initial interface structure and composition, are these changes retained? Do surface modifications enhance biomechanical properties of the interface? As current understanding of the bone-implant interface progresses, so will development of proactive implants that can help promote desired outcomes. However, in the midst of the excitement born out of this activity, it is necessary to remember that the needs of the patient must remain paramount. It is also worth noting another as-yet unsatisfied need. With all of the new developments, continuing education of clinicians in the expert use of all of these research advances is needed. For example, in the area of biomechanical treatment planning, there are still no well-accepted biomaterials/biomechanics \"building codes\" that can be passed on to clinicians. Also, there are no readily available treatment-planning tools that clinicians can use to explore \"what-if\" scenarios and other design calculations of the sort done in modern engineering. No doubt such approaches could be developed based on materials already in the literature, but unfortunately much of what is done now by clinicians remains empirical. A worthwhile task for the future is to find ways to more effectively deliver products of research into the hands of clinicians.", "title": "" }, { "docid": "neg:1840221_16", "text": "The presence of third-party tracking on websites has become customary. However, our understanding of the thirdparty ecosystem is still very rudimentary. We examine thirdparty trackers from a geographical perspective, observing the third-party tracking ecosystem from 29 countries across the globe. When examining the data by region (North America, South America, Europe, East Asia, Middle East, and Oceania), we observe significant geographical variation between regions and countries within regions. We find trackers that focus on specific regions and countries, and some that are hosted in countries outside their expected target tracking domain. Given the differences in regulatory regimes between jurisdictions, we believe this analysis sheds light on the geographical properties of this ecosystem and on the problems that these may pose to our ability to track and manage the different data silos that now store personal data about us all.", "title": "" }, { "docid": "neg:1840221_17", "text": "Computer-generated texts, whether from Natural Language Generation (NLG) or Machine Translation (MT) systems, are often post-edited by humans before being released to users. The frequency and type of post-edits is a measure of how well the system works, and can be used for evaluation. We describe how we have used post-edit data to evaluate SUMTIME-MOUSAM, an NLG system that produces weather forecasts.", "title": "" }, { "docid": "neg:1840221_18", "text": "In this review, we collate information about ticks identified in different parts of the Sudan and South Sudan since 1956 in order to identify gaps in tick prevalence and create a map of tick distribution. This will avail basic data for further research on ticks and policies for the control of tick-borne diseases. In this review, we discuss the situation in the Republic of South Sudan as well as Sudan. 
For this purpose we have divided Sudan into four regions, namely northern Sudan (Northern and River Nile states), central Sudan (Khartoum, Gazera, White Nile, Blue Nile and Sennar states), western Sudan (North and South Kordofan and North, South and West Darfour states) and eastern Sudan (Red Sea, Kassala and Gadarif states).", "title": "" }, { "docid": "neg:1840221_19", "text": "We present an analysis of the population dynamics and demographics of Amazon Mechanical Turk workers based on the results of the survey that we conducted over a period of 28 months, with more than 85K responses from 40K unique participants. The demographics survey is ongoing (as of November 2017), and the results are available at http://demographics.mturk-tracker.com: we provide an API for researchers to download the survey data. We use techniques from the field of ecology, in particular, the capture-recapture technique, to understand the size and dynamics of the underlying population. We also demonstrate how to model and account for the inherent selection biases in such surveys. Our results indicate that there are more than 100K workers available in Amazon's crowdsourcing platform, the participation of the workers in the platform follows a heavy-tailed distribution, and at any given time there are more than 2K active workers. We also show that the half-life of a worker on the platform is around 12-18 months and that the rate of arrival of new workers balances the rate of departures, keeping the overall worker population relatively stable. Finally, we demonstrate how we can estimate the biases of different demographics to participate in the survey tasks, and show how to correct such biases. Our methodology is generic and can be applied to any platform where we are interested in understanding the dynamics and demographics of the underlying user population.", "title": "" } ]
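The Mechanical Turk passage above (neg:1840221_19) estimates the size of the worker population with the capture-recapture technique from ecology. The arithmetic behind that estimate is compact enough to show directly; the sketch below uses Chapman's variant of the Lincoln-Petersen estimator, and the sample counts in the example are made up rather than taken from the survey.

```python
def lincoln_petersen(n_first, n_second, n_both):
    """Chapman's variant of the Lincoln-Petersen capture-recapture estimator.

    n_first: distinct individuals seen in the first sampling period,
    n_second: distinct individuals seen in the second period,
    n_both: individuals seen in both periods. Returns an estimate of the
    total population size under the usual closed-population assumptions.
    """
    return (n_first + 1) * (n_second + 1) / (n_both + 1) - 1

# Made-up counts: 5000 workers seen in month 1, 5200 in month 2, 260 in both
print(round(lincoln_petersen(5000, 5200, 260)))   # 99655
```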
1840222
Are blockchains immune to all malicious attacks ?
[ { "docid": "pos:1840222_0", "text": "User-generated online reviews can play a significant role in the success of retail products, hotels, restaurants, etc. However, review systems are often targeted by opinion spammers who seek to distort the perceived quality of a product by creating fraudulent reviews. We propose a fast and effective framework, FRAUDEAGLE, for spotting fraudsters and fake reviews in online review datasets. Our method has several advantages: (1) it exploits the network effect among reviewers and products, unlike the vast majority of existing methods that focus on review text or behavioral analysis, (2) it consists of two complementary steps; scoring users and reviews for fraud detection, and grouping for visualization and sensemaking, (3) it operates in a completely unsupervised fashion requiring no labeled data, while still incorporating side information if available, and (4) it is scalable to large datasets as its run time grows linearly with network size. We demonstrate the effectiveness of our framework on synthetic and real datasets; where FRAUDEAGLE successfully reveals fraud-bots in a large online app review database. Introduction The Web has greatly enhanced the way people perform certain activities (e.g. shopping), find information, and interact with others. Today many people read/write reviews on merchant sites, blogs, forums, and social media before/after they purchase products or services. Examples include restaurant reviews on Yelp, product reviews on Amazon, hotel reviews on TripAdvisor, and many others. Such user-generated content contains rich information about user experiences and opinions, which allow future potential customers to make better decisions about spending their money, and also help merchants improve their products, services, and marketing. Since online reviews can directly influence customer purchase decisions, they are crucial to the success of businesses. While positive reviews with high ratings can yield financial gains, negative reviews can damage reputation and cause monetary loss. This effect is magnified as the information spreads through the Web (Hitlin 2003; Mendoza, Poblete, and Castillo 2010). As a result, online review systems are attractive targets for opinion fraud. Opinion fraud involves reviewers (often paid) writing bogus reviews (Kost May 2012; Copyright c © 2013, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Streitfeld August 2011). These spam reviews come in two flavors: defaming-spam which untruthfully vilifies, or hypespam that deceitfully promotes the target product. The opinion fraud detection problem is to spot the fake reviews in online sites, given all the reviews on the site, and for each review, its text, its author, the product it was written for, timestamp of posting, and its star-rating. Typically no user profile information is available (or is self-declared and cannot be trusted), while more side information for products (e.g. price, brand), and for reviews (e.g. number of (helpful) feedbacks) could be available depending on the site. Detecting opinion fraud, as defined above, is a non-trivial and challenging problem. Fake reviews are often written by experienced professionals who are paid to write high quality, believable reviews. As a result, it is difficult for an average potential customer to differentiate bogus reviews from truthful ones, just by looking at individual reviews text(Ott et al. 2011). 
As such, manual labeling of reviews is hard and ground truth information is often unavailable, which makes training supervised models less attractive for this problem. Summary of previous work. Previous attempts at solving the problem use several heuristics, such as duplicated reviews (Jindal and Liu 2008), or acquire bogus reviews from non-experts (Ott et al. 2011), to generate pseudo-ground truth, or a reference dataset. This data is then used for learning classification models together with carefully engineered features. One downside of such techniques is that they do not generalize: one needs to collect new data and train a new model for review data from a different domain, e.g., hotel vs. restaurant reviews. Moreover feature selection becomes a tedious sub-problem, as datasets from different domains might exhibit different characteristics. Other feature-based proposals include (Lim et al. 2010; Mukherjee, Liu, and Glance 2012). A large body of work on fraud detection relies on review text information (Jindal and Liu 2008; Ott et al. 2011; Feng, Banerjee, and Choi 2012) or behavioral evidence (Lim et al. 2010; Xie et al. 2012; Feng et al. 2012), and ignore the connectivity structure of review data. On the other hand, the network of reviewers and products contains rich information that implicitly represents correlations among these entities. The review network is also invaluable for detecting teams of fraudsters that operate collaboratively on targeted products. Our contributions. In this work we propose an unsuperProceedings of the Seventh International AAAI Conference on Weblogs and Social Media", "title": "" } ]
[ { "docid": "neg:1840222_0", "text": "Based on information-theoretic security, physical-layer security in an optical code division multiple access (OCDMA) system is analyzed. For the first time, the security leakage factor is employed to evaluate the physicallayer security level, and the safe receiving distance is used to measure the security transmission range. By establishing the wiretap channel model of a coherent OCDMA system, the influences of the extraction location, the extraction ratio, the number of active users, and the length of the code on the physical-layer security are analyzed quantitatively. The physical-layer security and reliability of the coherent OCDMA system are evaluated under the premise of satisfying the legitimate user’s bit error rate and the security leakage factor. The simulation results show that the number of users must be in a certain interval in order to meet the legitimate user’s reliability and security. Furthermore, the security performance of the coherent OCDMA-based wiretap channel can be improved by increasing the code length.", "title": "" }, { "docid": "neg:1840222_1", "text": "This paper presents a current-mode control non-inverting buck-boost converter. The proposed circuit is controlled by the current mode and operated in three operation modes which are buck, buck-boost, and boost mode. The operation mode is automatically determined by the ratio between the input and output voltages. The proposed circuit is simulated by HSPICE with 0.5 um standard CMOS parameters. Its input voltage range is 2.5–5 V, and the output voltage range is 1.5–5 V. The maximum efficiency is 92% when it operates in buck mode.", "title": "" }, { "docid": "neg:1840222_2", "text": "A model for evaluating the lifetime of closed electrical contacts was described. The model is based on the diffusion of oxidizing agent into the contact zone. The results showed that there is good agreement between the experimental data as reported by different authors in the literature and the derived expressions for the limiting value of contact lifetime.", "title": "" }, { "docid": "neg:1840222_3", "text": "In this paper, we investigate an angle of arrival (AoA) and angle of departure (AoD) estimation algorithm for sparse millimeter wave multiple-input multiple-output (MIMO) channels. The analytical channel model whose use we advocate here is the beam space (or virtual) MIMO channel representation. By leveraging the beam space MIMO concept, we characterize probabilistic channel priors under an analog precoding and combining constraints. This investigation motivates Bayesian inference approaches to virtual AoA and AoD estimation. We divide the estimation task into downlink sounding for AoA estimation and uplink sounding for AoD estimation. A belief propagation (BP)-type algorithm is adopted, leading to computationally efficient approximate message passing (AMP) and approximate log-likelihood ratio testing (ALLRT) algorithms. Numerical results demonstrate that the proposed algorithm outperforms the conventional AMP in terms of the AoA and AoD estimation accuracy for the sparse millimeter wave MIMO channel.", "title": "" }, { "docid": "neg:1840222_4", "text": "An otolaryngology phenol applicator kit can be successfully and safely used in the performance of chemical matricectomy. 
The applicator kit provides a convenient way to apply phenol to the nail matrix precisely and efficiently, whereas minimizing both the risk of application to nonmatrix surrounding soft tissue and postoperative recovery time.Given the smaller size of the foam-tipped applicator, we feel that this is a more precise tool than traditional cotton-tipped applicators for chemical matricectomy. Particularly with regard to lower extremity nail ablation and matricectomy, minimizing soft tissue inflammation could in turn reduce the risk of postoperative infections, decrease recovery time, as well and make for a more positive overall patient experience.", "title": "" }, { "docid": "neg:1840222_5", "text": "Today, not only Internet companies such as Google, Facebook or Twitter do have Big Data but also Enterprise Information Systems store an ever growing amount of data (called Big Enterprise Data in this paper). In a classical SAP system landscape a central data warehouse (SAP BW) is used to integrate and analyze all enterprise data. In SAP BW most of the business logic required for complex analytical tasks (e.g., a complex currency conversion) is implemented in the application layer on top of a standard relational database. While being independent from the underlying database when using such an architecture, this architecture has two major drawbacks when analyzing Big Enterprise Data: (1) algorithms in ABAP do not scale with the amount of data and (2) data shipping is required. To this end, we present a novel programming language called SQLScript to efficiently support complex and scalable analytical tasks inside SAP’s new main-memory database HANA. SQLScript provides two major extensions to the SQL dialect of SAP HANA: A functional and a procedural extension. While the functional extension allows the definition of scalable analytical tasks on Big Enterprise Data, the procedural extension provides imperative constructs to orchestrate the analytical tasks. The major contributions of this paper are two novel functional extensions: First, an extended version of the MapReduce programming model for supporting parallelizable user-defined functions (UDFs). Second, compared to recursion in the SQL standard, a generalized version of recursion to support graph analytics as well as machine learning tasks.", "title": "" }, { "docid": "neg:1840222_6", "text": "As a promising way for heterogeneous data analytics, consensus clustering has attracted increasing attention in recent decades. Among various excellent solutions, the co-association matrix based methods form a landmark, which redefines consensus clustering as a graph partition problem. Nevertheless, the relatively high time and space complexities preclude it from wide real-life applications. We, therefore, propose Spectral Ensemble Clustering (SEC) to leverage the advantages of co-association matrix in information integration but run more efficiently. We disclose the theoretical equivalence between SEC and weighted K-means clustering, which dramatically reduces the algorithmic complexity. We also derive the latent consensus function of SEC, which to our best knowledge is the first to bridge co-association matrix based methods to the methods with explicit global objective functions. Further, we prove in theory that SEC holds the robustness, generalizability, and convergence properties. We finally extend SEC to meet the challenge arising from incomplete basic partitions, based on which a row-segmentation scheme for big data clustering is proposed. 
Experiments on various real-world data sets in both ensemble and multi-view clustering scenarios demonstrate the superiority of SEC to some state-of-the-art methods. In particular, SEC seems to be a promising candidate for big data clustering.", "title": "" }, { "docid": "neg:1840222_7", "text": "In this paper athree-stage pulse generator architecture capable of generating high voltage, high current pulses is reported and system issues are presented. Design choices and system dynamics are explained both qualitatively and quantitatively with discussion sections followed by the presentation of closed-form expressions and numerical analysis that provide insight into the system's operation. Analysis targeted at optimizing performance focuses on diode opening switch pumping, energy efficiency, and compensation of parasitic reactances. A compact system based on these design guidelines has been built to output 8 kV, 5 ns pulses into 50 Ω. Output risetimes below 1 ns have been achieved using two different compression techniques. At only 1.5 kg, this light and compact system shows promise for a variety of pulsed power applications requiring the long lifetime, low jitter performance of a solid state pulse generator that can produce fast, high voltage pulses at high repetition rates.", "title": "" }, { "docid": "neg:1840222_8", "text": "Given a single column of values, existing approaches typically employ regex-like rules to detect errors by finding anomalous values inconsistent with others. Such techniques make local decisions based only on values in the given input column, without considering a more global notion of compatibility that can be inferred from large corpora of clean tables. We propose \\sj, a statistics-based technique that leverages co-occurrence statistics from large corpora for error detection, which is a significant departure from existing rule-based methods. Our approach can automatically detect incompatible values, by leveraging an ensemble of judiciously selected generalization languages, each of which uses different generalizations and is sensitive to different types of errors. Errors so detected are based on global statistics, which is robust and aligns well with human intuition of errors. We test \\sj on a large set of public Wikipedia tables, as well as proprietary enterprise Excel files. While both of these test sets are supposed to be of high-quality, \\sj makes surprising discoveries of over tens of thousands of errors in both cases, which are manually verified to be of high precision (over 0.98). Our labeled benchmark set on Wikipedia tables is released for future research.", "title": "" }, { "docid": "neg:1840222_9", "text": "In present-day high-performance electronic components, the generated heat loads result in unacceptably high junction temperatures and reduced component lifetimes. Thermoelectric modules can, in principle, enhance heat removal and reduce the temperatures of such electronic devices. However, state-of-the-art bulk thermoelectric modules have a maximum cooling flux qmax of only about 10 W cm(-2), while state-of-the art commercial thin-film modules have a qmax <100 W cm(-2). Such flux values are insufficient for thermal management of modern high-power devices. Here we show that cooling fluxes of 258 W cm(-2) can be achieved in thin-film Bi2Te3-based superlattice thermoelectric modules. 
These devices utilize a p-type Sb2Te3/Bi2Te3 superlattice and n-type δ-doped Bi2Te3-xSex, both of which are grown heteroepitaxially using metalorganic chemical vapour deposition. We anticipate that the demonstration of these high-cooling-flux modules will have far-reaching impacts in diverse applications, such as advanced computer processors, radio-frequency power devices, quantum cascade lasers and DNA micro-arrays.", "title": "" }, { "docid": "neg:1840222_10", "text": "Lewis G. Halsey is in the Department of Life Sciences, University of Roehampton, London, UK; Douglas Curran-Everett is in the Division of Biostatistics and Bioinformatics, National Jewish Health, and the Department of Biostatistics and Informatics, Colorado School of Public Health, University of Colorado Denver, Denver, Colorado, USA; Sarah L. Vowler is at Cancer Research UK Cambridge Institute, University of Cambridge, Cambridge, UK; and Gordon B. Drummond is at the University of Edinburgh, Edinburgh, UK. e-mail: l.halsey@roehampton.ac.uk The fickle P value generates irreproducible results", "title": "" }, { "docid": "neg:1840222_11", "text": "Instance-level object segmentation is an important yet under-explored task. Most of state-of-the-art methods rely on region proposal methods to extract candidate segments and then utilize object classification to produce final results. Nonetheless, generating reliable region proposals itself is a quite challenging and unsolved task. In this work, we propose a Proposal-Free Network (PFN) to address the instance-level object segmentation problem, which outputs the numbers of instances of different categories and the pixel-level information on i) the coordinates of the instance bounding box each pixel belongs to, and ii) the confidences of different categories for each pixel, based on pixel-to-pixel deep convolutional neural network. All the outputs together, by using any off-the-shelf clustering method for simple post-processing, can naturally generate the ultimate instance-level object segmentation results. The whole PFN can be easily trained without the requirement of a proposal generation stage. Extensive evaluations on the challenging PASCAL VOC 2012 semantic segmentation benchmark demonstrate the effectiveness of the proposed PFN solution without relying on any proposal generation methods.", "title": "" }, { "docid": "neg:1840222_12", "text": "Information security policy compliance is one of the key concerns that face organizations today. Although, technical and procedural security measures help improve information security, there is an increased need to accommodate human, social and organizational factors. While employees are considered the weakest link in information security domain, they also are assets that organizations need to leverage effectively. Employees' compliance with Information Security Policies (ISPs) is critical to the success of an information security program. The purpose of this research is to develop a measurement tool that provides better measures for predicting and explaining employees' compliance with ISPs by examining the role of information security awareness in enhancing employees' compliance with ISPs. The study is the first to address compliance intention from a users' perspective. 
Overall, analysis results indicate strong support for the proposed instrument and represent an early confirmation for the validation of the underlying theoretical model.", "title": "" }, { "docid": "neg:1840222_13", "text": "We describe and compare several methods for generating game character controllers that mimic the playing style of a particular human player, or of a population of human players, across video game levels. Similarity in playing style is measured through an evaluation framework, that compares the play trace of one or several human players with the punctuated play trace of an AI player. The methods that are compared are either hand-coded, direct (based on supervised learning) or indirect (based on maximising a similarity measure). We find that a method based on neuroevolution performs best both in terms of the instrumental similarity measure and in phenomenological evaluation by human spectators. A version of the classic platform game “Super Mario Bros” is used as the testbed game in this study but the methods are applicable to other games that are based on character movement in space.", "title": "" }, { "docid": "neg:1840222_14", "text": "In this work we present the design of a digitally controlled ring type oscillator in 0.5 μm CMOS technology for a low-cost and portable radio-frequency diathermy (RFD) device. The oscillator circuit is composed by a low frequency ring oscillator (LFRO), a voltage controlled ring oscillator (VCRO), and a logic control. The digital circuit generates an input signal for the LFO, which generates a voltage ramp that controls the oscillating output signal of the VCRO in the range of 500 KHz to 1 MHz. Simulation results show that the proposed circuit exhibits controllable output characteristics in the range of 500 KHz–1 MHz, with low power consumption and low phase noise, making it suitable for a portable RFD device.", "title": "" }, { "docid": "neg:1840222_15", "text": "The growing prevalence of e-learning systems and on-line courses has made educational material widely accessible to students of varying abilities, backgrounds and styles. There is thus a growing need to accomodate for individual differences in such e-learning systems. This paper presents a new algorithm for personliazing educational content to students that combines collaborative filtering algorithms with social choice theory. The algorithm constructs a “difficulty” ranking over questions for a target student by aggregating the ranking of similar students, as measured by different aspects of their performance on common past questions, such as grades, number of retries, and time spent solving questions. It infers a difficulty ranking directly over the questions for a target student, rather than ordering them according to predicted performance, which is prone to error. The algorithm was tested on two large real world data sets containing tens of thousands of students and a million records. Its performance was compared to a variety of personalization methods as well as a non-personalized method that relied on a domain expert. It was able to significantly outperform all of these approaches according to standard information retrieval metrics. 
Our approach can potentially be used to support teachers in tailoring problem sets and exams to individual students and students in informing them about areas they may need to strengthen.", "title": "" }, { "docid": "neg:1840222_16", "text": "To acquire accurate, real-time hyperspectral images with high spatial resolution, we develop two types of low-cost, lightweight Whisk broom hyperspectral sensors that can be loaded onto lightweight unmanned autonomous vehicle (UAV) platforms. A system is composed of two Mini-Spectrometers, a polygon mirror, references for sensor calibration, a GPS sensor, a data logger and a power supply. The acquisition of images with high spatial resolution is realized by a ground scanning along a direction perpendicular to the flight direction based on the polygon mirror. To cope with the unstable illumination condition caused by the low-altitude observation, skylight radiation and dark current are acquired in real-time by the scanning structure. Another system is composed of 2D optical fiber array connected to eight Mini-Spectrometers and a telephoto lens, a convex lens, a micro mirror, a GPS sensor, a data logger and a power supply. The acquisition of images is realized by a ground scanning based on the rotation of the micro mirror.", "title": "" }, { "docid": "neg:1840222_17", "text": "ID3 algorithm was a classic classification of data mining. It always selected the attribute with many values. The attribute with many values wasn't the correct one, and it always created wrong classification. In the application of intrusion detection system, it would created fault alarm and omission alarm. To this fault, an improved decision tree algorithm was proposed. Though improvement of information gain formula, the correct attribute would be got. The decision tree was created after the data collected classified correctly. The tree would be not high and has a few of branches. The rule set would be got based on the decision tree. Experimental results showed the effectiveness of the algorithm, false alarm rate and omission rate decreased, increasing the detection rate and reducing the space consumption.", "title": "" }, { "docid": "neg:1840222_18", "text": "In this paper, we propose a new halftoning scheme that preserves the structure and tone similarities of images while maintaining the simplicity of Floyd-Steinberg error diffusion. Our algorithm is based on the Floyd-Steinberg error diffusion algorithm, but the threshold modulation part is modified to improve the over-blurring issue of the Floyd-Steinberg error diffusion algorithm. By adding some structural information on images obtained using the Laplacian operator to the quantizer thresholds, the structural details in the textured region can be preserved. The visual artifacts of the original error diffusion that is usually visible in the uniform region is greatly reduced by adding noise to the thresholds. This is especially true for the low contrast region because most existing error diffusion algorithms cannot preserve structural details but our algorithm preserves them clearly using threshold modulation. Our algorithm has been evaluated using various types of images including some with the low contrast region and assessed numerically using the MSSIM measure with other existing state-of-art halftoning algorithms. 
The results show that our method performs better than existing approaches both in the textured region and in the uniform region with the faster computation speed.", "title": "" }, { "docid": "neg:1840222_19", "text": "In this work, classification of cellular structures in the high resolutional histopathological images and the discrimination of cellular and non-cellular structures have been investigated. The cell classification is a very exhaustive and time-consuming process for pathologists in medicine. The development of digital imaging in histopathology has enabled the generation of reasonable and effective solutions to this problem. Morever, the classification of digital data provides easier analysis of cell structures in histopathological data. Convolutional neural network (CNN), constituting the main theme of this study, has been proposed with different spatial window sizes in RGB color spaces. Hence, to improve the accuracies of classification results obtained by supervised learning methods, spatial information must also be considered. So, spatial dependencies of cell and non-cell pixels can be evaluated within different pixel neighborhoods in this study. In the experiments, the CNN performs superior than other pixel classification methods including SVM and k-Nearest Neighbour (k-NN). At the end of this paper, several possible directions for future research are also proposed.", "title": "" } ]
1840223
Comparison between security majors in virtual machine and linux containers
[ { "docid": "pos:1840223_0", "text": "A virtualized system includes a new layer of software, the virtual machine monitor. The VMM's principal role is to arbitrate accesses to the underlying physical host platform's resources so that multiple operating systems (which are guests of the VMM) can share them. The VMM presents to each guest OS a set of virtual platform interfaces that constitute a virtual machine (VM). Once confined to specialized, proprietary, high-end server and mainframe systems, virtualization is now becoming more broadly available and is supported in off-the-shelf systems based on Intel architecture (IA) hardware. This development is due in part to the steady performance improvements of IA-based systems, which mitigates traditional virtualization performance overheads. Intel virtualization technology provides hardware support for processor virtualization, enabling simplifications of virtual machine monitor software. Resulting VMMs can support a wider range of legacy and future operating systems while maintaining high performance.", "title": "" } ]
[ { "docid": "neg:1840223_0", "text": "Building Information Modeling (BIM) provides an integrated 3D environment to manage large-scale engineering projects. The Architecture, Engineering and Construction (AEC) industry explores 4D visualizations over these datasets for virtual construction planning. However, existing solutions lack adequate visual mechanisms to inspect the underlying schedule and make inconsistencies readily apparent. The goal of this paper is to apply best practices of information visualization to improve 4D analysis of construction plans. We first present a review of previous work that identifies common use cases and limitations. We then consulted with AEC professionals to specify the main design requirements for such applications. These guided the development of CasCADe, a novel 4D visualization system where task sequencing and spatio-temporal simultaneity are immediately apparent. This unique framework enables the combination of diverse analytical features to create an information-rich analysis environment. We also describe how engineering collaborators used CasCADe to review the real-world construction plans of an Oil & Gas process plant. The system made evident schedule uncertainties, identified work-space conflicts and helped analyze other constructability issues. The results and contributions of this paper suggest new avenues for future research in information visualization for the AEC industry.", "title": "" }, { "docid": "neg:1840223_1", "text": "Tooth abutments can be prepared to receive fixed dental prostheses with different types of finish lines. The literature reports different complications arising from tooth preparation techniques, including gingival recession. Vertical preparation without a finish line is a technique whereby the abutments are prepared by introducing a diamond rotary instrument into the sulcus to eliminate the cementoenamel junction and to create a new prosthetic cementoenamel junction determined by the prosthetic margin. This article describes 2 patients whose dental abutments were prepared to receive ceramic restorations using vertical preparation without a finish line.", "title": "" }, { "docid": "neg:1840223_2", "text": "The previous chapters gave an insightful introduction into the various facets of Business Process Management. We now share a rich understanding of the essential ideas behind designing and managing processes for organizational purposes. We have also learned about the various streams of research and development that have influenced contemporary BPM. As a matter of fact, BPM has become a holistic management discipline. As such, it requires that a plethora of facets needs to be addressed for its successful und sustainable application. This chapter provides a framework that consolidates and structures the essential factors that constitute BPM as a whole. Drawing from research in the field of maturity models, we suggest six core elements of BPM: strategic alignment, governance, methods, information technology, people, and culture. These six elements serve as the structure for this BPM Handbook. 1 Why Looking for BPM Core Elements? A recent global study by Gartner confirmed the significance of BPM with the top issue for CIOs identified for the sixth year in a row being the improvement of business processes (Gartner 2010). While such an interest in BPM is beneficial for professionals in this field, it also increases the expectations and the pressure to deliver on the promises of the process-centered organization. 
This context demands a sound understanding of how to approach BPM and a framework that decomposes the complexity of a holistic approach such as Business Process Management. A framework highlighting essential building blocks of BPM can particularly serve the following purposes: Project and Program Management: How can all relevant issues within a BPM approach be safeguarded? When implementing a BPM initiative, either as a project or as a program, is it essential to individually adjust the scope and have different BPM flavors in different areas of the organization? What competencies are relevant? What approach fits best with the culture and BPM history of the organization? What is it that needs to be taken into account “beyond modeling”? People for one thing play an important role like Hammer has pointed out in his chapter (Hammer 2010), but what might be further elements of relevance? In order to find answers to these questions, a framework articulating the core elements of BPM provides invaluable advice. Vendor Management: How can service and product offerings in the field of BPM be evaluated in terms of their overall contribution to successful BPM? What portfolio of solutions is required to address the key issues of BPM, and to what extent do these solutions need to be sourced from outside the organization? There is, for example, a large list of providers of process-aware information systems, change experts, BPM training providers, and a variety of BPM consulting services. How can it be guaranteed that these offerings cover the required capabilities? In fact, the vast number of BPM offerings does not meet the requirements as distilled in this Handbook; see for example, Hammer (2010), Davenport (2010), Harmon (2010), and Rummler and Ramias (2010). It is also for the purpose of BPM make-or-buy decisions and the overall vendor management, that a framework structuring core elements of BPM is highly needed. Complexity Management: How can the complexity that results from the holistic and comprehensive nature of BPM be decomposed so that it becomes manageable? How can a number of coexisting BPM initiatives within one organization be synchronized? An overarching picture of BPM is needed in order to provide orientation for these initiatives. Following a “divide-and-conquer” approach, a shared understanding of the core elements can help to focus on special factors of BPM. For each element, a specific analysis could be carried out involving experts from the various fields. Such an assessment should be conducted by experts with the required technical, business-oriented, and socio-cultural know-how. Standards Management: What elements of BPM need to be standardized across the organization? What BPM elements need to be mandated for every BPM initiative? What BPM elements can be configured individually within each initiative? A comprehensive framework allows an element-by-element decision for the degrees of standardization that are required.
For example, it might be decided that a company-wide process model repository will be “enforced” on all BPM initiatives, while performance management and cultural change will be decentralized activities. Strategy Management: What is the BPM strategy of the organization? How does this strategy materialize in a BPM roadmap? How will the naturally limited attention of all involved stakeholders be distributed across the various BPM elements? How do we measure progression in a BPM initiative (“BPM audit”)?", "title": "" }, { "docid": "neg:1840223_3", "text": "We investigate a novel and important application domain for deep RL: network routing. The question of whether/when traditional network protocol design, which relies on the application of algorithmic insights by human experts, can be replaced by a data-driven approach has received much attention recently. We explore this question in the context of the, arguably, most fundamental networking task: routing. Can ideas and techniques from machine learning be leveraged to automatically generate “good” routing configurations? We observe that the routing domain poses significant challenges for data-driven network protocol design and report on preliminary results regarding the power of data-driven routing. Our results suggest that applying deep reinforcement learning to this context yields high performance and is thus a promising direction for further research. We outline a research agenda for data-driven routing.", "title": "" }, { "docid": "neg:1840223_4", "text": "This paper proposes a carrier-based pulse width modulation (CB-PWM) method with synchronous switching technique for a Vienna rectifier. In this paper, a Vienna rectifier is one of the 3-level converter topologies. It is similar to a 3-level T-type topology the used back-to-back switches. When CB-PWM switching method is used, a Vienna rectifier is operated with six PWM signals. On the other hand, when the back-to-back switches are synchronized, PWM signals can be reduced to three from six. However, the synchronous switching method has a problem that the current distortion around zero-crossing point is worse than one of the conventional CB-PWM switching method. To improve current distortions, this paper proposes a reactive current injection technique. The performance and effectiveness of the proposed synchronous switching method are verified by simulation with a 5-kW Vienna rectifier.", "title": "" }, { "docid": "neg:1840223_5", "text": "In this paper, we discuss our ongoing efforts to construct a scientific paper browsing system that helps users to read and understand advanced technical content distributed in PDF. Since PDF is a format specifically designed for printing, layout and logical structures of documents are indistinguishably embedded in the file. It requires much effort to extract natural language text from PDF files, and reversely, display semantic annotations produced by NLP tools on the original page layout. In our browsing system, we tackle these issues caused by the gap between printable document and plain text. Our system provides ways to extract natural language sentences from PDF files together with their logical structures, and also to map arbitrary textual spans to their corresponding regions on page images. 
We setup a demonstration system using papers published in ACL anthology and demonstrate the enhanced search and refined recommendation functions which we plan to make widely available to NLP researchers.", "title": "" }, { "docid": "neg:1840223_6", "text": "Background. Office of Academic Affairs (OAA), Office of Student Life (OSL) and Information Technology Helpdesk (ITD) are support functions within a university which receives hundreds of email messages on the daily basis. A large percentage of emails received by these departments are frequent and commonly used queries or request for information. Responding to every query by manually typing is a tedious and time consuming task and an automated approach for email response suggestion can save lot of time.", "title": "" }, { "docid": "neg:1840223_7", "text": "The notion of a program slice, originally introduced by Mark Weiser, is useful in program debugging, automatic parallelization, and program integration. A slice of a program is taken with respect to a program point p and a variable x; the slice consists of all statements of the program that might affect the value of x at point p. This paper concerns the problem of interprocedural slicing—generating a slice of an entire program, where the slice crosses the boundaries of procedure calls. To solve this problem, we introduce a new kind of graph to represent programs, called a system dependence graph, which extends previous dependence representations to incorporate collections of procedures (with procedure calls) rather than just monolithic programs. Our main result is an algorithm for interprocedural slicing that uses the new representation. (It should be noted that our work concerns a somewhat restricted kind of slice: rather than permitting a program to be sliced with respect to program point p and an arbitrary variable, a slice must be taken with respect to a variable that is defined or used at p.)\nThe chief difficulty in interprocedural slicing is correctly accounting for the calling context of a called procedure. To handle this problem, system dependence graphs include some data dependence edges that represent transitive dependences due to the effects of procedure calls, in addition to the conventional direct-dependence edges. These edges are constructed with the aid of an auxiliary structure that represents calling and parameter-linkage relationships. This structure takes the form of an attribute grammar. The step of computing the required transitive-dependence edges is reduced to the construction of the subordinate characteristic graphs for the grammar's nonterminals.", "title": "" }, { "docid": "neg:1840223_8", "text": "BACKGROUND\nDermoscopy is useful in evaluating skin tumours, but its applicability also extends into the field of inflammatory skin disorders. Discoid lupus erythematosus (DLE) represents the most common subtype of cutaneous lupus erythematosus. 
While dermoscopy and videodermoscopy have been shown to aid the differentiation of scalp DLE from other causes of scarring alopecia, limited data exist concerning dermoscopic criteria of DLE in other locations, such as the face, trunk and extremities.\n\n\nOBJECTIVE\nTo describe the dermoscopic criteria observed in a series of patients with DLE located on areas other than the scalp, and to correlate them to the underlying histopathological alterations.\n\n\nMETHODS\nDLE lesions located on the face, trunk and extremities were dermoscopically and histopathologically examined. Selection of the dermoscopic variables included in the evaluation process was based on data in the available literature on DLE of the scalp and on our preliminary observations. Analysis of data was done with SPSS analysis software.\n\n\nRESULTS\nFifty-five lesions from 37 patients with DLE were included in the study. Perifollicular whitish halo, follicular keratotic plugs and telangiectasias were the most common dermoscopic criteria. Statistical analysis revealed excellent correlation between dermoscopic and histopathological findings. Notably, a time-related alteration of dermoscopic features was observed.\n\n\nCONCLUSIONS\nThe present study provides new insights into the dermoscopic variability of DLE located on the face, trunk and extremities.", "title": "" }, { "docid": "neg:1840223_9", "text": "In this paper, we survey the techniques for image-based rendering. Unlike traditional 3D computer graphics in which 3D geometry of the scene is known, image-based rendering techniques render novel views directly from input images. Previous image-based rendering techniques can be classified into three categories according to how much geometric information is used: rendering without geometry, rendering with implicit geometry (i.e., correspondence), and rendering with explicit geometry (either with approximate or accurate geometry). We discuss the characteristics of these categories and their representative methods. The continuum between images and geometry used in image-based rendering techniques suggests that image-based rendering with traditional 3D graphics can be united in a joint image and geometry space.", "title": "" }, { "docid": "neg:1840223_10", "text": "Flow cytometry is a sophisticated instrument measuring multiple physical characteristics of a single cell such as size and granularity simultaneously as the cell flows in suspension through a measuring device. Its working depends on the light scattering features of the cells under investigation, which may be derived from dyes or monoclonal antibodies targeting either extracellular molecules located on the surface or intracellular molecules inside the cell. This approach makes flow cytometry a powerful tool for detailed analysis of complex populations in a short period of time. This review covers the general principles and selected applications of flow cytometry such as immunophenotyping of peripheral blood cells, analysis of apoptosis and detection of cytokines. Additionally, this report provides a basic understanding of flow cytometry technology essential for all users as well as the methods used to analyze and interpret the data. 
Moreover, recent progresses in flow cytometry have been discussed in order to give an opinion about the future importance of this technology.", "title": "" }, { "docid": "neg:1840223_11", "text": "This study examined the effects of a virtual reality distraction intervention on chemotherapy-related symptom distress levels in 16 women aged 50 and older. A cross-over design was used to answer the following research questions: (1) Is virtual reality an effective distraction intervention for reducing chemotherapy-related symptom distress levels in older women with breast cancer? (2) Does virtual reality have a lasting effect? Chemotherapy treatments are intensive and difficult to endure. One way to cope with chemotherapy-related symptom distress is through the use of distraction. For this study, a head-mounted display (Sony PC Glasstron PLM - S700) was used to display encompassing images and block competing stimuli during chemotherapy infusions. The Symptom Distress Scale (SDS), Revised Piper Fatigue Scale (PFS), and the State Anxiety Inventory (SAI) were used to measure symptom distress. For two matched chemotherapy treatments, one pre-test and two post-test measures were employed. Participants were randomly assigned to receive the VR distraction intervention during one chemotherapy treatment and received no distraction intervention (control condition) during an alternate chemotherapy treatment. Analysis using paired t-tests demonstrated a significant decrease in the SAI (p = 0.10) scores immediately following chemotherapy treatments when participants used VR. No significant changes were found in SDS or PFS values. There was a consistent trend toward improved symptoms on all measures 48 h following completion of chemotherapy. Evaluation of the intervention indicated that women thought the head mounted device was easy to use, they experienced no cybersickness, and 100% would use VR again.", "title": "" }, { "docid": "neg:1840223_12", "text": "The need for automated grading tools for essay writing and open-ended assignments has received increasing attention due to the unprecedented scale of Massive Online Courses (MOOCs) and the fact that more and more students are relying on computers to complete and submit their school work. In this paper, we propose an efficient memory networks-powered automated grading model. The idea of our model stems from the philosophy that with enough graded samples for each score in the rubric, such samples can be used to grade future work that is found to be similar. For each possible score in the rubric, a student response graded with the same score is collected. These selected responses represent the grading criteria specified in the rubric and are stored in the memory component. Our model learns to predict a score for an ungraded response by computing the relevance between the ungraded response and each selected response in memory. The evaluation was conducted on the Kaggle Automated Student Assessment Prize (ASAP) dataset. The results show that our model achieves state-of-the-art performance in 7 out of 8 essay sets.", "title": "" }, { "docid": "neg:1840223_13", "text": "Since a parallel structure is a closed kinematics chain, all legs are connected from the origin of the tool point by a parallel connection. This connection allows a higher precision and a higher velocity. Parallel kinematic manipulators have better performance compared to serial kinematic manipulators in terms of a high degree of accuracy, high speeds or accelerations and high stiffness. 
Therefore, they seem perfectly suitable for industrial high-speed applications, such as pick-and-place or micro and high-speed machining. They are used in many fields such as flight simulation systems, manufacturing and medical applications. One of the most popular parallel manipulators is the general purpose 6 degree of freedom (DOF) Stewart Platform (SP) proposed by Stewart in 1965 as a flight simulator (Stewart, 1965). It consists of a top plate (moving platform), a base plate (fixed base), and six extensible legs connecting the top plate to the bottom plate. SP employing the same architecture of the Gough mechanism (Merlet, 1999) is the most studied type of parallel manipulators. This is also known as Gough–Stewart platforms in literature. Complex kinematics and dynamics often lead to model simplifications decreasing the accuracy. In order to overcome this problem, accurate kinematic and dynamic identification is needed. The kinematic and dynamic modeling of SP is extremely complicated in comparison with serial robots. Typically, the robot kinematics can be divided into forward kinematics and inverse kinematics. For a parallel manipulator, inverse kinematics is straight forward and there is no complexity deriving the equations. However, forward kinematics of SP is very complicated and difficult to solve since it requires the solution of many non-linear equations. Moreover, the forward kinematic problem generally has more than one solution. As a result, most research papers concentrated on the forward kinematics of the parallel manipulators (Bonev and Ryu, 2000; Merlet, 2004; Harib and Srinivasan, 2003; Wang, 2007). For the design and the control of the SP manipulators, the accurate dynamic model is very essential. The dynamic modeling of parallel manipulators is quite complicated because of their closed-loop structure, coupled relationship between system parameters, high nonlinearity in system dynamics and kinematic constraints. Robot dynamic modeling can be also divided into two topics: inverse and forward dynamic model. The inverse dynamic model is important for system control while the forward model is used for system simulation. To obtain the dynamic model of parallel manipulators, there are many valuable studies published by many researches in the literature. The dynamic analysis of parallel manipulators has been traditionally performed through several different methods such as", "title": "" }, { "docid": "neg:1840223_14", "text": "The problem of consistently estimating the sparsity pattern of a vector beta* isin Rp based on observations contaminated by noise arises in various contexts, including signal denoising, sparse approximation, compressed sensing, and model selection. We analyze the behavior of l1-constrained quadratic programming (QP), also referred to as the Lasso, for recovering the sparsity pattern. Our main result is to establish precise conditions on the problem dimension p, the number k of nonzero elements in beta*, and the number of observations n that are necessary and sufficient for sparsity pattern recovery using the Lasso. We first analyze the case of observations made using deterministic design matrices and sub-Gaussian additive noise, and provide sufficient conditions for support recovery and linfin-error bounds, as well as results showing the necessity of incoherence and bounds on the minimum value. We then turn to the case of random designs, in which each row of the design is drawn from a N (0, Sigma) ensemble. 
For a broad class of Gaussian ensembles satisfying mutual incoherence conditions, we compute explicit values of thresholds 0 < θℓ(Σ) ≤ θu(Σ) < +∞ with the following properties: for any δ > 0, if n > 2(θu + δ)k log(p − k), then the Lasso succeeds in recovering the sparsity pattern with probability converging to one for large problems, whereas for n < 2(θℓ − δ)k log(p − k), then the probability of successful recovery converges to zero. For the special case of the uniform Gaussian ensemble (Σ = I_{p×p}), we show that θℓ = θu = 1, so that the precise threshold n = 2k log(p − k) is exactly determined.", "title": "" }, { "docid": "neg:1840223_15", "text": "Personalized tag recommendation systems recommend a list of tags to a user when he is about to annotate an item. It exploits the individual preference and the characteristic of the items. Tensor factorization techniques have been applied to many applications, such as tag recommendation. Models based on Tucker Decomposition can achieve good performance but require a lot of computation power. On the other hand, models based on Canonical Decomposition can run in linear time and are more feasible for online recommendation. In this paper, we propose a novel method for personalized tag recommendation, which can be considered as a nonlinear extension of Canonical Decomposition. Different from linear tensor factorization, we exploit Gaussian radial basis function to increase the model’s capacity. The experimental results show that our proposed method outperforms the state-of-the-art methods for tag recommendation on real datasets and perform well even with a small number of features, which verifies that our models can make better use of features.", "title": "" }, { "docid": "neg:1840223_16", "text": "Block-based local binary patterns a.k.a. enhanced local binary patterns (ELBPs) have proven to be a highly discriminative descriptor for face recognition and image retrieval. Since this descriptor is mainly composed by histograms, little work (if any) has been done for selecting its relevant features (either the bins or the blocks). In this paper, we address feature selection for both the classic ELBP representation and the recently proposed color quaternionic LBP (QLBP). We introduce a filter method for the automatic weighting of attributes or blocks using an improved version of the margin-based iterative search Simba algorithm. This new improved version introduces two main modifications: (i) the hypothesis margin of a given instance is computed by taking into account the K-nearest neighboring examples within the same class as well as the K-nearest neighboring examples with a different label; (ii) the distances between samples and their nearest neighbors are computed using the weighted χ² distance instead of the Euclidean one. This algorithm has been compared favorably with several competing feature selection algorithms including the Euclidean-based Simba as well as variance and Fisher score algorithms giving higher performances. The proposed method is useful for other descriptors that are formed by histograms. Experimental results show that the QLBP descriptor allows an improvement of the accuracy in discriminating faces compared with the ELBP. 
They also show that the obtained selection (attributes or blocks) can either improve recognition performance or maintain it with a significant reduction in the descriptor size.", "title": "" }, { "docid": "neg:1840223_17", "text": "In the last few years the efficiency of secure multi-party computation (MPC) increased in several orders of magnitudes. However, this alone might not be enough if we want MPC protocols to be used in practice. A crucial property that is needed in many applications is that everyone can check that a given (secure) computation was performed correctly – even in the extreme case where all the parties involved in the computation are corrupted, and even if the party who wants to verify the result was not participating. This is especially relevant in the clients-servers setting, where many clients provide input to a secure computation performed by a few servers. An obvious example of this is electronic voting, but also in many types of auctions one may want independent verification of the result. Traditionally, this is achieved by using non-interactive zero-knowledge proofs during the computation. A recent trend in MPC protocols is to have a more expensive preprocessing phase followed by a very efficient online phase, e.g., the recent so-called SPDZ protocol by Damg̊ard et al. Applications such as voting and some auctions are perfect use-case for these protocols, as the parties usually know well in advance when the computation will take place, and using those protocols allows us to use only cheap information-theoretic primitives in the actual computation. Unfortunately no protocol of the SPDZ type supports an audit phase. In this paper, we show how to achieve efficient MPC with a public audit. We formalize the concept of publicly auditable secure computation and provide an enhanced version of the SPDZ protocol where, even if all the servers are corrupted, anyone with access to the transcript of the protocol can check that the output is indeed correct. Most importantly, we do so without significantly compromising the performance of SPDZ i.e. our online phase has complexity approximately twice that of SPDZ.", "title": "" }, { "docid": "neg:1840223_18", "text": "The expression and experience of human behavior are complex and multimodal and characterized by individual and contextual heterogeneity and variability. Speech and spoken language communication cues offer an important means for measuring and modeling human behavior. Observational research and practice across a variety of domains from commerce to healthcare rely on speech- and language-based informatics for crucial assessment and diagnostic information and for planning and tracking response to an intervention. In this paper, we describe some of the opportunities as well as emerging methodologies and applications of human behavioral signal processing (BSP) technology and algorithms for quantitatively understanding and modeling typical, atypical, and distressed human behavior with a specific focus on speech- and language-based communicative, affective, and social behavior. We describe the three important BSP components of acquiring behavioral data in an ecologically valid manner across laboratory to real-world settings, extracting and analyzing behavioral cues from measured data, and developing models offering predictive and decision-making support. We highlight both the foundational speech and language processing building blocks as well as the novel processing and modeling opportunities. 
Using examples drawn from specific real-world applications ranging from literacy assessment and autism diagnostics to psychotherapy for addiction and marital well being, we illustrate behavioral informatics applications of these signal processing techniques that contribute to quantifying higher level, often subjectively described, human behavior in a domain-sensitive fashion.", "title": "" }, { "docid": "neg:1840223_19", "text": "We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.", "title": "" } ]
1840224
First Step Towards End-to-End Parametric TTS Synthesis: Generating Spectral Parameters with Neural Attention
[ { "docid": "pos:1840224_0", "text": "Statistical parametric speech synthesis (SPSS) using deep neural networks (DNNs) has shown its potential to produce naturally-sounding synthesized speech. However, there are limitations in the current implementation of DNN-based acoustic modeling for speech synthesis, such as the unimodal nature of its objective function and its lack of ability to predict variances. To address these limitations, this paper investigates the use of a mixture density output layer. It can estimate full probability density functions over real-valued output features conditioned on the corresponding input features. Experimental results in objective and subjective evaluations show that the use of the mixture density output layer improves the prediction accuracy of acoustic features and the naturalness of the synthesized speech.", "title": "" } ]
[ { "docid": "neg:1840224_0", "text": "A circularly polarized microstrip array antenna is proposed for Ka-band satellite applications. The antenna element consists of an L-shaped patch with parasitic circular-ring radiator. A sequentially rotated 2×2 antenna array exhibits a wideband 3-dB axial ratio bandwidth of 20.6% (25.75 GHz - 31.75 GHz) and 2;1-VSWR bandwidth of 24.0% (25.5 GHz - 32.5 GHz). A boresight gain of 10-11.8 dBic is achieved across a frequency range from 26 GHz to 32 GHz. An 8×8 antenna array exhibits a boresight gain of greater than 24 dBic over 27.25 GHz-31.25 GHz.", "title": "" }, { "docid": "neg:1840224_1", "text": "This article assumes that brands should be managed as valuable, long-term corporate assets. It is proposed that for a true brand asset mindset to be achieved, the relationship between brand loyalty and brand value needs to be recognised within the management accounting system. It is also suggested that strategic brand management is achieved by having a multi-disciplinary focus, which is facilitated by a common vocabulary. This article seeks to establish the relationships between the constructs and concepts of branding, and to provide a framework and vocabulary that aids effective communication between the functions of accounting and marketing. Performance measures for brand management are also considered, and a model for the management of brand equity is provided. Very simply, brand description (or identity or image) is tailored to the needs and wants of a target market using the marketing mix of product, price, place, and promotion. The success or otherwise of this process determines brand strength or the degree of brand loyalty. A brand's value is determined by the degree of brand loyalty, as this implies a guarantee of future cash flows. Feldwick considered that using the term brand equity creates the illusion that an operational relationship exists between brand description, brand strength and brand value that cannot be demonstrated to operate in practice. This is not surprising, given that brand description and brand strength are, broadly speaking, within the remit of marketers and brand value has been considered largely an accounting issue. However, for brands to be managed strategically as long-term assets, the relationship outlined in Figure 1 needs to be operational within the management accounting system. The efforts of managers of brands could be reviewed and assessed by the measurement of brand strength and brand value, and brand strategy modified accordingly. Whilst not a simple process, the measurement of outcomes is useful as part of a range of diagnostic tools for management. This is further explored in the summary discussion. Whilst there remains a diversity of opinion on the definition and basis of brand equity, most approaches consider brand equity to be a strategic issue, albeit often implicitly. The following discussion explores the range of interpretations of brand equity, showing how they relate to Feldwick's (1996) classification. Ambler and Styles (1996) suggest that managers of brands choose between taking profits today or storing them for the future, with brand equity being the `̀ . . . store of profits to be realised at a later date.'' Their definition follows Srivastava and Shocker (1991) with brand equity suggested as; . . . the aggregation of all accumulated attitudes and behavior patterns in the extended minds of consumers, distribution channels and influence agents, which will enhance future profits and long term cash flow. 
This definition of brand equity distinguishes the brand asset from its valuation, and falls into Feldwick's (1996) brand strength category of brand equity. This approach is intrinsically strategic in nature, with the emphasis away from short-term profits. Davis (1995) also emphasises the strategic importance of brand equity when he defines brand value (one form of brand equity) as ". . . the potential strategic contributions and benefits that a brand can make to a company." In this definition, brand value is the resultant form of brand equity in Figure 1, or the outcome of consumer-based brand equity. Keller (1993) also takes the consumer-based brand strength approach to brand equity, suggesting that brand equity represents a condition in which the consumer is familiar with the brand and recalls some favourable, strong and unique brand associations. Hence, there is a differential effect of brand knowledge on consumer response to the marketing of a brand. This approach is aligned to the relationship described in Figure 1, where brand strength is a function of brand description. Winters (1991) relates brand equity to added value by suggesting that brand equity involves the value added to a product by consumers' associations and perceptions of a particular brand name. It is unclear in what way added value is being used, but brand equity fits the categories of brand description and brand strength as outlined above. Leuthesser (1988) offers a broad definition of brand equity as: the set of associations and behaviour on the part of a brand's customers, channel members and parent corporation that permits the brand to earn greater volume or greater margins than it could without the brand name. This definition covers Feldwick's classifications of brand description and brand strength implying a similar relationship to that outlined in Figure 1. The key difference to Figure 1 is that the outcome of brand strength is not specified as brand value, but implies market share, and profit as outcomes. Marketers tend to describe, rather than ascribe a figure to, the outcomes of brand strength. Pitta and Katsanis (1995) suggest that brand equity increases the probability of brand choice, leads to brand loyalty and "insulates the brand from a measure of competitive threats." Aaker (1991) suggests that strong brands will usually provide higher profit margins and better access to distribution channels, as well as providing a broad platform for product line extensions. Brand extension[1] is a commonly cited advantage of high brand equity, with Dacin and Smith (1994) and Keller and Aaker (1992) suggesting that successful brand extensions can also build brand equity. Loken and John (1993) and Aaker (1993) advise caution in that poor brand extensions can erode brand equity. [Figure 1: The brand equity chain.] Farquhar (1989) suggests a relationship between high brand equity and market power asserting that: The competitive advantage of firms that have brands with high equity includes the opportunity for successful extensions, resilience against competitors' promotional pressures, and creation of barriers to competitive entry. This relationship is summarised in Figure 2. Figure 2 indicates that there can be more than one outcome determined by brand strength apart from brand value. It should be noted that it is argued by Wood (1999) that brand value measurements could be used as an indicator of market power.
Achieving a high degree of brand strength may be considered an important objective for managers of brands. If we accept that the relationships highlighted in Figures 1 and 2 are something that we should be aiming for, then it is logical to focus our attention on optimising brand description. This requires a rich understanding of the brand construct itself. Yet, despite an abundance of literature, the definitive brand construct has yet to be produced. Subsequent discussion explores the brand construct itself, and highlights the specific relationship between brands and added value. This relationship is considered to be key to the variety of approaches to brand definition within marketing, and is currently an area of incompatibility between marketing and accounting.", "title": "" }, { "docid": "neg:1840224_2", "text": "We propose a new tree-based ORAM scheme called Circuit ORAM. Circuit ORAM makes both theoretical and practical contributions. From a theoretical perspective, Circuit ORAM shows that the well-known Goldreich-Ostrovsky logarithmic ORAM lower bound is tight under certain parameter ranges, for several performance metrics. Therefore, we are the first to give an answer to a theoretical challenge that remained open for the past twenty-seven years. Second, Circuit ORAM earns its name because it achieves (almost) optimal circuit size both in theory and in practice for realistic choices of block sizes. We demonstrate compelling practical performance and show that Circuit ORAM is an ideal candidate for secure multi-party computation applications.", "title": "" }, { "docid": "neg:1840224_3", "text": "A novel type of dual circular polarizer for simultaneously receiving and transmitting right-hand and left-hand circularly polarized waves is developed and tested. It consists of an H-plane T junction of rectangular waveguide, one circular waveguide as an E-plane arm located on top of the junction, and two metallic pins used for matching. The theoretical analysis and design of the three-physical-port and four-mode polarizer were researched by solving the Scattering Matrix of the network and using a full-wave electromagnetic simulation tool. The optimized polarizer has the advantages of a very compact size with a volume smaller than 0.6λ³, low complexity and manufacturing cost. A couple of the polarizers have been manufactured and tested, and the experimental results are basically consistent with the theories.", "title": "" }, { "docid": "neg:1840224_4", "text": "We present a novel system for sketch-based face image editing, enabling users to edit images intuitively by sketching a few strokes on a region of interest. Our interface features tools to express a desired image manipulation by providing both geometry and color constraints as user-drawn strokes. As an alternative to the direct user input, our proposed system naturally supports a copy-paste mode, which allows users to edit a given image region by using parts of another exemplar image without the need of hand-drawn sketching at all. The proposed interface runs in real-time and facilitates an interactive and iterative workflow to quickly express the intended edits. Our system is based on a novel sketch domain and a convolutional neural network trained end-to-end to automatically learn to render image regions corresponding to the input strokes. To achieve high quality and semantically consistent results we train our neural network on two simultaneous tasks, namely image completion and image translation.
To the best of our knowledge, we are the first to combine these two tasks in a unified framework for interactive image editing. Our results show that the proposed sketch domain, network architecture, and training procedure generalize well to real user input and enable high quality synthesis results without additional post-processing.", "title": "" }, { "docid": "neg:1840224_5", "text": "The task of matching patterns in graph-structured data has applications in such diverse areas as computer vision, biology, electronics, computer aided design, social networks, and intelligence analysis. Consequently, work on graph-based pattern matching spans a wide range of research communities. Due to variations in graph characteristics and application requirements, graph matching is not a single problem, but a set of related problems. This paper presents a survey of existing work on graph matching, describing variations among problems, general and specific solution approaches, evaluation techniques, and directions for further research. An emphasis is given to techniques that apply to general graphs with semantic characteristics.", "title": "" }, { "docid": "neg:1840224_6", "text": "Complex disease genetics has been revolutionised in recent years by the advent of genome-wide association (GWA) studies. The chronic inflammatory bowel diseases (IBDs), Crohn's disease and ulcerative colitis have seen notable successes culminating in the discovery of 99 published susceptibility loci/genes (71 Crohn's disease; 47 ulcerative colitis) to date. Approximately one-third of loci described confer susceptibility to both Crohn's disease and ulcerative colitis. Amongst these are multiple genes involved in IL23/Th17 signalling (IL23R, IL12B, JAK2, TYK2 and STAT3), IL10, IL1R2, REL, CARD9, NKX2.3, ICOSLG, PRDM1, SMAD3 and ORMDL3. The evolving genetic architecture of IBD has furthered our understanding of disease pathogenesis. For Crohn's disease, defective processing of intracellular bacteria has become a central theme, following gene discoveries in autophagy and innate immunity (associations with NOD2, IRGM, ATG16L1 are specific to Crohn's disease). Genetic evidence has also demonstrated the importance of barrier function to the development of ulcerative colitis (HNF4A, LAMB1, CDH1 and GNA12). However, when the data are analysed in more detail, deeper themes emerge including the shared susceptibility seen with other diseases. Many immune-mediated diseases overlap in this respect, paralleling the reported epidemiological evidence. However, in several cases the reported shared susceptibility appears at odds with the clinical picture. Examples include both type 1 and type 2 diabetes mellitus. In this review we will detail the presently available data on the genetic overlap between IBD and other diseases. The discussion will be informed by the epidemiological data in the published literature and the implications for pathogenesis and therapy will be outlined. This arena will move forwards very quickly in the next few years. Ultimately, we anticipate that these genetic insights will transform the landscape of common complex diseases such as IBD.", "title": "" }, { "docid": "neg:1840224_7", "text": "Dynamic voltage scaling (DVS), which adjusts the clock speed and supply voltage dynamically, is an effective technique in reducing the energy consumption of embedded real-time systems. The energy efficiency of a DVS algorithm largely depends on the performance of the slack estimation method used in it.
In this paper, we propose a novel DVS algorithm for periodic hard real-time tasks based on an improved slack estimation algorithm. Unlike the existing techniques, the proposed method takes full advantage of the periodic characteristics of the real-time tasks under priority-driven scheduling such as EDF. Experimental results show that the proposed algorithm reduces the energy consumption by 20~40% over the existing DVS algorithm. The experiment results also show that our algorithm based on the improved slack estimation method gives comparable energy savings to the DVS algorithm based on the theoretically optimal (but impractical) slack estimation method.", "title": "" }, { "docid": "neg:1840224_8", "text": "We present SWARM, a wearable affective technology designed to help a user to reflect on their own emotional state, modify their affect, and interpret the emotional states of others. SWARM aims for a universal design (inclusive of people with various disabilities), with a focus on modular actuation components to accommodate users' sensory capabilities and preferences, and a scarf form-factor meant to reduce the stigma of accessible technologies through a fashionable embodiment. Using an iterative, user-centered approach, we present SWARM's design. Additionally, we contribute findings for communicating emotions through technology actuations, wearable design techniques (including a modular soft circuit design technique that fuses conductive fabric with actuation components), and universal design considerations for wearable technology.", "title": "" }, { "docid": "neg:1840224_9", "text": "Detecting informal settlements might be one of the most challenging tasks within urban remote sensing. This phenomenon occurs mostly in developing countries. In order to carry out the urban planning and development tasks necessary to improve living conditions for the poorest world-wide, an adequate spatial data basis is needed (see Mason, O. S. & Fraser, C. S., 1998). This can only be obtained through the analysis of remote sensing data, which represents an additional challenge from a technical point of view. Formal settlements by definition are mapped sufficiently for most purposes. However, this does not hold for informal settlements. Due to their microstructure and instability of shape, the detection of these settlements is substantially more difficult. Hence, more sophisticated data and methods of image analysis are necessary, which ideally act as a spatial data basis for a further informal settlement management. While these methods are usually quite labour-intensive, one should nonetheless bear in mind cost-effectivity of the applied methods and tools. In the present article, it will be shown how eCognition can be used to detect and discriminate informal settlements from other land-use-forms by describing typical characteristics of colour, texture, shape and context. This software is completely object-oriented and uses a patented, multi-scale image segmentation approach. The generated segments act as image objects whose physical and contextual characteristics can be described by means of fuzzy logic. The article will show methods and strategies using eCognition to detect informal settlements from high resolution space-borne image data such as IKONOS. A final discussion of the results will be given.", "title": "" }, { "docid": "neg:1840224_10", "text": "We provide an alternative to the maximum likelihood method for making inferences about the parameters of the logistic regression model.
The method is based on appropriate permutational distributions of sufficient statistics. It is useful for analysing small or unbalanced binary data with covariates. It also applies to small-sample clustered binary data. We illustrate the method by analysing several biomedical data sets.", "title": "" }, { "docid": "neg:1840224_11", "text": "Forest species recognition has been traditionally addressed as a texture classification problem, and explored using standard texture methods such as Local Binary Patterns (LBP), Local Phase Quantization (LPQ) and Gabor Filters. Deep learning techniques have been a recent focus of research for classification problems, with state-of-the-art results for object recognition and other tasks, but are not yet widely used for texture problems. This paper investigates the usage of deep learning techniques, in particular Convolutional Neural Networks (CNN), for texture classification in two forest species datasets - one with macroscopic images and another with microscopic images. Given the higher resolution images of these problems, we present a method that is able to cope with the high-resolution texture images so as to achieve high accuracy and avoid the burden of training and defining an architecture with a large number of free parameters. On the first dataset, the proposed CNN-based method achieves 95.77% of accuracy, compared to state-of-the-art of 97.77%. On the dataset of microscopic images, it achieves 97.32%, beating the best published result of 93.2%.", "title": "" }, { "docid": "neg:1840224_12", "text": "The Extended String-to-String Correction Problem [ESSCP] is defined as the problem of determining, for given strings A and B over alphabet V, a minimum-cost sequence S of edit operations such that S(A) = B. The sequence S may make use of the operations: Change, Insert, Delete and Swap, each of constant cost W_C, W_I, W_D, and W_S respectively. Swap permits any pair of adjacent characters to be interchanged.\n The principal results of this paper are:\n (1) a brief presentation of an algorithm (the CELLAR algorithm) which solves ESSCP in time O(|A| * |B| * |V|^s * s), where s = min(4W_C, W_I+W_D)/W_S + 1;\n (2) presentation of polynomial time algorithms for the cases (a) W_S = 0, (b) W_S > 0, W_C = W_I = W_D = ∞;\n (3) proof that ESSCP, with W_I < W_C = W_D = ∞, 0 < W_S < ∞, suitably encoded, is NP-complete. (The remaining case, W_S = ∞, reduces ESSCP to the string-to-string correction problem of [1], where an O(|A| * |B|) algorithm is given.) Thus, “almost all” ESSCP's can be solved in deterministic polynomial time, but the general problem is NP-complete.", "title": "" }, { "docid": "neg:1840224_13", "text": "This paper describes an experimental setup and results of user tests focusing on the perception of temporal characteristics of vibration of a mobile device. The experiment consisted of six vibration stimuli of different length. We asked the subjects to score the subjective perception level on a five-point Likert scale.
The results suggest that the optimal duration of the control signal should be between 50 and 200 ms in this specific case. Longer durations were perceived as being irritating.", "title": "" }, { "docid": "neg:1840224_14", "text": "When big data and cloud computing join forces, several domains like healthcare, disaster prediction and decision making become easier and much more beneficial to users in terms of information gathering. Although cloud computing reduces the time and cost of analyzing information for big data, it may harm the confidentiality and integrity of the sensitive data; for instance, in healthcare, when analyzing a disease's spreading area, the names of the infected people must remain secure, hence the obligation to adopt a secure model that protects sensitive data from malicious users. Several case studies on the integration of big data in cloud computing stress how much easier it becomes to analyze and manage big data in this complex environment. Companies must consider outsourcing their sensitive data to the cloud to take advantage of its beneficial resources such as huge storage, fast calculation, and availability, yet cloud computing might harm the security of data stored and computed in it (confidentiality, integrity). Therefore, a strict paradigm must be adopted by organizations to keep their outsourced data from being stolen, damaged or lost. In this paper, we compare the existing models for securing big data implementation in cloud computing. Then, we propose our own model to secure big data on the cloud computing environment, considering the lifecycle of data from uploading, storage, and calculation to its destruction.", "title": "" }, { "docid": "neg:1840224_15", "text": "In this work we propose a hierarchical approach for labeling semantic objects and regions in scenes. Our approach is reminiscent of early vision literature in that we use a decomposition of the image in order to encode relational and spatial information. In contrast to much existing work on structured prediction for scene understanding, we bypass a global probabilistic model and instead directly train a hierarchical inference procedure inspired by the message passing mechanics of some approximate inference procedures in graphical models. This approach mitigates both the theoretical and empirical difficulties of learning probabilistic models when exact inference is intractable. In particular, we draw from recent work in machine learning and break the complex inference process into a hierarchical series of simple machine learning subproblems. Each subproblem in the hierarchy is designed to capture the image and contextual statistics in the scene. This hierarchy spans coarse-to-fine regions and explicitly models the mixtures of semantic labels that may be present due to imperfect segmentation. To avoid cascading of errors and overfitting, we train the learning problems in sequence to ensure robustness to likely errors earlier in the inference sequence and leverage the stacking approach developed by Cohen et al.", "title": "" }, { "docid": "neg:1840224_16", "text": "Digital systems, especially those for mobile applications, are sensitive to power consumption, chip size and costs. Therefore they are realized using fixed-point architectures, either dedicated HW or programmable DSPs. On the other hand, system design starts from a floating-point description.
These requirements have been the motivation for FRIDGE (Fixed-point pRogrammIng DesiGn Environment), a design environment for the specification, evaluation and implementation of fixed-point systems. FRIDGE offers a seamless design flow from a floating- point description to a fixed-point implementation. Within this paper we focus on two core capabilities of FRIDGE: (1) the concept of an interactive, automated transformation of floating-point programs written in ANSI-C into fixed-point specifications, based on an interpolative approach. The design time reductions that can be achieved make FRIDGE a key component for an efficient HW/SW-CoDesign. (2) a fast fixed-point simulation that performs comprehensive compile-time analyses, reducing simulation time by one order of magnitude compared to existing approaches.", "title": "" }, { "docid": "neg:1840224_17", "text": "Ciphertext-policy attribute-based encryption (CP-ABE) is a promising cryptographic technique that integrates data encryption with access control for ensuring data security in IoT systems. However, the efficiency problem of CP-ABE is still a bottleneck limiting its development and application. A widespread consensus is that the computation overhead of bilinear pairing is excessive in the practical application of ABE, especially for the devices or the processors with limited computational resources and power supply. In this paper, we proposed a novel pairing-free data access control scheme based on CP-ABE using elliptic curve cryptography, abbreviated PF-CP-ABE. We replace complicated bilinear pairing with simple scalar multiplication on elliptic curves, thereby reducing the overall computation overhead. And we designed a new way of key distribution that it can directly revoke a user or an attribute without updating other users’ keys during the attribute revocation phase. Besides, our scheme use linear secret sharing scheme access structure to enhance the expressiveness of the access policy. The security and performance analysis show that our scheme significantly improved the overall efficiency as well as ensured the security.", "title": "" }, { "docid": "neg:1840224_18", "text": "Cyber-Physical Security Testbeds serve as valuable experimental platforms to implement and evaluate realistic, complex cyber attack-defense experiments. Testbeds, unlike traditional simulation platforms, capture communication, control and physical system characteristics and their interdependencies adequately in a unified environment. In this paper, we show how the PowerCyber CPS testbed at Iowa State was used to implement and evaluate cyber attacks on one of the fundamental Wide-Area Control applications, namely, the Automatic Generation Control (AGC). We provide a brief overview of the implementation of the experimental setup on the testbed. We then present a case study using the IEEE 9 bus system to evaluate the impacts of cyber attacks on AGC. Specifically, we analyzed the impacts of measurement based attacks that manipulated the tie-line and frequency measurements, and control based attacks that manipulated the ACE values sent to generators. We found that these attacks could potentially create under frequency conditions and could cause unnecessary load shedding. 
As part of future work, we plan to extend this work and utilize the experimental setup to implement other sophisticated, stealthy attack vectors and also develop attack-resilient algorithms to detect and mitigate such attacks.", "title": "" }, { "docid": "neg:1840224_19", "text": "Boosting is a general method for improving the accuracy of any given learning algorithm. This short overview paper introduces the boosting algorithm AdaBoost, and explains the underlying theory of boosting, including an explanation of why boosting often does not suffer from overfitting as well as boosting’s relationship to support-vector machines. Some examples of recent applications of boosting are also described.", "title": "" } ]
1840225
Semantic Interaction for Sensemaking: Inferring Analytical Reasoning for Model Steering
[ { "docid": "pos:1840225_0", "text": "The Sandbox is a flexible and expressive thinking environment that supports both ad-hoc and more formal analytical tasks. It is the evidence marshalling and sensemaking component for the analytical software environment called nSpace. This paper presents innovative Sandbox human information interaction capabilities and the rationale underlying them including direct observations of analysis work as well as structured interviews. Key capabilities for the Sandbox include “put-this-there” cognition, automatic process model templates, gestures for the fluid expression of thought, assertions with evidence and scalability mechanisms to support larger analysis tasks. The Sandbox integrates advanced computational linguistic functions using a Web Services interface and protocol. An independent third party evaluation experiment with the Sandbox has been completed. The experiment showed that analyst subjects using the Sandbox did higher quality analysis in less time than with standard tools. Usability test results indicated the analysts became proficient in using the Sandbox with three hours of training.", "title": "" }, { "docid": "pos:1840225_1", "text": "Visual analytics emphasizes sensemaking of large, complex datasets through interactively exploring visualizations generated by statistical models. For example, dimensionality reduction methods use various similarity metrics to visualize textual document collections in a spatial metaphor, where similarities between documents are approximately represented through their relative spatial distances to each other in a 2D layout. This metaphor is designed to mimic analysts' mental models of the document collection and support their analytic processes, such as clustering similar documents together. However, in current methods, users must interact with such visualizations using controls external to the visual metaphor, such as sliders, menus, or text fields, to directly control underlying model parameters that they do not understand and that do not relate to their analytic process occurring within the visual metaphor. In this paper, we present the opportunity for a new design space for visual analytic interaction, called semantic interaction, which seeks to enable analysts to spatially interact with such models directly within the visual metaphor using interactions that derive from their analytic process, such as searching, highlighting, annotating, and repositioning documents. Further, we demonstrate how semantic interactions can be implemented using machine learning techniques in a visual analytic tool, called ForceSPIRE, for interactive analysis of textual data within a spatial visualization. Analysts can express their expert domain knowledge about the documents by simply moving them, which guides the underlying model to improve the overall layout, taking the user's feedback into account.", "title": "" } ]
[ { "docid": "neg:1840225_0", "text": "This paper describes our construction of named-entity recognition (NER) systems in two Western Iranian languages, Sorani Kurdish and Tajik, as a part of a pilot study of Linguistic Rapid Response to potential emergency humanitarian relief situations. In the absence of large annotated corpora, parallel corpora, treebanks, bilingual lexica, etc., we found the following to be effective: exploiting distributional regularities in monolingual data, projecting information across closely related languages, and utilizing human linguist judgments. We show promising results on both a four-month exercise in Sorani and a two-day exercise in Tajik, achieved with minimal annotation costs.", "title": "" }, { "docid": "neg:1840225_1", "text": "Due to the shift from software-as-a-product (SaaP) to software-as-a-service (SaaS), software components that were developed to run in a single address space must increasingly be accessed remotely across the network. Distribution middleware is frequently used to facilitate this transition. Yet a range of middleware platforms exist, and there are few existing guidelines to help the programmer choose an appropriate middleware platform to achieve desired goals for performance, expressiveness, and reliability. To address this limitation, in this paper we describe a case study of transitioning an Open Service Gateway Initiative (OSGi) service from local to remote access. Our case study compares five remote versions of this service, constructed using different distribution middleware platforms. These platforms are implemented by widely-used commercial technologies or have been proposed as improvements on the state of the art. In particular, we implemented a service-oriented version of our own Remote Batch Invocation abstraction. We compare and contrast these implementations in terms of their respective performance, expressiveness, and reliability. Our results can help remote service programmers make informed decisions when choosing middleware platforms for their applications.", "title": "" }, { "docid": "neg:1840225_2", "text": "In early studies on energy metabolism of tumor cells, it was proposed that the enhanced glycolysis was induced by a decreased oxidative phosphorylation. Since then it has been indiscriminately applied to all types of tumor cells that the ATP supply is mainly or only provided by glycolysis, without an appropriate experimental evaluation. In this review, the different genetic and biochemical mechanisms by which tumor cells achieve an enhanced glycolytic flux are analyzed. Furthermore, the proposed mechanisms that arguably lead to a decreased oxidative phosphorylation in tumor cells are discussed. As the O(2) concentration in hypoxic regions of tumors seems not to be limiting for the functioning of oxidative phosphorylation, this pathway is re-evaluated regarding oxidizable substrate utilization and its contribution to ATP supply versus glycolysis. In the tumor cell lines where the oxidative metabolism prevails over the glycolytic metabolism for ATP supply, the flux control distribution of both pathways is described. The effect of glycolytic and mitochondrial drugs on tumor energy metabolism and cellular proliferation is described and discussed. Similarly, the energy metabolic changes associated with inherent and acquired resistance to radiotherapy and chemotherapy of tumor cells, and those determined by positron emission tomography, are revised. 
It is proposed that energy metabolism may be an alternative therapeutic target for both hypoxic (glycolytic) and oxidative tumors.", "title": "" }, { "docid": "neg:1840225_3", "text": "University of Tampere School of Management Author: MIIA HANNOLA Title: Critical factors in Customer Relationship Management system implementation Master's thesis: 84 pages, 2 appendices Date: November 2016", "title": "" }, { "docid": "neg:1840225_4", "text": "We propose a new method for detecting patterns of anomalies in categorical datasets. We assume that anomalies are generated by some underlying process which affects only a particular subset of the data. Our method consists of two steps: we first use a \"local anomaly detector\" to identify individual records with anomalous attribute values, and then detect patterns where the number of anomalous records is higher than expected. Given the set of anomalies flagged by the local anomaly detector, we search over all subsets of the data defined by any set of fixed values of a subset of the attributes, in order to detect self-similar patterns of anomalies. We wish to detect any such subset of the test data which displays a significant increase in anomalous activity as compared to the normal behavior of the system (as indicated by the training data). We perform significance testing to determine if the number of anomalies in any subset of the test data is significantly higher than expected, and propose an efficient algorithm to perform this test over all such subsets of the data. We show that this algorithm is able to accurately detect anomalous patterns in real-world hospital, container shipping and network intrusion data.", "title": "" }, { "docid": "neg:1840225_5", "text": "In this work, we present a novel counter-fitting method which injects antonymy and synonymy constraints into vector space representations in order to improve the vectors' capability for judging semantic similarity. Applying this method to publicly available pre-trained word vectors leads to a new state of the art performance on the SimLex-999 dataset. We also show how the method can be used to tailor the word vector space for the downstream task of dialogue state tracking, resulting in robust improvements across different dialogue domains.", "title": "" }, { "docid": "neg:1840225_6", "text": "This Preview Edition of Designing Data-Intensive Applications, Chapters 1 and 2, is a work in progress. The final book is currently scheduled for release in July 2015 and will be available at oreilly.com and other retailers once it is published. O'Reilly books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles (http://safaribooksonline.com). For more information, contact our corporate/institutional sales department: 800-998-9938 or corporate@oreilly.com. Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and O'Reilly Media, Inc. was aware of a trademark claim, the designations have been printed in caps or initial caps. While every precaution has been taken in the preparation of this book, the publisher and authors assume no responsibility for errors or omissions, or for damages resulting from the use of the information contained herein.
[Table of contents: Thinking About Data Systems; Reliability (Hardware faults; Software errors; Human errors; How important is reliability?); Scalability (Describing load; Describing performance; Approaches for coping with load); Maintainability (Operability: making life easy for operations; Simplicity: managing complexity; Plasticity: making change easy); Summary; Relational Model vs. Document Model (The birth of NoSQL; The object-relational mismatch; Many-to-one and many-to-many relationships; Are document databases repeating history?; Relational vs. document databases today); Query Languages for Data (Declarative queries on the web; MapReduce querying)]", "title": "" }, { "docid": "neg:1840225_7", "text": "The diagnosis and treatment of chronic patellar instability caused by trochlear dysplasia can be challenging. A dysplastic trochlea leads to biomechanical and kinematic changes that often require surgical correction when symptomatic. In the past, trochlear dysplasia was classified using the 4-part Dejour classification system. More recently, new classification systems have been proposed. Future studies are needed to investigate long-term outcomes after trochleoplasty.", "title": "" }, { "docid": "neg:1840225_8", "text": "Idiom token classification is the task of deciding for a set of potentially idiomatic phrases whether each occurrence of a phrase is a literal or idiomatic usage of the phrase. In this work we explore the use of Skip-Thought Vectors to create distributed representations that encode features that are predictive with respect to idiom token classification. We show that classifiers using these representations have competitive performance compared with the state of the art in idiom token classification. Importantly, however, our models use only the sentence containing the target phrase as input and are thus less dependent on a potentially inaccurate or incomplete model of discourse context. We further demonstrate the feasibility of using these representations to train a competitive general idiom token classifier.", "title": "" }, { "docid": "neg:1840225_9", "text": "Congestive heart failure (CHF) is a common clinical disorder that results in pulmonary vascular congestion and reduced cardiac output. CHF should be considered in the differential diagnosis of any adult patient who presents with dyspnea and/or respiratory failure. The diagnosis of heart failure is often determined by a careful history and physical examination and characteristic chest-radiograph findings. The measurement of serum brain natriuretic peptide and echocardiography have substantially improved the accuracy of diagnosis. Therapy for CHF is directed at restoring normal cardiopulmonary physiology and reducing the hyperadrenergic state. The cornerstone of treatment is a combination of an angiotensin-converting-enzyme inhibitor and slow titration of a beta blocker. Patients with CHF are prone to pulmonary complications, including obstructive sleep apnea, pulmonary edema, and pleural effusions. Continuous positive airway pressure and noninvasive positive-pressure ventilation benefit patients in CHF exacerbations.", "title": "" }, { "docid": "neg:1840225_10", "text": "We present a computer audition system that can both annotate novel audio tracks with semantically meaningful words and retrieve relevant tracks from a database of unlabeled audio content given a text-based query. We consider the related tasks of content-based audio annotation and retrieval as one supervised multiclass, multilabel problem in which we model the joint probability of acoustic features and words.
We collect a data set of 1700 human-generated annotations that describe 500 Western popular music tracks. For each word in a vocabulary, we use this data to train a Gaussian mixture model (GMM) over an audio feature space. We estimate the parameters of the model using the weighted mixture hierarchies expectation maximization algorithm. This algorithm is more scalable to large data sets and produces better density estimates than standard parameter estimation techniques. The quality of the music annotations produced by our system is comparable with the performance of humans on the same task. Our "query-by-text" system can retrieve appropriate songs for a large number of musically relevant words. We also show that our audition system is general by learning a model that can annotate and retrieve sound effects.", "title": "" }, { "docid": "neg:1840225_11", "text": "For doubly fed induction generator (DFIG)-based wind turbine, the main constraint to ride-through serious grid faults is the limited converter rating. In order to realize controllable low voltage ride through (LVRT) under the typical converter rating, transient control reference usually need to be modified to adapt to the constraint of converter's maximum output voltage. Generally, the generation of such reference relies on observation of stator flux and even sequence separation. This is susceptible to observation errors during the fault transient; moreover, it increases the complexity of control system. For this issue, this paper proposes a scaled current tracking control for rotor-side converter (RSC) to enhance its LVRT capacity without flux observation. In this method, rotor current is controlled to track stator current in a certain scale. Under proper tracking coefficient, both the required rotor current and rotor voltage can be constrained within the permissible ranges of RSC, thus it can maintain DFIG under control to suppress overcurrent and overvoltage. Moreover, during fault transient, electromagnetic torque oscillations can be greatly suppressed. Based on it, certain additional positive-sequence item is injected into rotor current reference to supply dynamic reactive support. Simulation and experimental results demonstrate the feasibility of the proposed method.", "title": "" }, { "docid": "neg:1840225_12", "text": "Collision detection is of paramount importance for many applications in computer graphics and visualization. Typically, the input to a collision detection algorithm is a large number of geometric objects comprising an environment, together with a set of objects moving within the environment. In addition to determining accurately the contacts that occur between pairs of objects, one needs also to do so at real-time rates. Applications such as haptic force-feedback can require over 1,000 collision queries per second. In this paper, we develop and analyze a method, based on bounding-volume hierarchies, for efficient collision detection for objects moving within highly complex environments. Our choice of bounding volume is to use a "discrete orientation polytope" ("k-dop"), a convex polytope whose facets are determined by halfspaces whose outward normals come from a small fixed set of k orientations. We compare a variety of methods for constructing hierarchies ("BV-trees") of bounding k-dops. Further, we propose algorithms for maintaining an effective BV-tree of k-dops for moving objects, as they rotate, and for performing fast collision detection using BV-trees of the moving objects and of the environment.
Our algorithms have been implemented and tested. We provide experimental evidence showing that our approach yields substantially faster collision detection than previous methods.", "title": "" }, { "docid": "neg:1840225_13", "text": "Giant congenital melanocytic nevus is usually defined as a melanocytic lesion present at birth that will reach a diameter ≥ 20 cm in adulthood. Its incidence is estimated at <1:20,000 newborns. Despite its rarity, this lesion is important because it may be associated with severe complications such as malignant melanoma, affect the central nervous system (neurocutaneous melanosis), and have major psychosocial impact on the patient and his family due to its unsightly appearance. Giant congenital melanocytic nevus generally presents as a brown lesion, with flat or mammilated surface, well-demarcated borders and hypertrichosis. Congenital melanocytic nevus is primarily a clinical diagnosis. However, congenital nevi are histologically distinguished from acquired nevi mainly by their larger size, the spread of the nevus cells to the deep layers of the skin and by their more varied architecture and morphology. Although giant congenital melanocytic nevus is recognized as a risk factor for the development of melanoma, the precise magnitude of this risk is still controversial. The estimated lifetime risk of developing melanoma varies from 5 to 10%. On account of these uncertainties and the size of the lesions, the management of giant congenital melanocytic nevus needs individualization. Treatment may include surgical and non-surgical procedures, psychological intervention and/or clinical follow-up, with special attention to changes in color, texture or on the surface of the lesion. The only absolute indication for surgery in giant congenital melanocytic nevus is the development of a malignant neoplasm on the lesion.", "title": "" }, { "docid": "neg:1840225_14", "text": "Background The life sciences, biomedicine and health care are increasingly turning into a data-intensive science [2-4]. Particularly in bioinformatics and computational biology we face not only increased volume and a diversity of highly complex, multi-dimensional and often weakly-structured and noisy data [5-8], but also the growing need for integrative analysis and modeling [9-14]. Due to the increasing trend towards personalized and precision medicine (P4 medicine: Predictive, Preventive, Participatory, Personalized [15]), biomedical data today results from various sources in different structural dimensions, ranging from the microscopic world, and in particular from the omics world (e.g., from genomics, proteomics, metabolomics, lipidomics, transcriptomics, epigenetics, microbiomics, fluxomics, phenomics, etc.) to the macroscopic world (e.g., disease spreading data of populations in public health informatics), see Figure 1 [16]. Just for rapid orientation in terms of size: the Glucose molecule has a size of 900 pm = 900 × 10^-12 m and the Carbon atom approx. 300 pm. A hepatitis virus is relatively large with 45 nm = 45 × 10^-9 m and the X-Chromosome much bigger with 7 μm = 7 × 10^-6 m. We produce most of the “Big Data” in the omics world; we estimate many Terabytes (1 TB = 1 × 10^12 Byte = 1000 GByte) of genomics data in each individual; consequently, the fusion of these with Petabytes of proteomics data for personalized medicine results in Exabytes of data (1 EB = 1 × 10^18 Byte).
Last but not least, this "natural" data is then fused together with "produced" data, e.g., the unstructured information (text) in the patient records, wellness data, the data from physiological sensors, laboratory data, etc.; these data are also rapidly increasing in size and complexity. Besides the problem of heterogeneous and distributed data, we are confronted with noisy, missing and inconsistent data. This leaves a large gap between the available "dirty" data [17] and the machinery to effectively process the data for the application purposes; moreover, the procedures of data integration and information extraction may themselves introduce errors and artifacts in the data [18]. Although one may argue that "Big Data" is a buzzword, systematic and comprehensive exploration of all these data is often seen as the fourth paradigm in the investigation of nature after empiricism, theory and computation [19], and provides a mechanism for data-driven hypothesis generation, optimized experiment planning, precision medicine and evidence-based medicine. The challenge is not only to extract meaningful information from this data, but to gain knowledge, to discover previously unknown insight, look for patterns, and to make sense of the data [20], [21]. Many different approaches, including statistical and graph theoretical methods, data mining, and machine learning methods, have been applied in the past, however with partly unsatisfactory success [22,23], especially in terms of performance [24]. The grand challenge is to make data useful to and usable by the end user [25]. Maybe the key challenge is interaction, due to the fact that it is the human end user who possesses the problem-solving intelligence [26], hence the ability to ask intelligent questions about the data. The problem in the life sciences is that (biomedical) data models are characterized by significant complexity [27], [28], making manual analysis by the end users difficult and often impossible [29].", "title": "" }, { "docid": "neg:1840225_15", "text": "Risk management in global information technology (IT) projects is becoming a critical area of concern for practitioners. Global IT projects usually span multiple locations involving various culturally diverse groups that use multiple standards and technologies. These multiplicities cause dynamic risks through interactions among internal (i.e., people, process, and technology) and external elements (i.e., business and natural environments) of global IT projects. This study proposes an agile risk-management framework for global IT project settings. By analyzing the dynamic interactions among multiplicities (e.g., multi-locations, multi-cultures, multi-groups, and multi-interests) embedded in the project elements, we identify the dynamic risks threatening the success of a global IT project. Adopting the principles of service-oriented architecture (SOA), we further propose a set of agile management strategies for mitigating the dynamic risks. The mitigation strategies are conceptually validated.
The proposed framework will help practitioners understand the potential risks in their global IT projects and resolve their complex situations when certain types of dynamic risks arise.", "title": "" }, { "docid": "neg:1840225_16", "text": "Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives are available and that the constraint gradients are sparse. We discuss an SQP algorithm that uses a smooth augmented Lagrangian merit function and makes explicit provision for infeasibility in the original problem and the QP subproblems. SNOPT is a particular implementation that makes use of a semidefinite QP solver. It is based on a limited-memory quasi-Newton approximation to the Hessian of the Lagrangian and uses a reduced-Hessian algorithm (SQOPT) for solving the QP subproblems. It is designed for problems with many thousands of constraints and variables but a moderate number of degrees of freedom (say, up to 2000). An important application is to trajectory optimization in the aerospace industry. Numerical results are given for most problems in the CUTE and COPS test collections (about 900 examples).", "title": "" }, { "docid": "neg:1840225_17", "text": "We characterize the singular values of the linear transformation associated with a convolution applied to a two-dimensional feature map with multiple channels. Our characterization enables efficient computation of the singular values of convolutional layers used in popular deep neural network architectures. It also leads to an algorithm for projecting a convolutional layer onto the set of layers obeying a bound on the operator norm of the layer. We show that this is an effective regularizer; periodically applying these projections during training improves the test error of a residual network on CIFAR-10 from 6.2% to 5.3%.", "title": "" }, { "docid": "neg:1840225_18", "text": "What service-quality attributes must Internet banks offer to induce consumers to switch to online transactions and keep using them?", "title": "" }, { "docid": "neg:1840225_19", "text": "This paper presents a circular patch microstrip array antenna operating in KU-band (10.9 GHz – 17.25 GHz). The proposed circular patch array antenna will be a lightweight, flexible, slim and compact unit compared with current antennas used in KU-band. The paper also presents the detailed steps of designing the circular patch microstrip array antenna. Advanced Design System (ADS) software is used to compute the gain, power, radiation pattern, and S11 of the antenna. The proposed circular patch microstrip array antenna is basically a phased array consisting of 'n' elements (circular patch antennas) arranged in a rectangular grid. The size of each element is determined by the operating frequency. The incident wave from the satellite arrives at the plane of the antenna with equal phase across the surface of the array. Each of the 'n' elements receives a small amount of power in phase with the others. A feed network connects each element through microstrip lines of equal length, so the signals reaching the circular patches are all combined in phase and the voltages add up. The significant difference in the circular patch array antenna comes not in the phase across the surface but in the magnitude distribution.
Keywords—Circular patch microstrip array antenna, gain, radiation pattern, S-Parameter.", "title": "" } ]
1840226
A Semantic Loss Function for Deep Learning Under Weak Supervision
[ { "docid": "pos:1840226_0", "text": "Methods based on representation learning currently hold the state-of-the-art in many natural language processing and knowledge base inference tasks. Yet, a major challenge is how to efficiently incorporate commonsense knowledge into such models. A recent approach regularizes relation and entity representations by propositionalization of first-order logic rules. However, propositionalization does not scale beyond domains with only few entities and rules. In this paper we present a highly efficient method for incorporating implication rules into distributed representations for automated knowledge base construction. We map entity-tuple embeddings into an approximately Boolean space and encourage a partial ordering over relation embeddings based on implication rules mined from WordNet. Surprisingly, we find that the strong restriction of the entity-tuple embedding space does not hurt the expressiveness of the model and even acts as a regularizer that improves generalization. By incorporating few commonsense rules, we achieve an increase of 2 percentage points mean average precision over a matrix factorization baseline, while observing a negligible increase in runtime.", "title": "" }, { "docid": "pos:1840226_1", "text": "Traditional approaches to knowledge base completion have been based on symbolic representations. Lowdimensional vector embedding models proposed recently for this task are attractive since they generalize to possibly unlimited sets of relations. A significant drawback of previous embedding models for KB completion is that they merely support reasoning on individual relations (e.g., bornIn(X,Y )⇒ nationality(X,Y )). In this work, we develop models for KB completion that support chains of reasoning on paths of any length using compositional vector space models. We construct compositional vector representations for the paths in the KB graph from the semantic vector representations of the binary relations in that path and perform inference directly in the vector space. Unlike previous methods, our approach can generalize to paths that are unseen in training and, in a zero-shot setting, predict target relations without supervised training data for that relation.", "title": "" } ]
[ { "docid": "neg:1840226_0", "text": "Stress has effect on speech characteristics and can influence the quality of speech. In this paper, we study the effect of SleepDeprivation (SD) on speech characteristics and classify Normal Speech (NS) and Sleep Deprived Speech (SDS). One of the indicators of sleep deprivation is ‘flattened voice’. We examine pitch and harmonic locations to analyse flatness of voice. To investigate, we compute the spectral coefficients that can capture the variations of pitch and harmonic patterns. These are derived using Two-Layer Cascaded-Subband Filter spread according to the pitch and harmonic frequency scale. Hidden Markov Model (HMM) is employed for statistical modeling. We use DCIEM map task corpus to conduct experiments. The analysis results show that SDS has less variation of pitch and harmonic pattern than NS. In addition, we achieve the relatively high accuracy for classification of Normal Speech (NS) and Sleep Deprived Speech (SDS) using proposed spectral coefficients.", "title": "" }, { "docid": "neg:1840226_1", "text": "The Open Networking Foundation's Extensibility Working Group is standardizing OpenFlow, the main software-defined networking (SDN) protocol. To address the requirements of a wide range of network devices and to accommodate its all-volunteer membership, the group has made the specification process highly dynamic and similar to that of open source projects.", "title": "" }, { "docid": "neg:1840226_2", "text": "Wireless access networks scale by replicating base stations geographically and then allowing mobile clients to seamlessly \"hand off\" from one station to the next as they traverse the network. However, providing the illusion of continuous connectivity requires selecting the right moment to handoff and the right base station to transfer to. Unfortunately, 802.11-based networks only attempt a handoff when a client's service degrades to a point where connectivity is threatened. Worse, the overhead of scanning for nearby base stations is routinely over 250 ms - during which incoming packets are dropped - far longer than what can be tolerated by highly interactive applications such as voice telephony. In this paper we describe SyncScan, a low-cost technique for continuously tracking nearby base stations by synchronizing short listening periods at the client with periodic transmissions from each base station. We have implemented this SyncScan algorithm using commodity 802.11 hardware and we demonstrate that it allows better handoff decisions and over an order of magnitude improvement in handoff delay. Finally, our approach only requires trivial implementation changes, is incrementally deployable and is completely backward compatible with existing 802.11 standards.", "title": "" }, { "docid": "neg:1840226_3", "text": "Online Social Networks (OSNs) have spread at stunning speed over the past decade. They are now a part of the lives of dozens of millions of people. The onset of OSNs has stretched the traditional notion of community to include groups of people who have never met in person but communicate with each other through OSNs to share knowledge, opinions, interests and activities. Here we explore in depth language independent gender classification. Our approach predicts gender using five colorbased features extracted from Twitter profiles such as the background color in a user’s profile page. This is in contrast with most existing methods for gender prediction that are language dependent. 
Those methods use high-dimensional spaces consisting of unique words extracted from such text fields as postings, user names, and profile descriptions. Our approach is independent of the user’s language, efficient, scalable, and computationally tractable, while attaining a good level of accuracy.", "title": "" }, { "docid": "neg:1840226_4", "text": "Investigations of the relationship between pain conditions and psychopathology have largely focused on depression and have been limited by the use of non-representative samples (e.g. clinical samples). The present study utilized data from the Midlife Development in the United States Survey (MIDUS) to investigate associations between three pain conditions and three common psychiatric disorders in a large sample (N = 3,032) representative of adults aged 25-74 in the United States population. MIDUS participants provided reports regarding medical conditions experienced over the past year including arthritis, migraine, and back pain. Participants also completed several diagnostic-specific measures from the Composite International Diagnostic Interview-Short Form [Int. J. Methods Psychiatr. Res. 7 (1998) 171], which was based on the revised third edition of the Diagnostic and Statistical Manual of Mental Disorders [American Psychiatric Association 1987]. The diagnoses included were depression, panic attacks, and generalized anxiety disorder. Logistic regression analyses revealed significant positive associations between each pain condition and the psychiatric disorders (Odds Ratios ranged from 1.48 to 3.86). The majority of these associations remained statistically significant after adjusting for demographic variables, the other pain conditions, and other medical conditions. Given the emphasis on depression in the pain literature, it was noteworthy that the associations between the pain conditions and the anxiety disorders were generally larger than those between the pain conditions and depression. These findings add to a growing body of evidence indicating that anxiety disorders warrant further attention in relation to pain. The clinical and research implications of these findings are discussed.", "title": "" }, { "docid": "neg:1840226_5", "text": "Wearing comfort of clothing is dependent on air permeability, moisture absorbency and wicking properties of fabric, which are related to the porosity of fabric. In this work, a plug-in is developed using Python script and incorporated in Abaqus/CAE for the prediction of porosity of plain weft knitted fabrics. The Plug-in is able to automatically generate 3D solid and multifilament weft knitted fabric models and accurately determine the porosity of fabrics in two steps. In this work, plain weft knitted fabrics made of monofilament, multifilament and spun yarn made of staple fibers were used to evaluate the effectiveness of the developed plug-in. In the case of staple fiber yarn, intra yarn porosity was considered in the calculation of porosity. The first step is to develop a 3D geometrical model of plain weft knitted fabric and the second step is to calculate the porosity of the fabric by using the geometrical parameter of 3D weft knitted fabric model generated in step one. The predicted porosity of plain weft knitted fabric is extracted in the second step and is displayed in the message area. 
The predicted results obtained from the plug-in have been compared with the experimental results obtained from previously developed models; they agreed well.", "title": "" }, { "docid": "neg:1840226_6", "text": "Multitask learning has been applied successfully to a range of tasks, mostly morphosyntactic. However, little is known on when MTL works and whether there are data characteristics that help to determine the success of MTL. In this paper we evaluate a range of semantic sequence labeling tasks in a MTL setup. We examine different auxiliary task configurations, amongst which a novel setup, and correlate their impact to data-dependent conditions. Our results show that MTL is not always effective, because significant improvements are obtained only for 1 out of 5 tasks. When successful, auxiliary tasks with compact and more uniform label distributions are preferable.", "title": "" }, { "docid": "neg:1840226_7", "text": "Segmentation of pulmonary X-ray computed tomography (CT) images is a precursor to most pulmonary image analysis applications. This paper presents a fully automatic method for identifying the lungs in three-dimensional (3-D) pulmonary X-ray CT images. The method has three main steps. First, the lung region is extracted from the CT images by gray-level thresholding. Then, the left and right lungs are separated by identifying the anterior and posterior junctions by dynamic programming. Finally, a sequence of morphological operations is used to smooth the irregular boundary along the mediastinum in order to obtain results consistent with these obtained by manual analysis, in which only the most central pulmonary arteries are excluded from the lung region. The method has been tested by processing 3-D CT data sets from eight normal subjects, each imaged three times at biweekly intervals with lungs at 90% vital capacity. The authors present results by comparing their automatic method to manually traced borders from two image analysts. Averaged over all volumes, the root mean square difference between the computer and human analysis is 0.8 pixels (0.54 mm). The mean intrasubject change in tissue content over the three scans was 2.75%/spl plusmn/2.29% (mean/spl plusmn/standard deviation).", "title": "" }, { "docid": "neg:1840226_8", "text": "In machine learning, data augmentation is the process of creating synthetic examples in order to augment a dataset used to learn a model. One motivation for data augmentation is to reduce the variance of a classifier, thereby reducing error. In this paper, we propose new data augmentation techniques specifically designed for time series classification, where the space in which they are embedded is induced by Dynamic Time Warping (DTW). The main idea of our approach is to average a set of time series and use the average time series as a new synthetic example. The proposed methods rely on an extension of DTW Barycentric Averaging (DBA), the averaging technique that is specifically developed for DTW. In this paper, we extend DBA to be able to calculate a weighted average of time series under DTW. In this case, instead of each time series contributing equally to the final average, some can contribute more than others. This extension allows us to generate an infinite number of new examples from any set of given time series. To this end, we propose three methods that choose the weights associated to the time series of the dataset. 
We carry out experiments on the 85 datasets of the UCR archive and demonstrate that our method is particularly useful when the number of available examples is limited (e.g. 2 to 6 examples per class) using a 1-NN DTW classifier. Furthermore, we show that augmenting full datasets is beneficial in most cases, as we observed an increase of accuracy on 56 datasets, no effect on 7 and a slight decrease on only 22.", "title": "" }, { "docid": "neg:1840226_9", "text": "Distributed applications use replication, implemented by protocols like Paxos, to ensure data availability and transparently mask server failures. This paper presents a new approach to achieving replication in the data center without the performance cost of traditional methods. Our work carefully divides replication responsibility between the network and protocol layers. The network orders requests but does not ensure reliable delivery – using a new primitive we call ordered unreliable multicast (OUM). Implementing this primitive can be achieved with near-zero-cost in the data center. Our new replication protocol, NetworkOrdered Paxos (NOPaxos), exploits network ordering to provide strongly consistent replication without coordination. The resulting system not only outperforms both latencyand throughput-optimized protocols on their respective metrics, but also yields throughput within 2% and latency within 16 μs of an unreplicated system – providing replication without the performance cost.", "title": "" }, { "docid": "neg:1840226_10", "text": "We propose a novel approach for unsupervised zero-shot learning (ZSL) of classes based on their names. Most existing unsupervised ZSL methods aim to learn a model for directly comparing image features and class names. However, this proves to be a difficult task due to dominance of non-visual semantics in underlying vector-space embeddings of class names. To address this issue, we discriminatively learn a word representation such that the similarities between class and combination of attribute names fall in line with the visual similarity. Contrary to the traditional zero-shot learning approaches that are built upon attribute presence, our approach bypasses the laborious attributeclass relation annotations for unseen classes. In addition, our proposed approach renders text-only training possible, hence, the training can be augmented without the need to collect additional image data. The experimental results show that our method yields state-of-the-art results for unsupervised ZSL in three benchmark datasets.", "title": "" }, { "docid": "neg:1840226_11", "text": "We present a simplified and novel fully convolutional neural network (CNN) architecture for semantic pixel-wise segmentation named as SCNet. Different from current CNN pipelines, proposed network uses only convolution layers with no pooling layer. The key objective of this model is to offer a more simplified CNN model with equal benchmark performance and results. It is an encoder-decoder based fully convolution network model. Encoder network is based on VGG 16-layer while decoder networks use upsampling and deconvolution units followed by a pixel-wise classification layer. The proposed network is simple and offers reduced search space for segmentation by using low-resolution encoder feature maps. It also offers a great deal of reduction in trainable parameters due to reusing encoder layer's sparse features maps. 
The proposed model offers outstanding performance and enhanced results in terms of architectural simplicity, number of trainable parameters and computational time.", "title": "" }, { "docid": "neg:1840226_12", "text": "Due to the reasonably acceptable performance of state-of-the-art object detectors, tracking-by-detection is a standard strategy for visual multi-object tracking (MOT). In particular, online MOT is more demanding due to its diverse applications in time-critical situations. A main issue of realizing online MOT is how to associate noisy object detection results on a new frame with previously being tracked objects. In this work, we propose a multi-object tracker method called CRF-boosting which utilizes a hybrid data association method based on online hybrid boosting facilitated by a conditional random field (CRF) for establishing online MOT. For data association, learned CRF is used to generate reliable low-level tracklets and then these are used as the input of the hybrid boosting. To do so, while existing data association methods based on boosting algorithms have the necessity of training data having ground truth information to improve robustness, CRF-boosting ensures sufficient robustness without such information due to the synergetic cascaded learning procedure. Further, a hierarchical feature association framework is adopted to further improve MOT accuracy. From experimental results on public datasets, we could conclude that the benefit of proposed hybrid approach compared to the other competitive MOT systems is noticeable.", "title": "" }, { "docid": "neg:1840226_13", "text": "Price forecasting is becoming increasingly relevant to producers and consumers in the new competitive electric power markets. Both for spot markets and long-term contracts, price forecasts are necessary to develop bidding strategies or negotiation skills in order to maximize benefit. This paper provides a method to predict next-day electricity prices based on the ARIMA methodology. ARIMA techniques are used to analyze time series and, in the past, have been mainly used for load forecasting due to their accuracy and mathematical soundness. A detailed explanation of the aforementioned ARIMA models and results from mainland Spain and Californian markets are presented.", "title": "" }, { "docid": "neg:1840226_14", "text": "Multiplier, being a very vital part in the design of microprocessor, graphical systems, multimedia systems, DSP system etc. It is very important to have an efficient design in terms of performance, area, speed of the multiplier, and for the same Booth's multiplication algorithm provides a very fundamental platform for all the new advances made for high end multipliers meant for faster multiplication with higher performance. The algorithm provides an efficient encoding of the bits during the first steps of the multiplication process. In pursuit of the same, Radix 4 booths encoding has increased the performance of the multiplier by reducing the number of partial products generated. Radix 4 Booths algorithm produces both positive and negative partial products and implementing the negative partial product nullifies the advances made in different units to some extent if not fully. Most of the research work focuses on the reduction of the number of partial products generated and making efficient implementation of the algorithm. There is very little work done on disposal of the negative partial products generated. 
The work presented in the paper addresses the efficient disposal of the negative partial products by computing the 2's complement without the additional adder for adding 1, hence avoiding the generation of a long carry chain. The proposed mechanism also continues to support the concept of reducing the partial products and, in pursuit of the same, it reduces their number further, from the n/2 + 1 partial products achieved via the modified Booth's algorithm to n/2. Also, while implementing the proposed mechanism using Verilog HDL, a mode selection capability is provided, enabling the same hardware to act both as a multiplier and as a simple two's complement calculator. The proposed technique has the added advantage of being independent of the number of bits to be multiplied. It is tested and verified with varied test vectors of different bit widths. The Xilinx synthesis tool is used for synthesis, and the multiplier has a maximum operating frequency of 14.59 MHz and a delay of 7.013 ns.", "title": "" }, { "docid": "neg:1840226_15", "text": "Central to all human interaction is the mutual understanding of emotions, achieved primarily by a set of biologically rooted social signals evolved for this purpose-facial expressions of emotion. Although facial expressions are widely considered to be the universal language of emotion, some negative facial expressions consistently elicit lower recognition levels among Eastern compared to Western groups (see [4] for a meta-analysis and [5, 6] for review). Here, focusing on the decoding of facial expression signals, we merge behavioral and computational analyses with novel spatiotemporal analyses of eye movements, showing that Eastern observers use a culture-specific decoding strategy that is inadequate to reliably distinguish universal facial expressions of \"fear\" and \"disgust.\" Rather than distributing their fixations evenly across the face as Westerners do, Eastern observers persistently fixate the eye region. Using a model information sampler, we demonstrate that by persistently fixating the eyes, Eastern observers sample ambiguous information, thus causing significant confusion. Our results question the universality of human facial expressions of emotion, highlighting their true complexity, with critical consequences for cross-cultural communication and globalization.", "title": "" }, { "docid": "neg:1840226_16", "text": "Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. 
Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called \"laboratory errors\", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term \"laboratory error\" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes in pre- and post-examination steps must be minimized to guarantee the total quality of laboratory services.", "title": "" }, { "docid": "neg:1840226_17", "text": "Neutrophils infiltration/activation following wound induction marks the early inflammatory response in wound repair. However, the role of the infiltrated/activated neutrophils in tissue regeneration/proliferation during wound repair is not well understood. Here, we report that infiltrated/activated neutrophils at wound site release pyruvate kinase M2 (PKM2) by its secretive mechanisms during early stages of wound repair. The released extracellular PKM2 facilitates early wound healing by promoting angiogenesis at wound site. Our studies reveal a new and important molecular linker between the early inflammatory response and proliferation phase in tissue repair process.", "title": "" }, { "docid": "neg:1840226_18", "text": "This paper describes a framework for the estimation of shape from sparse or incomplete range data. It uses a shape representation called blending, which allows for the geometric combination of shapes into a unified model—selected regions of the component shapes are cut-out and glued together. Estimation of shape using this representation is realized using a physics-based framework, and also includes a process for deciding how to adapt the structure and topology of the model to improve the fit. The blending representation helps avoid abrupt changes in model geometry during fitting by allowing the smooth evolution of the shape, which improves the robustness of the technique. We demonstrate this framework with a series of experiments showing its ability to automatically extract structured representations from range data given both structurally and topologically complex objects. (University of Pennsylvania Department of Computer and Information Science Technical Report No. MSCIS-97-12, available at ScholarlyCommons: http://repository.upenn.edu/cis_reports/47; appeared in IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, No. 11, pp. 1186-1205, November 1998.)", "title": "" }, { "docid": "neg:1840226_19", "text": "Opinion mining is considered as a subfield of natural language processing, information retrieval and text mining. 
Opinion mining is the process of extracting human thoughts and perceptions from unstructured texts, which, with the emergence of online social media and the massive volume of users’ comments, has become a useful, attractive and also challenging issue. There is a variety of research with different trends and approaches in this area, but a comprehensive study investigating it from all aspects is still lacking. In this paper we present a complete, multilateral and systematic review of opinion mining and sentiment analysis to classify available methods and compare their advantages and drawbacks, in order to gain a better understanding of the open challenges and solutions and to clarify the future direction. For this purpose, we present a framework of opinion mining together with its steps and levels, and then we monitor, classify, summarize and compare the proposed techniques for aspect extraction, opinion classification, summary production and evaluation, based on the major validated scientific works. To allow a better comparison, we also propose some factors in each category, which help to give a better understanding of the advantages and disadvantages of different methods.", "title": "" } ]
1840227
Improving Optical Character Recognition Process for Low Resolution Images
[ { "docid": "pos:1840227_0", "text": "This paper explores the use of clickthrough data for query spelling correction. First, large amounts of query-correction pairs are derived by analyzing users' query reformulation behavior encoded in the clickthrough data. Then, a phrase-based error model that accounts for the transformation probability between multi-term phrases is trained and integrated into a query speller system. Experiments are carried out on a human-labeled data set. Results show that the system using the phrase-based error model outperforms significantly its baseline systems.", "title": "" } ]
[ { "docid": "neg:1840227_0", "text": "In competitive games where players' skill levels are mis-matched, the play experience can be unsatisfying for both stronger and weaker players. Player balancing provides assistance for less-skilled players in order to make games more competitive and engaging. Although player balancing can be seen in many real-world games, there is little work on the design and effectiveness of these techniques outside of shooting games. In this paper we provide new knowledge about player balancing in the popular and competitive rac-ing genre. We studied issues of noticeability and balancing effectiveness in a prototype racing game, and tested the effects of several balancing techniques on performance and play experience. The techniques significantly improved the balance of player performance, were preferred by both experts and novices, increased novices' feelings of competi-tiveness, and did not detract from experts' experience. Our results provide new understanding of the design and use of player balancing for racing games, and provide novel tech-niques that can also be applied to other genres.", "title": "" }, { "docid": "neg:1840227_1", "text": "In this paper, we propose a workflow and a machine learning model for recognizing handwritten characters on form document. The learning model is based on Convolutional Neural Network (CNN) as a powerful feature extraction and Support Vector Machines (SVM) as a high-end classifier. The proposed method is more efficient than modifying the CNN with complex architecture. We evaluated some SVM and found that the linear SVM using L1 loss function and L2 regularization giving the best performance both of the accuracy rate and the computation time. Based on the experiment results using data from NIST SD 192nd edition both for training and testing, the proposed method which combines CNN and linear SVM using L1 loss function and L2 regularization achieved a recognition rate better than only CNN. The recognition rate achieved by the proposed method are 98.85% on numeral characters, 93.05% on uppercase characters, 86.21% on lowercase characters, and 91.37% on the merger of numeral and uppercase characters. While the original CNN achieves an accuracy rate of 98.30% on numeral characters, 92.33% on uppercase characters, 83.54% on lowercase characters, and 88.32% on the merger of numeral and uppercase characters. The proposed method was also validated by using ten folds cross-validation, and it shows that the proposed method still can improve the accuracy rate. The learning model was used to construct a handwriting recognition system to recognize a more challenging data on form document automatically. The pre-processing, segmentation and character recognition are integrated into one system. The output of the system is converted into an editable text. The system gives an accuracy rate of 83.37% on ten different test form document.", "title": "" }, { "docid": "neg:1840227_2", "text": "Since Sharir and Pnueli, algorithms for context-sensitivity have been defined in terms of ‘valid’ paths in an interprocedural flow graph. The definition of valid paths requires atomic call and ret statements, and encapsulated procedures. Thus, the resulting algorithms are not directly applicable when behavior similar to call and ret instructions may be realized using non-atomic statements, or when procedures do not have rigid boundaries, such as with programs in low level languages like assembly or RTL. 
We present a framework for context-sensitive analysis that requires neither atomic call and ret instructions, nor encapsulated procedures. The framework presented decouples the transfer of control semantics and the context manipulation semantics of statements. A new definition of context-sensitivity, called stack contexts, is developed. A stack context, which is defined using trace semantics, is more general than Sharir and Pnueli’s interprocedural path based calling-context. An abstract interpretation based framework is developed to reason about stack-contexts and to derive analogues of calling-context based algorithms using stack-context. The framework presented is suitable for deriving algorithms for analyzing binary programs, such as malware, that employ obfuscations with the deliberate intent of defeating automated analysis. The framework is used to create a context-sensitive version of Venable et al.’s algorithm for detecting obfuscated calls in x86 binaries. Experimental results from comparing context-insensitive, Sharir and Pnueli’s calling-context-sensitive, and stack-context-sensitive versions of the algorithm are presented.", "title": "" }, { "docid": "neg:1840227_3", "text": "This paper describes a Diffie-Hellman based encryption scheme, DHAES. The scheme is as efficient as ElGamal encryption, but has stronger security properties. Furthermore, these security properties are proven to hold under appropriate assumptions on the underlying primitive. We show that DHAES has not only the \"basic\" property of secure encryption (namely privacy under a chosen-plaintext attack) but also achieves privacy under both non-adaptive and adaptive chosen-ciphertext attacks. (And hence it also achieves non-malleability.) DHAES is built in a generic way from lower-level primitives: a symmetric encryption scheme, a message authentication code, group operations in an arbitrary group, and a cryptographic hash function. In particular, the underlying group may be an elliptic-curve group or the multiplicative group of integers modulo a prime number. The proofs of security are based on appropriate assumptions about the hardness of the Diffie-Hellman problem and the assumption that the underlying symmetric primitives are secure. The assumptions are all standard in the sense that no random oracles are involved. We suggest that DHAES provides an attractive starting point for developing public-key encryption standards based on the Diffie-Hellman assumption.", "title": "" }, { "docid": "neg:1840227_4", "text": "We propose a low-cost and reconfigurable plasma frequency selective surface (PFSS) with fluorescent lamps for Global Navigation Satellite System (GNSS) applications. Although many antenna topologies exist for broadband operation, antennas with plasma frequency selective surfaces are limited and offer certain advantages. Especially for GNSS applications, antenna performance can be demanding given the limited space for the antenna. In almost all GNSS antennas, a perfect electric conductor (PEC) ground is utilized. In this study, we compare PEC and PFSS reflectors over the entire GNSS bandwidth. We believe the PFSS structure can not only provide the benefits of a PEC ground but also lower the antenna profile.", "title": "" }, { "docid": "neg:1840227_5", "text": "As a consequence of the numerous cyber attacks over the last decade, both the consideration and use of cyberspace has fundamentally changed, and will continue to evolve. 
Military forces all over the world have come to value the new role of cyberspace in warfare, building up cyber commands, and establishing new capabilities. Integral to such capabilities is that military forces fundamentally depend on the rapid exchange of information in order for their decision-making processes to gain superiority on the battlefield; this compounds the need to develop network-enabled capabilities to realize network-centric warfare. This triangle of cyber offense, cyber defence, and cyber dependence creates a challenging and complex system of interdependencies. Alongside, while numerous technologies have not improved cyber security significantly, this may change with upcoming new concepts and systems, like decentralized ledger technologies (Blockchains) or quantum-secured communication. Following these thoughts, the paper analyses the development of both cyber threats and defence capabilities during the past 10 years, evaluates the current situation and gives recommendations for improvements. To this end, the paper is structured as follows: first, general conditions for military forces with respect to \"cyber\" are described, including an analysis of the most likely courses of action of the West and their seemingly traditional adversary in the East, Russia. The overview includes a discussion of the usefulness of the measures and an overview of upcoming technologies critical for cyber security. Finally, requirements and recommendations for the further development of cyber defence are briefly covered.", "title": "" }, { "docid": "neg:1840227_6", "text": "The maximum clique problem is a well known NP-Hard problem with applications in data mining, network analysis, information retrieval and many other areas related to the World Wide Web. There exist several algorithms for the problem with acceptable runtimes for certain classes of graphs, but many of them are infeasible for massive graphs. We present a new exact algorithm that employs novel pruning techniques and is able to find maximum cliques in very large, sparse graphs quickly. Extensive experiments on different kinds of synthetic and real-world graphs show that our new algorithm can be orders of magnitude faster than existing algorithms. We also present a heuristic that runs orders of magnitude faster than the exact algorithm while providing optimal or near-optimal solutions. We illustrate a simple application of the algorithms in developing methods for detection of overlapping communities in networks.", "title": "" }, { "docid": "neg:1840227_7", "text": "Lateral flow (immuno)assays are currently used for qualitative, semiquantitative and to some extent quantitative monitoring in resource-poor or non-laboratory environments. Applications include tests on pathogens, drugs, hormones and metabolites in biomedical, phytosanitary, veterinary, feed/food and environmental settings. We describe principles of current formats, applications, limitations and perspectives for quantitative monitoring. We illustrate the potentials and limitations of analysis with lateral flow (immuno)assays using a literature survey and a SWOT analysis (acronym for \"strengths, weaknesses, opportunities, threats\"). Articles referred to in this survey were searched for on MEDLINE, Scopus and in references of reviewed papers. Search terms included \"immunochromatography\", \"sol particle immunoassay\", \"lateral flow immunoassay\" and \"dipstick assay\".", "title": "" }, { "docid": "neg:1840227_8", "text": "
PLS-regression (PLSR) is the PLS approach in its simplest, and in chemistry and technology, most used form (two-block predictive PLS). PLSR is a method for relating two data matrices, X and Y, by a linear multivariate model, but goes beyond traditional regression in that it models also the structure of X and Y. PLSR derives its usefulness from its ability to analyze data with many, noisy, collinear, and even incomplete variables in both X and Y. PLSR has the desirable property that the precision of the model parameters improves with the increasing number of relevant variables and observations. This article reviews PLSR as it has developed to become a standard tool in chemometrics and used in chemistry and engineering. The underlying model and its assumptions are discussed, and commonly used diagnostics are reviewed together with the interpretation of resulting parameters. Two examples are used as illustrations: First, a Quantitative Structure–Activity Relationship (QSAR)/Quantitative Structure–Property Relationship (QSPR) data set of peptides is used to outline how to develop, interpret and refine a PLSR model. Second, a data set from the manufacturing of recycled paper is analyzed to illustrate time series modelling of process data by means of PLSR and time-lagged X-variables. © 2001 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "neg:1840227_9", "text": "This letter presents our initial results in deep learning for channel estimation and signal detection in orthogonal frequency-division multiplexing (OFDM) systems. In this letter, we exploit deep learning to handle wireless OFDM channels in an end-to-end manner. Different from existing OFDM receivers that first estimate channel state information (CSI) explicitly and then detect/recover the transmitted symbols using the estimated CSI, the proposed deep learning-based approach estimates CSI implicitly and recovers the transmitted symbols directly. To address channel distortion, a deep learning model is first trained offline using the data generated from simulation based on channel statistics and then used for recovering the online transmitted data directly. From our simulation results, the deep learning based approach can address channel distortion and detect the transmitted symbols with performance comparable to the minimum mean-square error estimator. Furthermore, the deep learning-based approach is more robust than conventional methods when fewer training pilots are used, the cyclic prefix is omitted, and nonlinear clipping noise exists. In summary, deep learning is a promising tool for channel estimation and signal detection in wireless communications with complicated channel distortion and interference.", "title": "" }, { "docid": "neg:1840227_10", "text": "In this paper we present a novel method for desktop-to-mobile adaptation. The solution also supports end-users in customizing multi-device ubiquitous user interfaces. In particular, we describe an algorithm and the corresponding tool support to perform desktop-to-mobile adaptation by exploiting logical user interface descriptions able to capture interaction semantic information indicating the purpose of the interface elements. We also compare our solution with existing tools for similar goals.", "title": "" }, { "docid": "neg:1840227_11", "text": "Chimeric antigen receptors (CARs) have been used to redirect the specificity of autologous T cells against leukemia and lymphoma with promising clinical results. 
Extending this approach to allogeneic T cells is problematic as they carry a significant risk of graft-versus-host disease (GVHD). Natural killer (NK) cells are highly cytotoxic effectors, killing their targets in a non-antigen-specific manner without causing GVHD. Cord blood (CB) offers an attractive, allogeneic, off-the-self source of NK cells for immunotherapy. We transduced CB-derived NK cells with a retroviral vector incorporating the genes for CAR-CD19, IL-15 and inducible caspase-9-based suicide gene (iC9), and demonstrated efficient killing of CD19-expressing cell lines and primary leukemia cells in vitro, with marked prolongation of survival in a xenograft Raji lymphoma murine model. Interleukin-15 (IL-15) production by the transduced CB-NK cells critically improved their function. Moreover, iC9/CAR.19/IL-15 CB-NK cells were readily eliminated upon pharmacologic activation of the iC9 suicide gene. In conclusion, we have developed a novel approach to immunotherapy using engineered CB-derived NK cells, which are easy to produce, exhibit striking efficacy and incorporate safety measures to limit toxicity. This approach should greatly improve the logistics of delivering this therapy to large numbers of patients, a major limitation to current CAR-T-cell therapies.", "title": "" }, { "docid": "neg:1840227_12", "text": "The current state of A. D. Baddeley and G. J. Hitch's (1974) multicomponent working memory model is reviewed. The phonological and visuospatial subsystems have been extensively investigated, leading both to challenges over interpretation of individual phenomena and to more detailed attempts to model the processes underlying the subsystems. Analysis of the controlling central executive has proved more challenging, leading to a proposed clarification in which the executive is assumed to be a limited capacity attentional system, aided by a newly postulated fourth system, the episodic buffer. Current interest focuses most strongly on the link between working memory and long-term memory and on the processes allowing the integration of information from the component subsystems. The model has proved valuable in accounting for data from a wide range of participant groups under a rich array of task conditions. Working memory does still appear to be working.", "title": "" }, { "docid": "neg:1840227_13", "text": "PURPOSE\nTo examine the feasibility and preliminary benefits of an integrative cognitive behavioral therapy (CBT) with adolescents with inflammatory bowel disease and anxiety.\n\n\nDESIGN AND METHODS\nNine adolescents participated in a CBT program at their gastroenterologist's office. Structured diagnostic interviews, self-report measures of anxiety and pain, and physician-rated disease severity were collected pretreatment and post-treatment.\n\n\nRESULTS\nPostintervention, 88% of adolescents were treatment responders, and 50% no longer met criteria for their principal anxiety disorder. Decreases were demonstrated in anxiety, pain, and disease severity.\n\n\nPRACTICE IMPLICATIONS\nAnxiety screening and a mental health referral to professionals familiar with medical management issues is important.", "title": "" }, { "docid": "neg:1840227_14", "text": "Complex regional pain syndrome (CRPS) is a chronic, intensified localized pain condition that can affect children and adolescents as well as adults, but is more common among adolescent girls. 
Symptoms include limb pain; allodynia; hyperalgesia; swelling and/or changes in skin color of the affected limb; dry, mottled skin; hyperhidrosis and trophic changes of the nails and hair. The exact mechanism of CRPS is unknown, although several different mechanisms have been suggested. The diagnosis is clinical, with the aid of the adult criteria for CRPS. Standard care consists of a multidisciplinary approach with the implementation of intensive physical therapy in conjunction with psychological counseling. Pharmacological treatments may aid in reducing pain in order to allow the patient to participate fully in intensive physiotherapy. The prognosis in pediatric CRPS is favorable.", "title": "" }, { "docid": "neg:1840227_15", "text": "This study examined cognitive distortions and coping styles as potential mediators for the effects of mindfulness meditation on anxiety, negative affect, positive affect, and hope in college students. Our pre- and postintervention design had four conditions: control, brief meditation focused on attention, brief meditation focused on loving kindness, and longer meditation combining both attentional and loving kindness aspects of mindfulness. Each group met weekly over the course of a semester. Longer combined meditation significantly reduced anxiety and negative affect and increased hope. Changes in cognitive distortions mediated intervention effects for anxiety, negative affect, and hope. Further research is needed to determine differential effects of types of meditation.", "title": "" }, { "docid": "neg:1840227_16", "text": "Integrative approaches to the study of complex systems demand that one knows the manner in which the parts comprising the system are connected. The structure of the complex network defining the interactions provides insight into the function and evolution of the components of the system. Unfortunately, the large size and intricacy of these networks implies that such insight is usually difficult to extract. Here, we propose a method that allows one to systematically extract and display information contained in complex networks. Specifically, we demonstrate that one can (i) find modules in complex networks and (ii) classify nodes into universal roles according to their pattern of within- and between-module connections. The method thus yields a 'cartographic representation' of complex networks.", "title": "" }, { "docid": "neg:1840227_17", "text": "Multichannel tetrode array recording in awake behaving animals provides a powerful method to record the activity of large numbers of neurons. The power of this method could be extended if further information concerning the intracellular state of the neurons could be extracted from the extracellularly recorded signals. Toward this end, we have simultaneously recorded intracellular and extracellular signals from hippocampal CA1 pyramidal cells and interneurons in the anesthetized rat. We found that several intracellular parameters can be deduced from extracellular spike waveforms. The width of the intracellular action potential is defined precisely by distinct points on the extracellular spike. Amplitude changes of the intracellular action potential are reflected by changes in the amplitude of the initial negative phase of the extracellular spike, and these amplitude changes are dependent on the state of the network. 
In addition, intracellular recordings from dendrites with simultaneous extracellular recordings from the soma indicate that, on average, action potentials are initiated in the perisomatic region and propagate to the dendrites at 1.68 m/s. Finally we determined that a tetrode in hippocampal area CA1 theoretically should be able to record electrical signals from approximately 1, 000 neurons. Of these, 60-100 neurons should generate spikes of sufficient amplitude to be detectable from the noise and to allow for their separation using current spatial clustering methods. This theoretical maximum is in contrast to the approximately six units that are usually detected per tetrode. From this, we conclude that a large percentage of hippocampal CA1 pyramidal cells are silent in any given behavioral condition.", "title": "" }, { "docid": "neg:1840227_18", "text": "Standard SVM training has O(m3) time andO(m2) space complexities, where m is the training set size. It is thus computationally infeasible on very larg e data sets. By observing that practical SVM implementations onlyapproximatethe optimal solution by an iterative strategy, we scale up kernel methods by exploiting such “approximateness” in t h s paper. We first show that many kernel methods can be equivalently formulated as minimum en closing ball (MEB) problems in computational geometry. Then, by adopting an efficient appr oximate MEB algorithm, we obtain provably approximately optimal solutions with the idea of c re sets. Our proposed Core Vector Machine (CVM) algorithm can be used with nonlinear kernels a nd has a time complexity that is linear in m and a space complexity that is independent of m. Experiments on large toy and realworld data sets demonstrate that the CVM is as accurate as exi sting SVM implementations, but is much faster and can handle much larger data sets than existin g scale-up methods. For example, CVM with the Gaussian kernel produces superior results on th e KDDCUP-99 intrusion detection data, which has about five million training patterns, in only 1.4 seconds on a 3.2GHz Pentium–4 PC.", "title": "" }, { "docid": "neg:1840227_19", "text": "Received: 27 October 2014 Revised: 29 December 2015 Accepted: 12 January 2016 Abstract In information systems (IS) literature, there is ongoing debate as to whether the field has become fragmented and lost its identity in response to the rapid changes of the field. The paper contributes to this discussion by providing quantitative measurement of the fragmentation or cohesiveness level of the field. A co-word analysis approach aiding in visualization of the intellectual map of IS is applied through application of clustering analysis, network maps, strategic diagram techniques, and graph theory for a collection of 47,467 keywords from 9551 articles, published in 10 major IS journals and the proceedings of two leading IS conferences over a span of 20 years, 1993 through 2012. The study identified the popular, core, and bridging topics of IS research for the periods 1993–2002 and 2003–2012. Its results show that research topics and subfields underwent substantial change between those two periods and the field became more concrete and cohesive, increasing in density. Findings from this study suggest that the evolution of the research topics and themes in the IS field should be seen as part of the natural metabolism of the field, rather than a process of fragmentation or disintegration. 
European Journal of Information Systems advance online publication, 22 March 2016; doi:10.1057/ejis.2016.5", "title": "" } ]
1840228
Bayesian nonparametric sparse seemingly unrelated regression model (SUR)
[ { "docid": "pos:1840228_0", "text": "We propose several econometric measures of connectedness based on principalcomponents analysis and Granger-causality networks, and apply them to the monthly returns of hedge funds, banks, broker/dealers, and insurance companies. We find that all four sectors have become highly interrelated over the past decade, likely increasing the level of systemic risk in the finance and insurance industries through a complex and time-varying network of relationships. These measures can also identify and quantify financial crisis periods, and seem to contain predictive power in out-of-sample tests. Our results show an asymmetry in the degree of connectedness among the four sectors, with banks playing a much more important role in transmitting shocks than other financial institutions. & 2011 Elsevier B.V. All rights reserved.", "title": "" } ]
[ { "docid": "neg:1840228_0", "text": "A new wideband bandpass filter (BPF) with composite short- and open-circuited stubs has been proposed in this letter. With the two kinds of stubs, two pairs of transmission zeros (TZs) can be produced on the two sides of the desired passband. The even-/odd-mode analysis method is used to derive the input admittances of its bisection circuits. After the Richard's transformation, these bisection circuits are in the same format of two LC circuits. By combining these two LC circuits, the equivalent circuit of the proposed filter is obtained. Through the analysis of the equivalent circuit, the open-circuited stubs introduce transmission poles in the complex frequencies and one pair of TZs in the real frequencies, and the short-circuited stubs generate one pair of TZs to block the dc component. A wideband BPF is designed and fabricated to verify the proposed design principle.", "title": "" }, { "docid": "neg:1840228_1", "text": "Maximum power point tracking (MPPT) is a very important necessity in a system of energy conversion from a renewable energy source. Many research papers have been produced with various schemes over past decades for the MPPT in photovoltaic (PV) system. This research paper inspires its motivation from the fact that the keen study of these existing techniques reveals that there is still quite a need for an absolutely generic and yet very simple MPPT controller which should have all the following traits: total independence from system's parameters, ability to reach the global maxima in minimal possible steps, the correct sense of tracking direction despite the abrupt atmospheric or parametrical changes, and finally having a very cost-effective and energy efficient hardware with the complexity no more than that of a minimal MPPT algorithm like Perturb and Observe (P&O). The MPPT controller presented in this paper is a successful attempt to fulfil all these requirements. It extends the MPPT techniques found in the recent research papers with some innovations in the control algorithm and a simplistic hardware. The simulation results confirm that the proposed MPPT controller is very fast, very efficient, very simple and low cost as compared to the contemporary ones.", "title": "" }, { "docid": "neg:1840228_2", "text": "Sophorolipids are biosurfactants belonging to the class of the glycolipid, produced mainly by the osmophilic yeast Candida bombicola. Structurally they are composed by a disaccharide sophorose (2’-O-β-D-glucopyranosyl-β-D-glycopyranose) which is linked β -glycosidically to a long fatty acid chain with generally 16 to 18 atoms of carbon with one or more unsaturation. They are produced as a complex mix containing up to 40 molecules and associated isomers, depending on the species which produces it, the substrate used and the culture conditions. They present properties which are very similar or superior to the synthetic surfactants and other biosurfactants with the advantage of presenting low toxicity, higher biodegradability, better environmental compatibility, high selectivity and specific activity in a broad range of temperature, pH and salinity conditions. Its biological activities are directly related with its chemical structure. 
Sophorolipids possess a great potential for application in areas such as: food; bioremediation; cosmetics; pharmaceutical; biomedicine; nanotechnology and enhanced oil recovery.", "title": "" }, { "docid": "neg:1840228_3", "text": "In this paper, various key points in the rotor design of a low-cost permanent-magnet-assisted synchronous reluctance motor (PMa-SynRM) are introduced and their effects are studied. Finite-element approach has been utilized to show the effects of these parameters on the developed average electromagnetic torque and total d-q inductances. One of the features considered in the design of this motor is the magnetization of the permanent magnets mounted in the rotor core using the stator windings. This feature will cause a reduction in cost and ease of manufacturing. Effectiveness of the design procedure is validated by presenting simulation and experimental results of a 1.5-kW prototype PMa-SynRM", "title": "" }, { "docid": "neg:1840228_4", "text": "Several studies have found that women tend to demonstrate stronger preferences for masculine men as short-term partners than as long-term partners, though there is considerable variation among women in the magnitude of this effect. One possible source of this variation is individual differences in the extent to which women perceive masculine men to possess antisocial traits that are less costly in short-term relationships than in long-term relationships. Consistent with this proposal, here we show that the extent to which women report stronger preferences for men with low (i.e., masculine) voice pitch as short-term partners than as long-term partners is associated with the extent to which they attribute physical dominance and low trustworthiness to these masculine voices. Thus, our findings suggest that variation in the extent to which women attribute negative personality characteristics to masculine men predicts individual differences in the magnitude of the effect of relationship context on women's masculinity preferences, highlighting the importance of perceived personality attributions for individual differences in women's judgments of men's vocal attractiveness and, potentially, their mate preferences.", "title": "" }, { "docid": "neg:1840228_5", "text": "We present a full-stack design to accelerate deep learning inference with FPGAs. Our contribution is two-fold. At the software layer, we leverage and extend TVM, the end-to-end deep learning optimizing compiler, in order to harness FPGA-based acceleration. At the the hardware layer, we present the Versatile Tensor Accelerator (VTA) which presents a generic, modular, and customizable architecture for TPU-like accelerators. Our results take a ResNet-18 description in MxNet and compiles it down to perform 8-bit inference on a 256-PE accelerator implemented on a low-cost Xilinx Zynq FPGA, clocked at 100MHz. Our full hardware acceleration stack will be made available for the community to reproduce, and build upon at http://github.com/uwsaml/vta.", "title": "" }, { "docid": "neg:1840228_6", "text": "Relation Extraction is an important subtask of Information Extraction which has the potential of employing deep learning (DL) models with the creation of large datasets using distant supervision. 
In this review, we compare the contributions and pitfalls of the various DL models that have been used for the task, to help guide the path ahead.", "title": "" }, { "docid": "neg:1840228_7", "text": "Van den Ende-Gupta Syndrome (VDEGS) is an autosomal recessive disorder characterized by blepharophimosis, distinctive nose, hypoplastic maxilla, and skeletal abnormalities. Using homozygosity mapping in four VDEGS patients from three consanguineous families, Anastacio et al. [Anastacio et al. (2010); Am J Hum Genet 87:553-559] identified homozygous mutations in SCARF2, located at 22q11.2. Bedeschi et al. [2010] described a VDEGS patient with sclerocornea and cataracts with compound heterozygosity for the common 22q11.2 microdeletion and a hemizygous SCARF2 mutation. Because sclerocornea had been described in DiGeorge-velo-cardio-facial syndrome but not in VDEGS, they suggested that the ocular abnormalities were caused by the 22q11.2 microdeletion. We report on a 23-year-old male who presented with bilateral sclerocornea and the VDGEGS phenotype who was subsequently found to be homozygous for a 17 bp deletion in exon 4 of SCARF2. The occurrence of bilateral sclerocornea in our patient together with that of Bedeschi et al., suggests that the full VDEGS phenotype may include sclerocornea resulting from homozygosity or compound heterozygosity for loss of function variants in SCARF2.", "title": "" }, { "docid": "neg:1840228_8", "text": "There is no doubt that Social media has gained wider acceptability and usability and is also becoming probably the most important communication tools among students especially at the higher level of educational pursuit. As much as social media is viewed as having bridged the gap in communication that existed. Within the social media Facebook, Twitter and others are now gaining more and more patronage. These websites and social forums are way of communicating directly with other people socially. Social media has the potentials of influencing decision-making in a very short time regardless of the distance. On the bases of its influence, benefits and demerits this study is carried out in order to highlight the potentials of social media in the academic setting by collaborative learning and improve the students' academic performance. The results show that collaborative learning positively and significantly with interactive with peers, interactive with teachers and engagement which impact the students’ academic performance.", "title": "" }, { "docid": "neg:1840228_9", "text": "As manga (Japanese comics) have become common content in many countries, it is necessary to search manga by text query or translate them automatically. For these applications, we must first extract texts from manga. In this paper, we develop a method to detect text regions in manga. Taking motivation from methods used in scene text detection, we propose an approach using classifiers for both connected components and regions. We have also developed a text region dataset of manga, which enables learning and detailed evaluations of methods used to detect text regions. Experiments using the dataset showed that our text detection method performs more effectively than existing methods.", "title": "" }, { "docid": "neg:1840228_10", "text": "This letter presents a design of a dual-orthogonal, linear polarized antenna for the UWB-IR technology in the frequency range from 3.1 to 10.6 GHz. 
The antenna is compact with dimensions of 40 times 40 mm of the radiation plane, which is orthogonal to the radiation direction. Both the antenna and the feeding network are realized in planar technology. The radiation principle and the computed design are verified by a prototype. The input impedance matching is better than -6 dB. The measured results show a mean gain in copolarization close to 4 dBi. The cross-polarizations suppression w.r.t. the copolarization is better than 20 dB. Due to its features, the antenna is suited for polarimetric ultrawideband (UWB) radar and UWB multiple-input-multiple-output (MIMO) applications.", "title": "" }, { "docid": "neg:1840228_11", "text": "With near exponential growth predicted in the number of Internet of Things (IoT) based devices within networked systems there is need of a means of providing their flexible and secure integration. Software Defined Networking (SDN) is a concept that allows for the centralised control and configuration of network devices, and also provides opportunities for the dynamic control of network traffic. This paper proposes the use of an SDN gateway as a distributed means of monitoring the traffic originating from and directed to IoT based devices. This gateway can then both detect anomalous behaviour and perform an appropriate response (blocking, forwarding, or applying Quality of Service). Initial results demonstrate that, while the addition of the attack detection functionality has an impact on the number of flow installations possible per second, it can successfully detect and block TCP and ICMP flood based attacks.", "title": "" }, { "docid": "neg:1840228_12", "text": "The Human Papillomavirus (HPV) E6 protein is one of three oncoproteins encoded by the virus. It has long been recognized as a potent oncogene and is intimately associated with the events that result in the malignant conversion of virally infected cells. In order to understand the mechanisms by which E6 contributes to the development of human malignancy many laboratories have focused their attention on identifying the cellular proteins with which E6 interacts. In this review we discuss these interactions in the light of their respective contributions to the malignant progression of HPV transformed cells.", "title": "" }, { "docid": "neg:1840228_13", "text": "The effects of caffeine on cognition were reviewed based on the large body of literature available on the topic. Caffeine does not usually affect performance in learning and memory tasks, although caffeine may occasionally have facilitatory or inhibitory effects on memory and learning. Caffeine facilitates learning in tasks in which information is presented passively; in tasks in which material is learned intentionally, caffeine has no effect. Caffeine facilitates performance in tasks involving working memory to a limited extent, but hinders performance in tasks that heavily depend on working memory, and caffeine appears to rather improve memory performance under suboptimal alertness conditions. Most studies, however, found improvements in reaction time. The ingestion of caffeine does not seem to affect long-term memory. At low doses, caffeine improves hedonic tone and reduces anxiety, while at high doses, there is an increase in tense arousal, including anxiety, nervousness, jitteriness. The larger improvement of performance in fatigued subjects confirms that caffeine is a mild stimulant. 
Caffeine has also been reported to prevent cognitive decline in healthy subjects but the results of the studies are heterogeneous, some finding no age-related effect while others reported effects only in one sex and mainly in the oldest population. In conclusion, it appears that caffeine cannot be considered a ;pure' cognitive enhancer. Its indirect action on arousal, mood and concentration contributes in large part to its cognitive enhancing properties.", "title": "" }, { "docid": "neg:1840228_14", "text": "PURPOSE\nWe define the cause of the occurrence of Peyronie's disease.\n\n\nMATERIALS AND METHODS\nClinical evaluation of a large number of patients with Peyronie's disease, while taking into account the pathological and biochemical findings of the penis in patients who have been treated by surgery, has led to an understanding of the relationship of the anatomical structure of the penis to its rigidity during erection, and how the effect of the stress imposed upon those structures during intercourse is modified by the loss of compliance resulting from aging of the collagen composing those structures. Peyronie's disease occurs most frequently in middle-aged men, less frequently in older men and infrequently in younger men who have more elastic tissues. During erection, when full tumescence has occurred and the elastic tissues of the penis have reached the limit of their compliance, the strands of the septum give vertical rigidity to the penis. Bending the erect penis out of column stresses the attachment of the septal strands to the tunica albuginea.\n\n\nRESULTS\nPlaques of Peyronie's disease are found where the strands of the septum are attached in the dorsal or ventral aspect of the penis. The pathological scar in the tunica albuginea of the corpora cavernosa in Peyronie's disease is characterized by excessive collagen accumulation, fibrin deposition and disordered elastic fibers in the plaque.\n\n\nCONCLUSIONS\nWe suggest that Peyronie's disease results from repetitive microvascular injury, with fibrin deposition and trapping in the tissue space that is not adequately cleared during the normal remodeling and repair of the tear in the tunica. Fibroblast activation and proliferation, enhanced vessel permeability and generation of chemotactic factors for leukocytes are stimulated by fibrin deposited in the normal process of wound healing. However, in Peyronie's disease the lesion fails to resolve either due to an inability to clear the original stimulus or due to further deposition of fibrin subsequent to repeated trauma. Collagen is also trapped and pathological fibrosis ensues.", "title": "" }, { "docid": "neg:1840228_15", "text": "UNLABELLED\nOBJECTIVES. Understanding the factors that promote quality of life in old age has been a staple of social gerontology since its inception and remains a significant theme in aging research. The purpose of this article was to review the state of the science with regard to subjective well-being (SWB) in later life and to identify promising directions for future research.\n\n\nMETHODS\nThis article is based on a review of literature on SWB in aging, sociological, and psychological journals. Although the materials reviewed date back to the early 1960s, the emphasis is on publications in the past decade.\n\n\nRESULTS\nResearch to date paints an effective portrait of the epidemiology of SWB in late life and the factors associated with it. Although the research base is large, causal inferences about the determinants of SWB remain problematic. 
Two recent contributions to the research base are highlighted as emerging issues: studies of secular trends in SWB and cross-national studies. Discussion. The review ends with discussion of priority issues for future research.", "title": "" }, { "docid": "neg:1840228_16", "text": "A resonant current-fed push-pull converter with active clamping is investigated for vehicle inverter application in this paper. A communal clamping capacitor and a voltage doubler topology are employed. Moreover, two kinds of resonances to realize ZVS ON of MOSFETs or to remove reverse-recovery problem of secondary diodes are analyzed and an optimized resonance is utilized for volume restriction. At last, a 150W prototype with 10V to 16V input voltage, 360V output voltage is implemented to verify the design.", "title": "" }, { "docid": "neg:1840228_17", "text": "This paper proposes a novel mains voltage proportional input current control concept eliminating the multiplication of the output voltage controller output and the mains ac phase voltages for the derivation of mains phase current reference values of a three-phase/level/switch pulsewidth-modulated (VIENNA) rectifier system. Furthermore, the concept features low input current ripple amplitude as, e.g., achieved for space-vector modulation, a low amplitude of the third harmonic of the current flowing into the output voltage center point, and a wide range of modulation. The practical realization of the analog control concept as well as experimental results for application with a 5-kW prototype of the pulsewidth-modulated rectifier are presented. Furthermore, a control scheme which relies only on the absolute values of the input phase currents and a modified control scheme which does not require information about the mains phase voltages are presented.", "title": "" }, { "docid": "neg:1840228_18", "text": "This paper presents a systematic approach to transform various fault models to a unified model such that all faults of interest can be handled in one ATPG run. The fault models that can be transformed include, but are not limited to, stuck-at faults, various types of bridging faults, and cell-internal faults. The unified model is the aggressor-victim type of bridging fault model. Two transformation methods, namely fault-based and pattern-based transformations, are developed for cell-external and cell-internal faults, respectively. With the proposed approach, one can use an ATPG tool for bridging faults to deal with the test generation problems of multiple fault models simultaneously. Hence the total test generation time can be reduced and highly compact test sets can be obtained. Experimental results show that on average 54.94% (16.45%) and 47.22% (17.51%) test pattern volume reductions are achieved compared to the method that deals with the three fault models separately without (with) fault dropping for ISCAS'89 andIWLS'05 circuits, respectively.", "title": "" }, { "docid": "neg:1840228_19", "text": "AIM\nInternet gaming disorder (IGD) is a serious disorder leading to and maintaining pertinent personal and social impairment. IGD has to be considered in view of heterogeneous and incomplete concepts. 
We therefore reviewed the scientific literature on IGD to provide an overview focusing on definitions, symptoms, prevalence, and aetiology.\n\n\nMETHOD\nWe systematically reviewed the databases ERIC, PsyARTICLES, PsycINFO, PSYNDEX, and PubMed for the period January 1991 to August 2016, and additionally identified secondary references.\n\n\nRESULTS\nThe proposed definition in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition provides a good starting point for diagnosing IGD but entails some disadvantages. Developing IGD requires several interacting internal factors such as deficient self, mood and reward regulation, problems of decision-making, and external factors such as deficient family background and social skills. In addition, specific game-related factors may promote IGD. Summarizing aetiological knowledge, we suggest an integrated model of IGD elucidating the interplay of internal and external factors.\n\n\nINTERPRETATION\nSo far, the concept of IGD and the pathways leading to it are not entirely clear. In particular, long-term follow-up studies are missing. IGD should be understood as an endangering disorder with a complex psychosocial background.\n\n\nWHAT THIS PAPER ADDS\nIn representative samples of children and adolescents, on average, 2% are affected by Internet gaming disorder (IGD). The mean prevalences (overall, clinical samples included) reach 5.5%. Definitions are heterogeneous and the relationship with substance-related addictions is inconsistent. Many aetiological factors are related to the development and maintenance of IGD. This review presents an integrated model of IGD, delineating the interplay of these factors.", "title": "" } ]
1840229
Risk assessment model selection in construction industry
[ { "docid": "pos:1840229_0", "text": "This study simplifies the complicated metric distance method [L.S. Chen, C.H. Cheng, Selecting IS personnel using ranking fuzzy number by metric distance method, Eur. J. Operational Res. 160 (3) 2005 803–820], and proposes an algorithm to modify Chen’s Fuzzy TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) [C.T. Chen, Extensions of the TOPSIS for group decision-making under fuzzy environment, Fuzzy Sets Syst., 114 (2000) 1–9]. From experimental verification, Chen directly assigned the fuzzy numbers 1̃ and 0̃ as fuzzy positive ideal solution (PIS) and negative ideal solution (NIS). Chen’s method sometimes violates the basic concepts of traditional TOPSIS. This study thus proposes fuzzy hierarchical TOPSIS, which not only is well suited for evaluating fuzziness and uncertainty problems, but also can provide more objective and accurate criterion weights, while simultaneously avoiding the problem of Chen’s Fuzzy TOPSIS. For application and verification, this study presents a numerical example and build a practical supplier selection problem to verify our proposed method and compare it with other methods. 2008 Elsevier B.V. All rights reserved. * Corresponding author. Tel.: +886 5 5342601x5312; fax: +886 5 531 2077. E-mail addresses: jwwang@mail.nhu.edu.tw (J.-W. Wang), chcheng@yuntech.edu.tw (C.-H. Cheng), lendlice@ms12.url.com.tw (K.-C. Huang).", "title": "" } ]
[ { "docid": "neg:1840229_0", "text": "In a recent paper, the authors proposed a new class of low-complexity iterative thresholding algorithms for reconstructing sparse signals from a small set of linear measurements. The new algorithms are broadly referred to as AMP, for approximate message passing. This is the first of two conference papers describing the derivation of these algorithms, connection with the related literature, extensions of the original framework, and new empirical evidence. In particular, the present paper outlines the derivation of AMP from standard sum-product belief propagation, and its extension in several directions. We also discuss relations with formal calculations based on statistical mechanics methods.", "title": "" }, { "docid": "neg:1840229_1", "text": "Natural language generators for taskoriented dialogue must effectively realize system dialogue actions and their associated semantics. In many applications, it is also desirable for generators to control the style of an utterance. To date, work on task-oriented neural generation has primarily focused on semantic fidelity rather than achieving stylistic goals, while work on style has been done in contexts where it is difficult to measure content preservation. Here we present three different sequence-to-sequence models and carefully test how well they disentangle content and style. We use a statistical generator, PERSONAGE, to synthesize a new corpus of over 88,000 restaurant domain utterances whose style varies according to models of personality, giving us total control over both the semantic content and the stylistic variation in the training data. We then vary the amount of explicit stylistic supervision given to the three models. We show that our most explicit model can simultaneously achieve high fidelity to both semantic and stylistic goals: this model adds a context vector of 36 stylistic parameters as input to the hidden state of the encoder at each time step, showing the benefits of explicit stylistic supervision, even when the amount of training data is large.", "title": "" }, { "docid": "neg:1840229_2", "text": "Public opinion polarization is here conceived as a process of alignment along multiple lines of potential disagreement and measured as growing constraint in individuals' preferences. Using NES data from 1972 to 2004, the authors model trends in issue partisanship-the correlation of issue attitudes with party identification-and issue alignment-the correlation between pairs of issues-and find a substantive increase in issue partisanship, but little evidence of issue alignment. The findings suggest that opinion changes correspond more to a resorting of party labels among voters than to greater constraint on issue attitudes: since parties are more polarized, they are now better at sorting individuals along ideological lines. Levels of constraint vary across population subgroups: strong partisans and wealthier and politically sophisticated voters have grown more coherent in their beliefs. The authors discuss the consequences of partisan realignment and group sorting on the political process and potential deviations from the classic pluralistic account of American politics.", "title": "" }, { "docid": "neg:1840229_3", "text": "A key issue in face recognition is to seek an effective descriptor for representing face appearance. 
In the context of considering the face image as a set of small facial regions, this paper presents a new face representation approach coined spatial feature interdependence matrix (SFIM). Unlike classical face descriptors which usually use a hierarchically organized or a sequentially concatenated structure to describe the spatial layout features extracted from local regions, SFIM is attributed to the exploitation of the underlying feature interdependences regarding local region pairs inside a class specific face. According to SFIM, the face image is projected onto an undirected connected graph in a manner that explicitly encodes feature interdependence-based relationships between local regions. We calculate the pair-wise interdependence strength as the weighted discrepancy between two feature sets extracted in a hybrid feature space fusing histograms of intensity, local binary pattern and oriented gradients. To achieve the goal of face recognition, our SFIM-based face descriptor is embedded in three different recognition frameworks, namely nearest neighbor search, subspace-based classification, and linear optimization-based classification. Extensive experimental results on four well-known face databases and comprehensive comparisons with the state-of-the-art results are provided to demonstrate the efficacy of the proposed SFIM-based descriptor.", "title": "" }, { "docid": "neg:1840229_4", "text": "Conventional material removal techniques, like CNC milling, have been proven to be able to tackle nearly any machining challenge. On the other hand, the major drawback of using conventional CNC machines is the restricted working area and their produced shape limitation limitations. From a conceptual point of view, industrial robot technology could provide an excellent base for machining being both flexible and cost efficient. However, industrial machining robots lack absolute positioning accuracy, are unable to reject/absorb disturbances in terms of process forces and lack reliable programming and simulation tools to ensure right first time machining, at production startups. This paper reviews the penetration of industrial robots in the challenging field of machining.", "title": "" }, { "docid": "neg:1840229_5", "text": "The value of involving people as ‘users’ or ‘participants’ in the design process is increasingly becoming a point of debate. In this paper we describe a new framework, called ‘informant design’, which advocates efficiency of input from different people: maximizing the value of contributions tlom various informants and design team members at different stages of the design process. To illustrate how this can be achieved we describe a project that uses children and teachers as informants at difTerent stages to help us design an interactive learning environment for teaching ecology.", "title": "" }, { "docid": "neg:1840229_6", "text": "We consider the analysis of Electroencephalography (EEG) and Local Field Potential (LFP) datasets, which are “big” in terms of the size of recorded data but rarely have sufficient labels required to train complex models (e.g., conventional deep learning methods). Furthermore, in many scientific applications, the goal is to be able to understand the underlying features related to the classification, which prohibits the blind application of deep networks. 
This motivates the development of a new model based on parameterized convolutional filters guided by previous neuroscience research; the filters learn relevant frequency bands while targeting synchrony, which are frequency-specific power and phase correlations between electrodes. This results in a highly expressive convolutional neural network with only a few hundred parameters, applicable to smaller datasets. The proposed approach is demonstrated to yield competitive (often state-of-the-art) predictive performance during our empirical tests while yielding interpretable features. Furthermore, a Gaussian process adapter is developed to combine analysis over distinct electrode layouts, allowing the joint processing of multiple datasets to address overfitting and improve generalizability. Finally, it is demonstrated that the proposed framework effectively tracks neural dynamics on children in a clinical trial on Autism Spectrum Disorder.", "title": "" }, { "docid": "neg:1840229_7", "text": "Sparse deep neural networks(DNNs) are efficient in both memory and compute when compared to dense DNNs. But due to irregularity in computation of sparse DNNs, their efficiencies are much lower than that of dense DNNs on general purpose hardwares. This leads to poor/no performance benefits for sparse DNNs. Performance issue for sparse DNNs can be alleviated by bringing structure to the sparsity and leveraging it for improving runtime efficiency. But such structural constraints often lead to sparse models with suboptimal accuracies. In this work, we jointly address both accuracy and performance of sparse DNNs using our proposed class of neural networks called HBsNN (Hierarchical Block sparse Neural Networks).", "title": "" }, { "docid": "neg:1840229_8", "text": "We develop efficient solution methods for a robust empirical risk minimization problem designed to give calibrated confidence intervals on performance and provide optimal tradeoffs between bias and variance. Our methods apply to distributionally robust optimization problems proposed by Ben-Tal et al., which put more weight on observations inducing high loss via a worst-case approach over a non-parametric uncertainty set on the underlying data distribution. Our algorithm solves the resulting minimax problems with nearly the same computational cost of stochastic gradient descent through the use of several carefully designed data structures. For a sample of size n, the per-iteration cost of our method scales as O(log n), which allows us to give optimality certificates that distributionally robust optimization provides at little extra cost compared to empirical risk minimization and stochastic gradient methods.", "title": "" }, { "docid": "neg:1840229_9", "text": "A common discussion subject for the male part of the population in particular, is the prediction of next weekend’s soccer matches, especially for the local team. Knowledge of offensive and defensive skills is valuable in the decision process before making a bet at a bookmaker. In this article we take an applied statistician’s approach to the problem, suggesting a Bayesian dynamic generalised linear model to estimate the time dependent skills of all teams in a league, and to predict next weekend’s soccer matches. The problem is more intricate than it may appear at first glance, as we need to estimate the skills of all teams simultaneously as they are dependent. It is now possible to deal with such inference problems using the iterative simulation technique known as Markov Chain Monte Carlo. 
We will show various applications of the proposed model based on the English Premier League and Division 1 1997-98; Prediction with application to betting, retrospective analysis of the final ranking, detection of surprising matches and how each team’s properties vary during the season.", "title": "" }, { "docid": "neg:1840229_10", "text": "SIMON (Signal Interpretation and MONitoring) continuously collects, permanently stores, and processes bedside medical device data. Since 1998 SIMON has monitored over 3500 trauma intensive care unit (TICU) patients, representing approximately 250,000 hours of continuous monitoring and two billion data points, and is currently operational on all 14 TICU beds at Vanderbilt University Medical Center. This repository of dense physiologic data (heart rate, arterial, pulmonary, central venous, intracranial, and cerebral perfusion pressures, arterial and venous oxygen saturations, and other parameters sampled second-by-second) supports research to identify “new vital signs” features of patient physiology only observable through dense data capture and analysis more predictive of patient status than current measures. SIMON’s alerting and reporting capabilities, including web-based display, sentinel event notification via alphanumeric pagers, and daily summary reports of vital sign statistics, allow these discoveries to be rapidly tested and implemented in a working clinical environment. This", "title": "" }, { "docid": "neg:1840229_11", "text": "Text mining and information retrieval in large collections of scientific literature require automated processing systems that analyse the documents’ content. However, the layout of scientific articles is highly varying across publishers, and common digital document formats are optimised for presentation, but lack structural information. To overcome these challenges, we have developed a processing pipeline that analyses the structure a PDF document using a number of unsupervised machine learning techniques and heuristics. Apart from the meta-data extraction, which we reused from previous work, our system uses only information available from the current document and does not require any pre-trained model. First, contiguous text blocks are extracted from the raw character stream. Next, we determine geometrical relations between these blocks, which, together with geometrical and font information, are then used categorize the blocks into different classes. Based on this resulting logical structure we finally extract the body text and the table of contents of a scientific article. We separately evaluate the individual stages of our pipeline on a number of different datasets and compare it with other document structure analysis approaches. We show that it outperforms a state-of-the-art system in terms of the quality of the extracted body text and table of contents. Our unsupervised approach could provide a basis for advanced digital library scenarios that involve diverse and dynamic corpora.", "title": "" }, { "docid": "neg:1840229_12", "text": "The main purpose of the present study is to help managers cope with the negative effects of technostress on employee use of ICT. Drawing on transaction theory of stress (Cooper, Dewe, & O’Driscoll, 2001) and information systems (IS) continuance theory (Bhattacherjee, 2001) we investigate the effects of technostress on employee intentions to extend the use of ICT at work. 
Our results show that factors that create and inhibit technostress affect both employee satisfaction with the use of ICT and employee intentions to extend the use of ICT. Our findings have important implications for the management of technostress with regard to both individual stress levels and organizational performance. A key implication of our research is that managers should implement strategies for coping with technostress through the theoretical concept of technostress inhibitors. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "neg:1840229_13", "text": "Named Entity Recognition (NER) is the task of classifying or labelling atomic elements in the text into categories such as Person, Location or Organisation. For Arabic language, recognizing named entities is a challenging task because of the complexity and the unique characteristics of this language. In addition, most of the previous work focuses on Modern Standard Arabic (MSA), however, recognizing named entities in social media is becoming more interesting these days. Dialectal Arabic (DA) and MSA are both used in social media, which is deemed as another challenging task. Most state-of-the-art Arabic NER systems count heavily on hand-crafted engineering features and lexicons which is time consuming. In this paper, we introduce a novel neural network architecture which benefits both from characterand word-level representations automatically, by using combination of bidirectional Long Short-Term Memory (LSTM) and Conditional Random Field (CRF), eliminating the need for most feature engineering. Moreover, our model relies on unsupervised word representations learned from unannotated corpora. Experimental results demonstrate that our model achieves state-of-the-art performance on publicly available benchmark for Arabic NER for social media and surpassing the previous system by a large margin.", "title": "" }, { "docid": "neg:1840229_14", "text": "Human papillomavirus infection can cause a variety of benign or malignant oral lesions, and the various genotypes can cause distinct types of lesions. To our best knowledge, there has been no report of 2 different human papillomavirus-related oral lesions in different oral sites in the same patient before. This paper reported a patient with 2 different oral lesions which were clinically and histologically in accord with focal epithelial hyperplasia and oral papilloma, respectively. Using DNA extracted from these 2 different lesions, tissue blocks were tested for presence of human papillomavirus followed by specific polymerase chain reaction testing for 6, 11, 13, 16, 18, and 32 subtypes in order to confirm the clinical diagnosis. Finally, human papillomavirus-32-positive focal epithelial hyperplasia accompanying human papillomavirus-16-positive oral papilloma-like lesions were detected in different sites of the oral mucosa. Nucleotide sequence sequencing further confirmed the results. So in our clinical work, if the simultaneous occurrences of different human papillomavirus associated lesions are suspected, the multiple biopsies from different lesions and detection of human papillomavirus genotype are needed to confirm the diagnosis.", "title": "" }, { "docid": "neg:1840229_15", "text": "Weather forecasting has been one of the most scientifically and technologically challenging problem around the world. 
Weather data is one of the meteorological data that is rich with important information, which can be used for weather prediction We extract knowledge from weather historical data collected from Indian Meteorological Department (IMD) Pune. From the collected weather data comprising of 36 attributes, only 7 attributes are most relevant to rainfall prediction. We made data preprocessing and data transformation on raw weather data set, so that it shall be possible to work on Bayesian, the data mining, prediction model used for rainfall prediction. The model is trained using the training data set and has been tested for accuracy on available test data. The meteorological centers uses high performance computing and supercomputing power to run weather prediction model. To address the issue of compute intensive rainfall prediction model, we proposed and implemented data intensive model using data mining technique. Our model works with good accuracy and takes moderate compute resources to predict the rainfall. We have used Bayesian approach to prove our model for rainfall prediction, and found to be working well with good accuracy.", "title": "" }, { "docid": "neg:1840229_16", "text": "Cellular electron cryo-tomography enables the 3D visualization of cellular organization in the near-native state and at submolecular resolution. However, the contents of cellular tomograms are often complex, making it difficult to automatically isolate different in situ cellular components. In this paper, we propose a convolutional autoencoder-based unsupervised approach to provide a coarse grouping of 3D small subvolumes extracted from tomograms. We demonstrate that the autoencoder can be used for efficient and coarse characterization of features of macromolecular complexes and surfaces, such as membranes. In addition, the autoencoder can be used to detect non-cellular features related to sample preparation and data collection, such as carbon edges from the grid and tomogram boundaries. The autoencoder is also able to detect patterns that may indicate spatial interactions between cellular components. Furthermore, we demonstrate that our autoencoder can be used for weakly supervised semantic segmentation of cellular components, requiring a very small amount of manual annotation.", "title": "" }, { "docid": "neg:1840229_17", "text": "This paper presents a model which generates architectural layout for a single flat having regular shaped spaces; Bedroom, Bathroom, Kitchen, Balcony, Living and Dining Room. Using constraints at two levels; Topological (Adjacency, Compactness, Vaastu, and Open and Closed face constraints) and Dimensional (Length to Width ratio constraint), Genetic Algorithms have been used to generate the topological arrangement of spaces in the layout and further if required, feasibility have been dimensionally analyzed. Further easy evacuation form the selected layout in case of adversity has been proposed using Dijkstra's Algorithm. Later the proposed model has been tested for efficiency using various test cases. This paper also presents a classification and categorization of various problems of space planning.", "title": "" }, { "docid": "neg:1840229_18", "text": "Intrinsically motivated goal exploration processes enable agents to autonomously sample goals to explore efficiently complex environments with high-dimensional continuous actions. They have been applied successfully to real world robots to discover repertoires of policies producing a wide diversity of effects. 
Often these algorithms relied on engineered goal spaces but it was recently shown that one can use deep representation learning algorithms to learn an adequate goal space in simple environments. However, in the case of more complex environments containing multiple objects or distractors, an efficient exploration requires that the structure of the goal space reflects the one of the environment. In this paper we show that using a disentangled goal space leads to better exploration performances than an entangled one. We further show that when the representation is disentangled, one can leverage it by sampling goals that maximize learning progress in a modular manner. Finally, we show that the measure of learning progress, used to drive curiosity-driven exploration, can be used simultaneously to discover abstract independently controllable features of the environment. The code used in the experiments is available at https://github.com/flowersteam/Curiosity_Driven_Goal_Exploration.", "title": "" }, { "docid": "neg:1840229_19", "text": "We present a new, \"greedy\", channel-router that is quick, simple, and highly effective. It always succeeds, usually using no more than one track more than required by channel density. (It may be forced in rare cases to make a few connections \"off the end\" of the channel, in order to succeed.) It assumes that all pins and wiring lie on a common grid, and that vertical wires are on one layer, horizontal on another. The greedy router wires up the channel in a left-to-right, column-by-column manner, wiring each column completely before starting the next. Within each column the router tries to maximize the utility of the wiring produced, using simple, \"greedy\" heuristics. It may place a net on more than one track for a few columns, and \"collapse\" the net to a single track later on, using a vertical jog. It may also use a jog to move a net to a track closer to its pin in some future column. The router may occasionally add a new track to the channel, to avoid \"getting stuck\".", "title": "" } ]
1840230
Distributed Agile Development: Using Scrum in a Large Project
[ { "docid": "pos:1840230_0", "text": "Agile processes rely on feedback and communication to work and they often work best with co-located teams for this reason. Sometimes agile makes sense because of project requirements and a distributed team makes sense because of resource constraints. A distributed team can be productive and fun to work on if the team takes an active role in overcoming the barriers that distribution causes. This is the story of how one product team used creativity, communications tools, and basic good engineering practices to build a successful product.", "title": "" } ]
[ { "docid": "neg:1840230_0", "text": "OBJECTIVE\nTo compare the healing at elevated sinus floors augmented either with deproteinized bovine bone mineral (DBBM) or autologous bone grafts and followed by immediate implant installation.\n\n\nMATERIAL AND METHODS\nTwelve albino New Zealand rabbits were used. Incisions were performed along the midline of the nasal dorsum. The nasal bone was exposed. A circular bony widow with a diameter of 3 mm was prepared bilaterally, and the sinus mucosa was detached. Autologous bone (AB) grafts were collected from the tibia. Similar amounts of AB or DBBM granules were placed below the sinus mucosa. An implant with a moderately rough surface was installed into the elevated sinus bilaterally. The animals were sacrificed after 7 (n = 6) or 40 days (n = 6).\n\n\nRESULTS\nThe dimensions of the elevated sinus space at the DBBM sites were maintained, while at the AB sites, a loss of 2/3 was observed between 7 and 40 days of healing. The implants showed similar degrees of osseointegration after 7 (7.1 ± 1.7%; 9.9 ± 4.5%) and 40 days (37.8 ± 15%; 36.0 ± 11.4%) at the DBBM and AB sites, respectively. Similar amounts of newly formed mineralized bone were found in the elevated space after 7 days at the DBBM (7.8 ± 6.6%) and AB (7.2 ± 6.0%) sites while, after 40 days, a higher percentage of bone was found at AB (56.7 ± 8.8%) compared to DBBM (40.3 ± 7.5%) sites.\n\n\nCONCLUSIONS\nBoth Bio-Oss® granules and autologous bone grafts contributed to the healing at implants installed immediately in elevated sinus sites in rabbits. Bio-Oss® maintained the dimensions, while autologous bone sites lost 2/3 of the volume between the two periods of observation.", "title": "" }, { "docid": "neg:1840230_1", "text": "The advent of social networks poses severe threats on user privacy as adversaries can de-anonymize users' identities by mapping them to correlated cross-domain networks. Without ground-truth mapping, prior literature proposes various cost functions in hope of measuring the quality of mappings. However, there is generally a lacking of rationale behind the cost functions, whose minimizer also remains algorithmically unknown. We jointly tackle above concerns under a more practical social network model parameterized by overlapping communities, which, neglected by prior art, can serve as side information for de-anonymization. Regarding the unavailability of ground-truth mapping to adversaries, by virtue of the Minimum Mean Square Error (MMSE), our first contribution is a well-justified cost function minimizing the expected number of mismatched users over all possible true mappings. While proving the NP-hardness of minimizing MMSE, we validly transform it into the weighted-edge matching problem (WEMP), which, as disclosed theoretically, resolves the tension between optimality and complexity: (i) WEMP asymptotically returns a negligible mapping error in large network size under mild conditions facilitated by higher overlapping strength; (ii) WEMP can be algorithmically characterized via the convex-concave based de-anonymization algorithm (CBDA), finding the optimum of WEMP. 
Extensive experiments further confirm the effectiveness of CBDA under overlapping communities, in terms of averagely 90% re-identified users in the rare true cross-domain co-author networks when communities overlap densely, and roughly 70% enhanced reidentification ratio compared to non-overlapping cases.", "title": "" }, { "docid": "neg:1840230_2", "text": "In this paper we propose a new semi-supervised GAN architecture (ss-InfoGAN) for image synthesis that leverages information from few labels (as little as 0.22%, max. 10% of the dataset) to learn semantically meaningful and controllable data representations where latent variables correspond to label categories. The architecture builds on Information Maximizing Generative Adversarial Networks (InfoGAN) and is shown to learn both continuous and categorical codes and achieves higher quality of synthetic samples compared to fully unsupervised settings. Furthermore, we show that using small amounts of labeled data speeds-up training convergence. The architecture maintains the ability to disentangle latent variables for which no labels are available. Finally, we contribute an information-theoretic reasoning on how introducing semi-supervision increases mutual information between synthetic and real data.", "title": "" }, { "docid": "neg:1840230_3", "text": "This brief presents an approach for identifying the parameters of linear time-varying systems that repeat their trajectories. The identification is based on the concept that parameter identification results can be improved by incorporating information learned from previous executions. The learning laws for this iterative learning identification are determined through an optimization framework. The convergence analysis of the algorithm is presented along with the experimental results to demonstrate its effectiveness. The algorithm is demonstrated to be capable of simultaneously estimating rapidly varying parameters and addressing robustness to noise by adopting a time-varying design approach.", "title": "" }, { "docid": "neg:1840230_4", "text": "Automatic speaker recognition is a field of study attributed in identifying a person from a spoken phrase. The technique makes it possible to use the speaker’s voice to verify their identity and control access to the services such as biometric security system, voice dialing, telephone banking, telephone shopping, database access services, information services, voice mail, and security control for confidential information areas and remote access to the computers. This thesis represents a development of a Matlab based text dependent speaker recognition system. Mel Frequency Cepstrum Coefficient (MFCC) Method is used to extract a speaker’s discriminative features from the mathematical representation of the speech signal. After that Vector Quantization with VQ-LBG Algorithm is used to match the feature. Key-Words: Speaker Recognition, Human Speech Signal Processing, Vector Quantization", "title": "" }, { "docid": "neg:1840230_5", "text": "We aim to develop a computationally feasible, cognitivelyinspired, formal model of concept invention, drawing on Fauconnier and Turner’s theory of conceptual blending, and grounding it on a sound mathematical theory of concepts. Conceptual blending, although successfully applied to describing combinational creativity in a varied number of fields, has barely been used at all for implementing creative computational systems, mainly due to the lack of sufficiently precise mathematical characterisations thereof. 
The model we will define will be based on Goguen’s proposal of a Unified Concept Theory, and will draw from interdisciplinary research results from cognitive science, artificial intelligence, formal methods and computational creativity. To validate our model, we will implement a proof of concept of an autonomous computational creative system that will be evaluated in two testbed scenarios: mathematical reasoning and melodic harmonisation. We envisage that the results of this project will be significant for gaining a deeper scientific understanding of creativity, for fostering the synergy between understanding and enhancing human creativity, and for developing new technologies for autonomous creative systems.", "title": "" }, { "docid": "neg:1840230_6", "text": "One of the most important challenges of this decade is the Internet of Things (IoT), which aims to enable things to be connected anytime, anyplace, with anything and anyone, ideally using any path/network and any service. IoT systems are usually composed of heterogeneous and interconnected lightweight devices that support applications that are subject to change in their external environment and in the functioning of these devices. The management of the variability of these changes, autonomously, is a challenge in the development of these systems. Agents are a good option for developing self-managed IoT systems due to their distributed nature, context-awareness and self-adaptation. Our goal is to enhance the development of IoT applications using agents and software product lines (SPL). Specifically, we propose to use Self-StarMASMAS, multi-agent system) agents and to define an SPL process using the Common Variability Language. In this contribution, we propose an SPL process for Self-StarMAS, paying particular attention to agents embedded in sensor motes.", "title": "" }, { "docid": "neg:1840230_7", "text": "Optimizing the operation of cooperative multi-robot systems that can cooperatively act in large and complex environments has become an important focal area of research. This issue is motivated by many applications involving a set of cooperative robots that have to decide in a decentralized way how to execute a large set of tasks in partially observable and uncertain environments. Such decision problems are encountered while developing exploration rovers, team of patrolling robots, rescue-robot colonies, mine-clearance robots, etc. In this chapter, we introduce problematics related to the decentralized control of multi-robot systems. We first describe some applicative domains and review the main characteristics of the decision problems the robots must deal with. Then, we review some existing approaches to solve problems of multiagent decentralized control in stochastic environments. We present the Decentralized Markov Decision Processes and discuss their applicability to real-world multi-robot applications. Then, we introduce OC-DEC-MDPs and 2V-DEC-MDPs which have been developed to increase the applicability of DEC-MDPs.", "title": "" }, { "docid": "neg:1840230_8", "text": "Botanical insecticides presently play only a minor role in insect pest management and crop protection; increasingly stringent regulatory requirements in many jurisdictions have prevented all but a handful of botanical products from reaching the marketplace in North America and Europe in the past 20 years. 
Nonetheless, the regulatory environment and public health needs are creating opportunities for the use of botanicals in industrialized countries in situations where human and animal health are foremost--for pest control in and around homes and gardens, in commercial kitchens and food storage facilities and on companion animals. Botanicals may also find favour in organic food production, both in the field and in controlled environments. In this review it is argued that the greatest benefits from botanicals might be achieved in developing countries, where human pesticide poisonings are most prevalent. Recent studies in Africa suggest that extracts of locally available plants can be effective as crop protectants, either used alone or in mixtures with conventional insecticides at reduced rates. These studies suggest that indigenous knowledge and traditional practice can make valuable contributions to domestic food production in countries where strict enforcement of pesticide regulations is impractical.", "title": "" }, { "docid": "neg:1840230_9", "text": "Persuasiveness is a high-level personality trait that quantifies the influence a speaker has on the beliefs, attitudes, intentions, motivations, and behavior of the audience. With social multimedia becoming an important channel in propagating ideas and opinions, analyzing persuasiveness is very important. In this work, we use the publicly available Persuasive Opinion Multimedia (POM) dataset to study persuasion. One of the challenges associated with this problem is the limited amount of annotated data. To tackle this challenge, we present a deep multimodal fusion architecture which is able to leverage complementary information from individual modalities for predicting persuasiveness. Our methods show significant improvement in performance over previous approaches.", "title": "" }, { "docid": "neg:1840230_10", "text": "In this article, we examine the impact of digital screen devices, including television, on cognitive development. Although we know that young infants and toddlers are using touch screen devices, we know little about their comprehension of the content that they encounter on them. In contrast, research suggests that children begin to comprehend child-directed television starting at ∼2 years of age. The cognitive impact of these media depends on the age of the child, the kind of programming (educational programming versus programming produced for adults), the social context of viewing, as well the particular kind of interactive media (eg, computer games). For children <2 years old, television viewing has mostly negative associations, especially for language and executive function. For preschool-aged children, television viewing has been found to have both positive and negative outcomes, and a large body of research suggests that educational television has a positive impact on cognitive development. Beyond the preschool years, children mostly consume entertainment programming, and cognitive outcomes are not well explored in research. The use of computer games as well as educational computer programs can lead to gains in academically relevant content and other cognitive skills. This article concludes by identifying topics and goals for future research and provides recommendations based on current research-based knowledge.", "title": "" }, { "docid": "neg:1840230_11", "text": "The concept of agile domain name system (DNS) refers to dynamic and rapidly changing mappings between domain names and their Internet protocol (IP) addresses. 
This empirical paper evaluates the bias from this kind of agility for DNS-based graph theoretical data mining applications. By building on two conventional metrics for observing malicious DNS agility, the agility bias is observed by comparing bipartite DNS graphs to different subgraphs from which vertices and edges are removed according to two criteria. According to an empirical experiment with two longitudinal DNS datasets, irrespective of the criterion, the agility bias is observed to be severe particularly regarding the effect of outlying domains hosted and delivered via content delivery networks and cloud computing services. With these observations, the paper contributes to the research domains of cyber security and DNS mining. In a larger context of applied graph mining, the paper further elaborates the practical concerns related to the learning of large and dynamic bipartite graphs.", "title": "" }, { "docid": "neg:1840230_12", "text": "Optimal tuning of proportional-integral-derivative (PID) controller parameters is necessary for the satisfactory operation of automatic voltage regulator (AVR) system. This study presents a tuning fuzzy logic approach to determine the optimal PID controller parameters in AVR system. The developed fuzzy system can give the PID parameters on-line for different operating conditions. The suitability of the proposed approach for PID controller tuning has been demonstrated through computer simulations in an AVR system.", "title": "" }, { "docid": "neg:1840230_13", "text": "Nanoscale drug delivery systems using liposomes and nanoparticles are emerging technologies for the rational delivery of chemotherapeutic drugs in the treatment of cancer. Their use offers improved pharmacokinetic properties, controlled and sustained release of drugs and, more importantly, lower systemic toxicity. The commercial availability of liposomal Doxil and albumin-nanoparticle-based Abraxane has focused attention on this innovative and exciting field. Recent advances in liposome technology offer better treatment of multidrug-resistant cancers and lower cardiotoxicity. Nanoparticles offer increased precision in chemotherapeutic targeting of prostate cancer and new avenues for the treatment of breast cancer. Here we review current knowledge on the two technologies and their potential applications to cancer treatment.", "title": "" }, { "docid": "neg:1840230_14", "text": "Real-world environments are typically dynamic, complex, and multisensory in nature and require the support of top–down attention and memory mechanisms for us to be able to drive a car, make a shopping list, or pour a cup of coffee. Fundamental principles of perception and functional brain organization have been established by research utilizing well-controlled but simplified paradigms with basic stimuli. The last 30 years ushered a revolution in computational power, brain mapping, and signal processing techniques. Drawing on those theoretical and methodological advances, over the years, research has departed more and more from traditional, rigorous, and well-understood paradigms to directly investigate cognitive functions and their underlying brain mechanisms in real-world environments. These investigations typically address the role of one or, more recently, multiple attributes of real-world environments. 
Fundamental assumptions about perception, attention, or brain functional organization have been challenged—by studies adapting the traditional paradigms to emulate, for example, the multisensory nature or varying relevance of stimulation or dynamically changing task demands. Here, we present the state of the field within the emerging heterogeneous domain of real-world neuroscience. To be precise, the aim of this Special Focus is to bring together a variety of the emerging “real-world neuroscientific” approaches. These approaches differ in their principal aims, assumptions, or even definitions of “real-world neuroscience” research. Here, we showcase the commonalities and distinctive features of the different “real-world neuroscience” approaches. To do so, four early-career researchers and the speakers of the Cognitive Neuroscience Society 2017 Meeting symposium under the same title answer questions pertaining to the added value of such approaches in bringing us closer to accurate models of functional brain organization and cognitive functions.", "title": "" }, { "docid": "neg:1840230_15", "text": "Autonomous Land Vehicles (ALVs), due to their considerable potential applications in areas such as mining and defence, are currently the focus of intense research at robotics institutes worldwide. Control systems that provide reliable navigation, often in complex or previously unknown environments, is a core requirement of any ALV implementation. Three key aspects for the provision of such autonomous systems are: 1) path planning, 2) obstacle avoidance, and 3) path following. The work presented in this thesis, under the general umbrella of the ACFR’s own ALV project, the ‘High Speed Vehicle Project’, addresses these three mobile robot competencies in the context of an ALV based system. As such, it develops both the theoretical concepts and the practical components to realise an initial, fully functional implementation of such a system. This system, which is implemented on the ACFR’s (ute) test vehicle, allows the user to enter a trajectory and follow it, while avoiding any detected obstacles along the path.", "title": "" }, { "docid": "neg:1840230_16", "text": "Printflatables is a design and fabrication system for human-scale, functional and dynamic inflatable objects. We use inextensible thermoplastic fabric as the raw material with the key principle of introducing folds and thermal sealing. Upon inflation, the sealed object takes the expected three dimensional shape. The workflow begins with the user specifying an intended 3D model which is decomposed to two dimensional fabrication geometry. This forms the input for a numerically controlled thermal contact iron that seals layers of thermoplastic fabric. In this paper, we discuss the system design in detail, the pneumatic primitives that this technique enables and merits of being able to make large, functional and dynamic pneumatic artifacts. We demonstrate the design output through multiple objects which could motivate fabrication of inflatable media and pressure-based interfaces.", "title": "" }, { "docid": "neg:1840230_17", "text": "This paper presents an efficient metric for quantifying the visual fidelity of natural images based on near-threshold and suprathreshold properties of human vision. The proposed metric, the visual signal-to-noise ratio (VSNR), operates via a two-stage approach. 
In the first stage, contrast thresholds for detection of distortions in the presence of natural images are computed via wavelet-based models of visual masking and visual summation in order to determine whether the distortions in the distorted image are visible. If the distortions are below the threshold of detection, the distorted image is deemed to be of perfect visual fidelity (VSNR = ∞) and no further analysis is required. If the distortions are suprathreshold, a second stage is applied which operates based on the low-level visual property of perceived contrast, and the mid-level visual property of global precedence. These two properties are modeled as Euclidean distances in distortion-contrast space of a multiscale wavelet decomposition, and VSNR is computed based on a simple linear sum of these distances. The proposed VSNR metric is generally competitive with current metrics of visual fidelity; it is efficient both in terms of its low computational complexity and in terms of its low memory requirements; and it operates based on physical luminances and visual angle (rather than on digital pixel values and pixel-based dimensions) to accommodate different viewing conditions.", "title": "" }, { "docid": "neg:1840230_18", "text": "A 64% instantaneous bandwidth scalable turnstile-based orthomode transducer to be used in the so-called extended C-band satellite link is presented. The proposed structure overcomes the current practical bandwidth limitations by adding a single-step widening at the junction of the four output rectangular waveguides. This judicious modification, together with the use of reduced-height waveguides and E-plane bends and power combiners, enables to approach the theoretical structure bandwidth limit with a simple, scalable and compact design. The presented orthomode transducer architecture exhibits a return loss better than 25 dB, an isolation between rectangular ports better than 50 dB and a transmission loss less than 0.04 dB in the 3.6-7 GHz range, which represents state-of-the-art achievement in terms of bandwidth.", "title": "" }, { "docid": "neg:1840230_19", "text": "The complexity in electrical power systems is continuously increasing due to its advancing distribution. This affects the topology of the grid infrastructure as well as the IT-infrastructure, leading to various heterogeneous systems, data models, protocols, and interfaces. This in turn raises the need for appropriate processes and tools that facilitate the management of the systems architecture on different levels and from different stakeholders’ view points. In order to achieve this goal, a common understanding of architecture elements and means of classification shall be gained. The Smart Grid Architecture Model (SGAM) proposed in context of the European standardization mandate M/490 provides a promising basis for domain-specific architecture models. The idea of following a Model-Driven-Architecture (MDA)-approach to create such models, including requirements specification based on Smart Grid use cases, is detailed in this contribution. The SGAM-Toolbox is introduced as tool-support for the approach and its utility is demonstrated by two real-world case studies.", "title": "" } ]
1840231
Implications of Placebo and Nocebo Effects for Clinical Practice: Expert Consensus.
[ { "docid": "pos:1840231_0", "text": "Placebo effects are beneficial effects that are attributable to the brain–mind responses to the context in which a treatment is delivered rather than to the specific actions of the drug. They are mediated by diverse processes — including learning, expectations and social cognition — and can influence various clinical and physiological outcomes related to health. Emerging neuroscience evidence implicates multiple brain systems and neurochemical mediators, including opioids and dopamine. We present an empirical review of the brain systems that are involved in placebo effects, focusing on placebo analgesia, and a conceptual framework linking these findings to the mind–brain processes that mediate them. This framework suggests that the neuropsychological processes that mediate placebo effects may be crucial for a wide array of therapeutic approaches, including many drugs.", "title": "" } ]
[ { "docid": "neg:1840231_0", "text": "Within the philosophy of language, pragmatics has tended to be seen as an adjunct to, and a means of solving problems in, semantics. A cognitive-scientific conception of pragmatics as a mental processing system responsible for interpreting ostensive communicative stimuli (specifically, verbal utterances) has effected a transformation in the pragmatic issues pursued and the kinds of explanation offered. Taking this latter perspective, I compare two distinct proposals on the kinds of processes, and the architecture of the system(s), responsible for the recovery of speaker meaning (both explicitly and implicitly communicated meaning). 1. Pragmatics as a Cognitive System 1.1 From Philosophy of Language to Cognitive Science Broadly speaking, there are two perspectives on pragmatics: the ‘philosophical’ and the ‘cognitive’. From the philosophical perspective, an interest in pragmatics has been largely motivated by problems and issues in semantics. A familiar instance of this was Grice’s concern to maintain a close semantic parallel between logical operators and their natural language counterparts, such as ‘not’, ‘and’, ‘or’, ‘if’, ‘every’, ‘a/some’, and ‘the’, in the face of what look like quite major divergences in the meaning of the linguistic elements (see Grice 1975, 1981). The explanation he provided was pragmatic, i.e. in terms of what occurs when the logical semantics of these terms is put to rational communicative use. Consider the case of ‘and’: (1) a. Mary went to a movie and Sam read a novel. b. She gave him her key and he opened the door. c. She insulted him and he left the room. While (a) seems to reflect the straightforward truth-functional symmetrical connection, (b) and (c) communicate a stronger asymmetric relation: temporal Many thanks to Richard Breheny, Sam Guttenplan, Corinne Iten, Deirdre Wilson and Vladimir Zegarac for helpful comments and support during the writing of this paper. Address for correspondence: Department of Phonetics & Linguistics, University College London, Gower Street, London WC1E 6BT, UK. Email: robyn linguistics.ucl.ac.uk Mind & Language, Vol. 17 Nos 1 and 2 February/April 2002, pp. 127–148.  Blackwell Publishers Ltd. 2002, 108 Cowley Road, Oxford, OX4 1JF, UK and 350 Main Street, Malden, MA 02148, USA.", "title": "" }, { "docid": "neg:1840231_1", "text": "Novelty search is a recent artificial evolution technique that challenges traditional evolutionary approaches. In novelty search, solutions are rewarded based on their novelty, rather than their quality with respect to a predefined objective. The lack of a predefined objective precludes premature convergence caused by a deceptive fitness function. In this paper, we apply novelty search combined with NEAT to the evolution of neural controllers for homogeneous swarms of robots. Our empirical study is conducted in simulation, and we use a common swarm robotics task—aggregation, and a more challenging task—sharing of an energy recharging station. Our results show that novelty search is unaffected by deception, is notably effective in bootstrapping evolution, can find solutions with lower complexity than fitness-based evolution, and can find a broad diversity of solutions for the same task. Even in non-deceptive setups, novelty search achieves solution qualities similar to those obtained in traditional fitness-based evolution. 
Our study also encompasses variants of novelty search that work in concert with fitness-based evolution to combine the exploratory character of novelty search with the exploitatory character of objective-based evolution. We show that these variants can further improve the performance of novelty search. Overall, our study shows that novelty search is a promising alternative for the evolution of controllers for robotic swarms.", "title": "" }, { "docid": "neg:1840231_2", "text": "Artificial food colors (AFCs) have not been established as the main cause of attention-deficit hyperactivity disorder (ADHD), but accumulated evidence suggests that a subgroup shows significant symptom improvement when consuming an AFC-free diet and reacts with ADHD-type symptoms on challenge with AFCs. Of children with suspected sensitivities, 65% to 89% reacted when challenged with at least 100 mg of AFC. Oligoantigenic diet studies suggested that some children in addition to being sensitive to AFCs are also sensitive to common nonsalicylate foods (milk, chocolate, soy, eggs, wheat, corn, legumes) as well as salicylate-containing grapes, tomatoes, and orange. Some studies found \"cosensitivity\" to be more the rule than the exception. Recently, 2 large studies demonstrated behavioral sensitivity to AFCs and benzoate in children both with and without ADHD. A trial elimination diet is appropriate for children who have not responded satisfactorily to conventional treatment or whose parents wish to pursue a dietary investigation.", "title": "" }, { "docid": "neg:1840231_3", "text": "The advent use of new online social media such as articles, blogs, message boards, news channels, and in general Web content has dramatically changed the way people look at various things around them. Today, it’s a daily practice for many people to read news online. People's perspective tends to undergo a change as per the news content they read. The majority of the content that we read today is on the negative aspects of various things e.g. corruption, rapes, thefts etc. Reading such news is spreading negativity amongst the people. Positive news seems to have gone into a hiding. The positivity surrounding the good news has been drastically reduced by the number of bad news. This has made a great practical use of Sentiment Analysis and there has been more innovation in this area in recent days. Sentiment analysis refers to a broad range of fields of text mining, natural language processing and computational linguistics. It traditionally emphasizes on classification of text document into positive and negative categories. Sentiment analysis of any text document has emerged as the most useful application in the area of sentiment analysis. The objective of this project is to provide a platform for serving good news and create a positive environment. This is achieved by finding the sentiments of the news articles and filtering out the negative articles. This would enable us to focus only on the good news which will help spread positivity around and would allow people to think positively.", "title": "" }, { "docid": "neg:1840231_4", "text": "We introduce a new model, the Recurrent Entity Network (EntNet). It is equipped with a dynamic long-term memory which allows it to maintain and update a representation of the state of the world as it receives new data. 
For language understanding tasks, it can reason on-the-fly as it reads text, not just when it is required to answer a question or respond as is the case for a Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed size memory and can learn to perform location and content-based read and write operations. However, unlike those models it has a simple parallel architecture in which several memory locations can be updated simultaneously. The EntNet sets a new state-of-the-art on the bAbI tasks, and is the first method to solve all the tasks in the 10k training examples setting. We also demonstrate that it can solve a reasoning task which requires a large number of supporting facts, which other methods are not able to solve, and can generalize past its training horizon. It can also be practically used on large scale datasets such as Children’s Book Test, where it obtains competitive performance, reading the story in a single pass.", "title": "" }, { "docid": "neg:1840231_5", "text": "We present a method for generating colored 3D shapes from natural language. To this end, we first learn joint embeddings of freeform text descriptions and colored 3D shapes. Our model combines and extends learning by association and metric learning approaches to learn implicit cross-modal connections, and produces a joint representation that captures the many-to-many relations between language and physical properties of 3D shapes such as color and shape. To evaluate our approach, we collect a large dataset of natural language descriptions for physical 3D objects in the ShapeNet dataset. With this learned joint embedding we demonstrate text-to-shape retrieval that outperforms baseline approaches. Using our embeddings with a novel conditional Wasserstein GAN framework, we generate colored 3D shapes from text. Our method is the first to connect natural language text with realistic 3D objects exhibiting rich variations in color, texture, and shape detail.", "title": "" }, { "docid": "neg:1840231_6", "text": "The system includes terminal fingerprint acquisition module and attendance module. It can realize automatically such functions as information acquisition of fingerprint, processing, and wireless transmission, fingerprint matching and making an attendance report. After taking the attendance, this system sends the attendance of every student to their parent’s mobile through GSM and also stored the attendance of respective student to calculate the percentage of attendance and alerts to class in charge. Attendance system facilitates access to the attendance of a particular student in a particular class. This system eliminates the need for stationary materials and personnel for the keeping of records and efforts of class in charge.", "title": "" }, { "docid": "neg:1840231_7", "text": "The Industry 4.0 is a vision that includes connecting more intensively physical systems with their virtual counterparts in computers. This computerization of manufacturing will bring many advantages, including allowing data gathering, integration and analysis in the scale not seen earlier. In this paper we describe our Semantic Big Data Historian that is intended to handle large volumes of heterogeneous data gathered from distributed data sources.
We describe the approach and implementation with a special focus on using Semantic Web technologies for integrating the data.", "title": "" }, { "docid": "neg:1840231_8", "text": "Traceability allows tracking products through all stages of a supply chain, which is crucial for product quality control. To provide accountability and forensic information, traceability information must be secured. This is challenging because traceability systems often must adapt to changes in regulations and to customized traceability inspection processes. OriginChain is a real-world traceability system using a blockchain. Blockchains are an emerging data storage technology that enables new forms of decentralized architectures. Components can agree on their shared states without trusting a central integration point. OriginChain’s architecture provides transparent tamper-proof traceability information, automates regulatory compliance checking, and enables system adaptability.", "title": "" }, { "docid": "neg:1840231_9", "text": "Knowledge graph (KG) completion aims to fill the missing facts in a KG, where a fact is represented as a triple in the form of (subject, relation, object). Current KG completion models compel two-thirds of a triple provided (e.g., subject and relation) to predict the remaining one. In this paper, we propose a new model, which uses a KG-specific multi-layer recurrent neural network (RNN) to model triples in a KG as sequences. It outperformed several state-of-the-art KG completion models on the conventional entity prediction task for many evaluation metrics, based on two benchmark datasets and a more difficult dataset. Furthermore, our model is enabled by the sequential characteristic and thus capable of predicting the whole triples only given one entity. Our experiments demonstrated that our model achieved promising performance on this new triple prediction task.", "title": "" }, { "docid": "neg:1840231_10", "text": "Clustering of subsequence time series remains an open issue in time series clustering. Subsequence time series clustering is used in different fields, such as e-commerce, outlier detection, speech recognition, biological systems, DNA recognition, and text mining. One of the useful fields in the domain of subsequence time series clustering is pattern recognition. To improve this field, a sequence of time series data is used. This paper reviews some definitions and backgrounds related to subsequence time series clustering. The categorization of the literature reviews is divided into three groups: preproof, interproof, and postproof period. Moreover, various state-of-the-art approaches in performing subsequence time series clustering are discussed under each of the following categories. The strengths and weaknesses of the employed methods are evaluated as potential issues for future studies.", "title": "" }, { "docid": "neg:1840231_11", "text": "This paper presents an algorithm for calibrating strapdown magnetometers in the magnetic field domain. In contrast to the traditional method of compass swinging, which computes a series of heading correction parameters and, thus, is limited to use with two-axis systems, this algorithm estimates magnetometer output errors directly. Therefore, this new algorithm can be used to calibrate a full three-axis magnetometer triad. The calibration algorithm uses an iterated, batch least squares estimator which is initialized using a novel two-step nonlinear estimator.
The algorithm is simulated to validate convergence characteristics and further validated on experimental data collected using a magnetometer triad. It is shown that the post calibration residuals are small and result in a system with heading errors on the order of 1 to 2 degrees.", "title": "" }, { "docid": "neg:1840231_12", "text": "We generalize the scattering transform to graphs and consequently construct a convolutional neural network on graphs. We show that under certain conditions, any feature generated by such a network is approximately invariant to permutations and stable to graph manipulations. Numerical results demonstrate competitive performance on relevant datasets.", "title": "" }, { "docid": "neg:1840231_13", "text": "We describe how malicious customers can attack the availability of Content Delivery Networks (CDNs) by creating forwarding loops inside one CDN or across multiple CDNs. Such forwarding loops cause one request to be processed repeatedly or even indefinitely, resulting in undesired resource consumption and potential Denial-of-Service attacks. To evaluate the practicality of such forwarding-loop attacks, we examined 16 popular CDN providers and found all of them are vulnerable to some form of such attacks. While some CDNs appear to be aware of this threat and have adopted specific forwarding-loop detection mechanisms, we discovered that they can all be bypassed with new attack techniques. Although conceptually simple, a comprehensive defense requires collaboration among all CDNs. Given that hurdle, we also discuss other mitigations that individual CDN can implement immediately. At a higher level, our work underscores the hazards that can arise when a networked system provides users with control over forwarding, particularly in a context that lacks a single point of administrative control.", "title": "" }, { "docid": "neg:1840231_14", "text": "OBJECTIVE\nThe objective of this study was to investigate the relationship between breakfast type, energy intake and body mass index (BMI). We hypothesized not only that breakfast consumption itself is associated with BMI, but that the type of food eaten at breakfast also affects BMI.\n\n\nMETHODS\nData from the Third National Health and Nutrition Examination Survey (NHANES III), a large, population-based study conducted in the United States from 1988 to 1994, were analyzed for breakfast type, total daily energy intake, and BMI. The analyzed breakfast categories were \"Skippers,\" \"Meat/eggs,\" \"Ready-to-eat cereal (RTEC),\" \"Cooked cereal,\" \"Breads,\" \"Quick Breads,\" \"Fruits/vegetables,\" \"Dairy,\" \"Fats/sweets,\" and \"Beverages.\" Analysis of covariance was used to estimate adjusted mean body mass index (BMI) and energy intake (kcal) as dependent variables. Covariates included age, gender, race, smoking, alcohol intake, physical activity and poverty index ratio.\n\n\nRESULTS\nSubjects who ate RTEC, Cooked cereal, or Quick Breads for breakfast had significantly lower BMI compared to Skippers and Meat and Egg eaters (p < or = 0.01). Breakfast skippers and fruit/vegetable eaters had the lowest daily energy intake. The Meat and Eggs eaters had the highest daily energy intake and one of the highest BMIs.\n\n\nCONCLUSIONS\nThis analysis provides evidence that skipping breakfast is not an effective way to manage weight. 
Eating cereal (ready-to-eat or cooked cereal) or quick breads for breakfast is associated with significantly lower body mass index compared to skipping breakfast or eating meats and/or eggs for breakfast.", "title": "" }, { "docid": "neg:1840231_15", "text": "Fischler PER •Sequence of tokens mapped to word embeddings. •Bidirectional LSTM builds context-dependent representations for each word. •A small feedforward layer encourages generalisation. •Conditional Random Field (CRF) at the top outputs the most optimal label sequence for the sentence. •Using character-based dynamic embeddings (Rei et al., 2016) to capture morphological patterns and unseen words.", "title": "" }, { "docid": "neg:1840231_16", "text": "Visual Question Answering (VQA) is a popular research problem that involves inferring answers to natural language questions about a given visual scene. Recent neural network approaches to VQA use attention to select relevant image features based on the question. In this paper, we propose a novel Dual Attention Network (DAN) that not only attends to image features, but also to question features. The selected linguistic and visual features are combined by a recurrent model to infer the final answer. We experiment with different question representations and do several ablation studies to evaluate the model on the challenging VQA dataset.", "title": "" }, { "docid": "neg:1840231_17", "text": "Photogrammetry is the traditional method of surface reconstruction such as the generation of DTMs. Recently, LIDAR emerged as a new technology for rapidly capturing data on physical surfaces. The high accuracy and automation potential results in a quick delivery of DEMs/DTMs derived from the raw laser data. The two methods deliver complementary surface information. Thus it makes sense to combine data from the two sensors to arrive at a more robust and complete surface reconstruction. This paper describes two aspects of merging aerial imagery and LIDAR data. The establishment of a common reference frame is an absolute prerequisite. We solve this alignment problem by utilizing sensor-invariant features. Such features correspond to the same object space phenomena, for example to breaklines and surface patches. Matched sensor invariant features lend themselves to establishing a common reference frame. Feature-level fusion is performed with sensor specific features that are related to surface characteristics. We show the synergism between these features resulting in a richer and more abstract surface description.", "title": "" }, { "docid": "neg:1840231_18", "text": "Although scholars agree that moral emotions are critical for deterring unethical and antisocial behavior, there is disagreement about how 2 prototypical moral emotions--guilt and shame--should be defined, differentiated, and measured. We addressed these issues by developing a new assessment--the Guilt and Shame Proneness scale (GASP)--that measures individual differences in the propensity to experience guilt and shame across a range of personal transgressions. The GASP contains 2 guilt subscales that assess negative behavior-evaluations and repair action tendencies following private transgressions and 2 shame subscales that assess negative self-evaluations (NSEs) and withdrawal action tendencies following publically exposed transgressions. Both guilt subscales were highly correlated with one another and negatively correlated with unethical decision making. 
Although both shame subscales were associated with relatively poor psychological functioning (e.g., neuroticism, personal distress, low self-esteem), they were only weakly correlated with one another, and their relationships with unethical decision making diverged. Whereas shame-NSE constrained unethical decision making, shame-withdraw did not. Our findings suggest that differentiating the tendency to make NSEs following publically exposed transgressions from the tendency to hide or withdraw from public view is critically important for understanding and measuring dispositional shame proneness. The GASP's ability to distinguish these 2 classes of responses represents an important advantage of the scale over existing assessments. Although further validation research is required, the present studies are promising in that they suggest the GASP has the potential to be an important measurement tool for detecting individuals susceptible to corruption and unethical behavior.", "title": "" }, { "docid": "neg:1840231_19", "text": "The explosive growth of the world-wide-web and the emergence of e-commerce has led to the development of recommender systems--a personalized information filtering technology used to identify a set of N items that will be of interest to a certain user. User-based and model-based collaborative filtering are the most successful technology for building recommender systems to date and is extensively used in many commercial recommender systems. The basic assumption in these algorithms is that there are sufficient historical data for measuring similarity between products or users. However, this assumption does not hold in various application domains such as electronics retail, home shopping network, on-line retail where new products are introduced and existing products disappear from the catalog. Another such application domains is home improvement retail industry where a lot of products (such as window treatments, bathroom, kitchen or deck) are custom made. Each product is unique and there are very little duplicate products. In this domain, the probability of the same exact two products bought together is close to zero. In this paper, we discuss the challenges of providing recommendation in the domains where no sufficient historical data exist for measuring similarity between products or users. We present feature-based recommendation algorithms that overcome the limitations of the existing top-n recommendation algorithms. The experimental evaluation of the proposed algorithms in the real life data sets shows a great promise. The pilot project deploying the proposed feature-based recommendation algorithms in the on-line retail web site shows 75% increase in the recommendation revenue for the first 2 month period.", "title": "" } ]
1840232
Learning and example selection for object and pattern detection
[ { "docid": "pos:1840232_0", "text": "A method is presented for the representation of (pictures of) faces. Within a specified framework the representation is ideal. This results in the characterization of a face, to within an error bound, by a relatively low-dimensional vector. The method is illustrated in detail by the use of an ensemble of pictures taken for this purpose.", "title": "" }, { "docid": "pos:1840232_1", "text": "We address the problem of learning an unknown function by pu tting together several pieces of information (hints) that we know about the function. We introduce a method that generalizes learning from examples to learning from hints. A canonical representation of hints is defined and illustrated for new types of hints. All the hints are represented to the learning process by examples, and examples of the function are treated on equal footing with the rest of the hints. During learning, examples from different hints are selected for processing according to a given schedule. We present two types of schedules; fixed schedules that specify the relative emphasis of each hint, and adaptive schedules that are based on how well each hint has been learned so far. Our learning method is compatible with any descent technique that we may choose to use.", "title": "" } ]
[ { "docid": "neg:1840232_0", "text": "In this paper we review masterpieces of curved crease folding, the deployed design methods and geometric studies on this special kind of paper folding. Our goal is to make this work and its techniques accessible to enable further development. By exploring masterpieces of the past and present of this still underexplored field, this paper aims to contribute to the development of novel design methods to achieve shell structures and deployable structures that can accommodate structural properties and design intention for a wide range of materials.", "title": "" }, { "docid": "neg:1840232_1", "text": "We define a simply typed, non-deterministic lambda-calculus where isomorphic types are equated. To this end, an equivalence relation is settled at the term level. We then provide a proof of strong normalisation modulo equivalence. Such a proof is a non-trivial adaptation of the reducibility method.", "title": "" }, { "docid": "neg:1840232_2", "text": "Fingerprints are the oldest and most widely used form of biometric identification. Everyone is known to have unique, immutable fingerprints. As most Automatic Fingerprint Recognition Systems are based on local ridge features known as minutiae, marking minutiae accurately and rejecting false ones is very important. However, fingerprint images get degraded and corrupted due to variations in skin and impression conditions. Thus, image enhancement techniques are employed prior to minutiae extraction. A critical step in automatic fingerprint matching is to reliably extract minutiae from the input fingerprint images. This paper presents a review of a large number of techniques present in the literature for extracting fingerprint minutiae. The techniques are broadly classified as those working on binarized images and those that work on gray scale images directly.", "title": "" }, { "docid": "neg:1840232_3", "text": "In this paper, we have presented a comprehensive multi-view geometry library, Theia, that focuses on large-scale SfM. In addition to state-of-the-art scalable SfM pipelines, the library provides numerous tools that are useful for students, researchers, and industry experts in the field of multi-view geometry. Theia contains clean code that is well documented (with code comments and the website) and easy to extend. The modular design allows for users to easily implement and experiment with new algorithms within our current pipeline without having to implement a full end-to-end SfM pipeline themselves. Theia has already gathered a large number of diverse users from universities, startups, and industry and we hope to continue to gather users and active contributors from the open-source community.", "title": "" }, { "docid": "neg:1840232_4", "text": "This paper presents a novel approach to visualizing the time structure of musical waveforms. The acoustic similarity between any two instants of an audio recording is displayed in a static 2D representation, which makes structural and rhythmic characteristics visible. Unlike practically all prior work, this method characterizes self-similarity rather than specific audio attributes such as pitch or spectral features. Examples are presented for classical and popular music.", "title": "" }, { "docid": "neg:1840232_5", "text": "Decimal hardware arithmetic units have recently regained popularity, as there is now a high demand for high performance decimal arithmetic. 
We propose a novel method for carry-free addition of decimal numbers, where each equally weighted decimal digit pair of the two operands is partitioned into two weighted bit-sets. The arithmetic values of these bit-sets are evaluated, in parallel, for fast computation of the transfer digit and interim sum. In the proposed fully redundant adder (vs. semi-redundant ones such as decimal carry-save adders) both operands and sum are redundant decimal numbers with overloaded decimal digit set [0, 15]. This adder is shown to improve upon the latest high performance similar works and outperform all the previous alike adders. However, there is a drawback that the adder logic cannot be efficiently adapted for subtraction. Nevertheless, this adder and its restricted-input varieties are shown to efficiently fit in the design of a parallel decimal multiplier. The two-to-one partial product reduction ratio that is attained via the proposed adder has led to a VLSI-friendly recursive partial product reduction tree. Two alternative architectures for decimal multipliers are presented; one is slower, but area-improved, and the other one consumes more area, but is delay-improved. However, both are faster in comparison with previously reported parallel decimal multipliers. The area and latency comparisons are based on logical effort analysis under the same assumptions for all the evaluated adders and multipliers. Moreover, performance correctness of all the adders is checked via running exhaustive tests on the corresponding VHDL codes. For more reliable evaluation, we report the result of synthesizing these adders by Synopsys Design Compiler using TSMC 0.13 µm standard CMOS process under various time constraints.", "title": "" }, { "docid": "neg:1840232_6", "text": "CONTEXT\nPressure ulcers are common in a variety of patient settings and are associated with adverse health outcomes and high treatment costs.\n\n\nOBJECTIVE\nTo systematically review the evidence examining interventions to prevent pressure ulcers.\n\n\nDATA SOURCES AND STUDY SELECTION\nMEDLINE, EMBASE, and CINAHL (from inception through June 2006) and Cochrane databases (through issue 1, 2006) were searched to identify relevant randomized controlled trials (RCTs). UMI Proquest Digital Dissertations, ISI Web of Science, and Cambridge Scientific Abstracts were also searched. All searches used the terms pressure ulcer, pressure sore, decubitus, bedsore, prevention, prophylactic, reduction, randomized, and clinical trials. Bibliographies of identified articles were further reviewed.\n\n\nDATA SYNTHESIS\nFifty-nine RCTs were selected. Interventions assessed in these studies were grouped into 3 categories, ie, those addressing impairments in mobility, nutrition, or skin health. Methodological quality for the RCTs was variable and generally suboptimal. Effective strategies that addressed impaired mobility included the use of support surfaces, mattress overlays on operating tables, and specialized foam and specialized sheepskin overlays. While repositioning is a mainstay in most pressure ulcer prevention protocols, there is insufficient evidence to recommend specific turning regimens for patients with impaired mobility. In patients with nutritional impairments, dietary supplements may be beneficial.
The incremental benefit of specific topical agents over simple moisturizers for patients with impaired skin health is unclear.\n\n\nCONCLUSIONS\nGiven current evidence, using support surfaces, repositioning the patient, optimizing nutritional status, and moisturizing sacral skin are appropriate strategies to prevent pressure ulcers. Although a number of RCTs have evaluated preventive strategies for pressure ulcers, many of them had important methodological limitations. There is a need for well-designed RCTs that follow standard criteria for reporting nonpharmacological interventions and that provide data on cost-effectiveness for these interventions.", "title": "" }, { "docid": "neg:1840232_7", "text": "Nobody likes performance reviews. Subordinates are terrified they'll hear nothing but criticism. Bosses think their direct reports will respond to even the mildest criticism with anger or tears. The result? Everyone keeps quiet. That's unfortunate, because most people need help figuring out how to improve their performance and advance their careers. This fear of feedback doesn't come into play just during annual reviews. At least half the executives with whom the authors have worked never ask for feedback. Many expect the worst: heated arguments, even threats of dismissal. So rather than seek feedback, people try to guess what their bosses are thinking. Fears and assumptions about feedback often manifest themselves in psychologically maladaptive behaviors such as procrastination, denial, brooding, jealousy, and self-sabotage. But there's hope, say the authors. Those who learn adaptive techniques can free themselves from destructive responses. They'll be able to deal with feedback better if they acknowledge negative emotions, reframe fear and criticism constructively, develop realistic goals, create support systems, and reward themselves for achievements along the way. Once you've begun to alter your maladaptive behaviors, you can begin seeking regular feedback from your boss. The authors take you through four steps for doing just that: self-assessment, external assessment, absorbing the feedback, and taking action toward change. Organizations profit when employees ask for feedback and deal well with criticism. Once people begin to know how they are doing relative to management's priorities, their work becomes better aligned with organizational goals. What's more, they begin to transform a feedback-averse environment into a more honest and open one, in turn improving performance throughout the organization.", "title": "" }, { "docid": "neg:1840232_8", "text": "This article first presents two theories representing distinct approaches to the field of stress research: Selye's theory of `systemic stress' based in physiology and psychobiology, and the `psychological stress' model developed by Lazarus. In the second part, the concept of coping is described. Coping theories may be classified according to two independent parameters: trait-oriented versus state-oriented, and microanalytic versus macroanalytic approaches. The multitude of theoretical conceptions is based on the macroanalytic, trait-oriented approach. Examples of this approach that are presented in this article are `repression–sensitization,' `monitoring-blunting,' and the `model of coping modes.'
The article closes with a brief outline of future perspectives in stress and coping research.", "title": "" }, { "docid": "neg:1840232_9", "text": "We consider the problem of learning causal directed acyclic graphs from an observational joint distribution. One can use these graphs to predict the outcome of interventional experiments, from which data are often not available. We show that if the observational distribution follows a structural equation model with an additive noise structure, the directed acyclic graph becomes identifiable from the distribution under mild conditions. This constitutes an interesting alternative to traditional methods that assume faithfulness and identify only the Markov equivalence class of the graph, thus leaving some edges undirected. We provide practical algorithms for finitely many samples, RESIT (regression with subsequent independence test) and two methods based on an independence score. We prove that RESIT is correct in the population setting and provide an empirical evaluation.", "title": "" }, { "docid": "neg:1840232_10", "text": "One hundred and two olive RAPD profiles were sampled from all around the Mediterranean Basin. Twenty four clusters of RAPD profiles were shown in the dendrogram based on the Ward’s minimum variance algorithm using chi-square distances. Factorial discriminant analyses showed that RAPD profiles were correlated with the use of the fruits and the country or region of origin of the cultivars. This suggests that cultivar selection has occurred in different genetic pools and in different areas. Mitochondrial DNA RFLP analyses were also performed. These mitotypes supported the conclusion also that multilocal olive selection has occurred. This prediction for the use of cultivars will help olive growers to choose new foreign cultivars for testing them before an eventual introduction if they are well adapted to local conditions.", "title": "" }, { "docid": "neg:1840232_11", "text": "Generative models provide a way to model structure in complex distributions and have been shown to be useful for many tasks of practical interest. However, current techniques for training generative models require access to fully-observed samples. In many settings, it is expensive or even impossible to obtain fullyobserved samples, but economical to obtain partial, noisy observations. We consider the task of learning an implicit generative model given only lossy measurements of samples from the distribution of interest. We show that the true underlying distribution can be provably recovered even in the presence of per-sample information loss for a class of measurement models. Based on this, we propose a new method of training Generative Adversarial Networks (GANs) which we call AmbientGAN. On three benchmark datasets, and for various measurement models, we demonstrate substantial qualitative and quantitative improvements. Generative models trained with our method can obtain 2-4x higher inception scores than the baselines.", "title": "" }, { "docid": "neg:1840232_12", "text": "Human activity recognition involves classifying times series data, measured at inertial sensors such as accelerometers or gyroscopes, into one of pre-defined actions. Recently, convolutional neural network (CNN) has established itself as a powerful technique for human activity recognition, where convolution and pooling operations are applied along the temporal dimension of sensor signals. 
In most of existing work, 1D convolution operation is applied to individual univariate time series, while multi-sensors or multi-modality yield multivariate time series. 2D convolution and pooling operations are applied to multivariate time series, in order to capture local dependency along both temporal and spatial domains for uni-modal data, so that it achieves high performance with less number of parameters compared to 1D operation. However for multi-modal data existing CNNs with 2D operation handle different modalities in the same way, which cause interferences between characteristics from different modalities. In this paper, we present CNNs (CNN-pf and CNN-pff), especially CNN-pff, for multi-modal data. We employ both partial weight sharing and full weight sharing for our CNN models in such a way that modality-specific characteristics as well as common characteristics across modalities are learned from multi-modal (or multi-sensor) data and are eventually aggregated in upper layers. Experiments on benchmark datasets demonstrate the high performance of our CNN models, compared to state of the arts methods.", "title": "" }, { "docid": "neg:1840232_13", "text": "An N-dimensional image is divided into “object” and “background” segments using a graph cut approach. A graph is formed by connecting all pairs of neighboring image pixels (voxels) by weighted edges. Certain pixels (voxels) have to be a priori identified as object or background seeds providing necessary clues about the image content. Our objective is to find the cheapest way to cut the edges in the graph so that the object seeds are completely separated from the background seeds. If the edge cost is a decreasing function of the local intensity gradient then the minimum cost cut should produce an object/background segmentation with compact boundaries along the high intensity gradient values in the image. An efficient, globally optimal solution is possible via standard min-cut/max-flow algorithms for graphs with two terminals. We applied this technique to interactively segment organs in various 2D and 3D medical images.", "title": "" }, { "docid": "neg:1840232_14", "text": "Context: The security goals of a software system provide a foundation for security requirements engineering. Identifying security goals is a process of iteration and refinement, leveraging the knowledge and expertise of the analyst to secure not only the core functionality but the security mechanisms as well. Moreover, a comprehensive security plan should include goals for not only preventing a breach, but also for detecting and appropriately responding in case a breach does occur. Goal: The objective of this research is to support analysts in security requirements engineering by providing a framework that supports a systematic and comprehensive discovery of security goals for a software system. Method: We develop a framework, Discovering Goals for Security (DIGS), that models the key entities in information security, including assets and security goals. We systematically develop a set of security goal patterns that capture multiple dimensions of security for assets. DIGS explicitly captures the relations and assumptions that underlie security goals to elicit implied goals. We map the goal patterns to NIST controls to help in operationalizing the goals. We evaluate DIGS via a controlled experiment where 28 participants analyzed systems from mobile banking and human resource management domains. 
Results: Participants considered security goals commensurate to the knowledge available to them. Although the overall recall was low given the empirical constraints, participants using DIGS identified more implied goals and felt more confident in completing the task. Conclusion: Explicitly providing the additional knowledge for the identification of implied security goals significantly increased the chances of discovering such goals, thereby improving coverage of stakeholder security requirements, even if they are unstated.", "title": "" }, { "docid": "neg:1840232_15", "text": "UNLABELLED\nRectal prolapse is the partial or complete protrusion of the rectal wall into the anal canal. The most common etiology consists in the insufficiency of the diaphragm of the lesser pelvis and anal sphincter apparatus. Methods of surgical treatment involve perineal or abdominal approach surgical procedures. The aim of the study was to present the method of surgical rectal prolapse treatment, according to Mikulicz's procedure by means of the perineal approach, based on our own experience and literature review.\n\n\nMATERIAL AND METHODS\nThe study group comprised 16 patients, including 14 women and 2 men, aged between 38 and 82 years admitted to the department, due to rectal prolapse, during the period between 2000 and 2012. Nine female patients, aged between 68 and 82 years (mean age 76.3 years) with full-thickness rectal prolapse underwent surgery by means of Mikulicz's method with levator muscle and external anal sphincter plasty. The most common comorbidities amongst patients operated by means of Mikulicz's method included cardiovascular and metabolic diseases.\n\n\nRESULTS\nMean hospitalization was 14.4 days (ranging between 12 and 17 days). Despite advanced age and poor general condition of the patients, complications during the perioperative period were not observed. Good early and late functional results were achieved. The degree of anal sphincter continence was determined 6-8 weeks after surgery showing significant improvement, as compared to results obtained prior to surgery. One case of recurrence consisting in mucosal prolapse was noted, being treated surgically by means of Whitehead's method. Good treatment results were observed.\n\n\nCONCLUSION\nTransperineal rectosigmoidectomy using Mikulicz's method with levator muscle and external anal sphincter plasty seems to be an effective, minimally invasive and relatively safe procedure that does not require general anesthesia. It is recommended in case of patients with significant comorbidities and high surgical risk.", "title": "" }, { "docid": "neg:1840232_16", "text": "Data embedding is used in many machine learning applications to create low-dimensional feature representations, which preserves the structure of data points in their original space. In this paper, we examine the scenario of a heterogeneous network with nodes and content of various types. Such networks are notoriously difficult to mine because of the bewildering combination of heterogeneous contents and structures. The creation of a multidimensional embedding of such data opens the door to the use of a wide variety of off-the-shelf mining techniques for multidimensional data. Despite the importance of this problem, limited efforts have been made on embedding a network of scalable, dynamic and heterogeneous data. In such cases, both the content and linkage structure provide important cues for creating a unified feature representation of the underlying network.
In this paper, we design a deep embedding algorithm for networked data. A highly nonlinear multi-layered embedding function is used to capture the complex interactions between the heterogeneous data in a network. Our goal is to create a multi-resolution deep embedding function, that reflects both the local and global network structures, and makes the resulting embedding useful for a variety of data mining tasks. In particular, we demonstrate that the rich content and linkage information in a heterogeneous network can be captured by such an approach, so that similarities among cross-modal data can be measured directly in a common embedding space. Once this goal has been achieved, a wide variety of data mining problems can be solved by applying off-the-shelf algorithms designed for handling vector representations. Our experiments on real-world network datasets show the effectiveness and scalability of the proposed algorithm as compared to the state-of-the-art embedding methods.", "title": "" }, { "docid": "neg:1840232_17", "text": "In this paper we introduce a gamification model for encouraging sustainable multi-modal urban travel in modern European cities. Our aim is to provide a mechanism that encourages users to reflect on their current travel behaviours and to engage in more environmentally friendly activities that lead to the formation of sustainable, long-term travel behaviours. To achieve this our users track their own behaviours, set goals, manage their progress towards those goals, and respond to challenges. Our approach uses a point accumulation and level achievement metaphor to abstract from the underlying specifics of individual behaviours and goals.", "title": "" }, { "docid": "neg:1840232_18", "text": "Among the tests of a leader, few are more challenging-and more painful-than recovering from a career catastrophe. Most fallen leaders, in fact, don't recover. Still, two decades of consulting experience, scholarly research, and their own personal experiences have convinced the authors that leaders can triumph over tragedy--if they do so deliberately. Great business leaders have much in common with the great heroes of universal myth, and they can learn to overcome profound setbacks by thinking in heroic terms. First, they must decide whether or not to fight back. Either way, they must recruit others into their battle.
They must then take steps to recover their heroic status, in the process proving, both to others and to themselves, that they have the mettle necessary to recover their heroic mission. Bernie Marcus exemplifies this process. Devastated after Sandy Sigoloff fired him from Handy Dan, Marcus decided to forgo the distraction of litigation and instead make the marketplace his battleground. Drawing from his network of carefully nurtured relationships with both close and more distant acquaintances, Marcus was able to get funding for a new venture. He proved that he had the mettle, and recovered his heroic status, by building Home Depot, whose entrepreneurial spirit embodied his heroic mission. As Bank One's Jamie Dimon, J.Crew's Mickey Drexler, and even Jimmy Carter, Martha Stewart, and Michael Milken have shown, stunning comebacks are possible in all industries and walks of life. Whatever the cause of your predicament, it makes sense to get your story out. The alternative is likely to be long-lasting unemployment. If the facts of your dismissal cannot be made public because they are damning, then show authentic remorse. The public is often enormously forgiving when it sees genuine contrition and atonement.", "title": "" }, { "docid": "neg:1840232_19", "text": "This paper describes GL4D, an interactive system for visualizing 2-manifolds and 3-manifolds embedded in four Euclidean dimensions and illuminated by 4D light sources. It is a tetrahedron-based rendering pipeline that projects geometry into volume images, an exact parallel to the conventional triangle-based rendering pipeline for 3D graphics. Novel features include GPU-based algorithms for real-time 4D occlusion handling and transparency compositing; we thus enable a previously impossible level of quality and interactivity for exploring lit 4D objects. The 4D tetrahedrons are stored in GPU memory as vertex buffer objects, and the vertex shader is used to perform per-vertex 4D modelview transformations and 4D-to-3D projection. The geometry shader extension is utilized to slice the projected tetrahedrons and rasterize the slices into individual 2D layers of voxel fragments. Finally, the fragment shader performs per-voxel operations such as lighting and alpha blending with previously computed layers. We account for 4D voxel occlusion along the 4D-to-3D projection ray by supporting a multi-pass back-to-front fragment composition along the projection ray; to accomplish this, we exploit a new adaptation of the dual depth peeling technique to produce correct volume image data and to simultaneously render the resulting volume data using 3D transfer functions into the final 2D image. Previous CPU implementations of the rendering of 4D-embedded 3-manifolds could not perform either the 4D depth-buffered projection or manipulation of the volume-rendered image in real-time; in particular, the dual depth peeling algorithm is a novel GPU-based solution to the real-time 4D depth-buffering problem. GL4D is implemented as an integrated OpenGL-style API library, so that the underlying shader operations are as transparent as possible to the user.", "title": "" } ]
1840233
Terminology Extraction: An Analysis of Linguistic and Statistical Approaches
[ { "docid": "pos:1840233_0", "text": "Most recent research in trainable part of speech taggers has explored stochastic tagging. While these taggers obtain high accuracy, linguistic information is captured indirectly, typically in tens of thousands of lexical and contextual probabilities. In (Brill 1992), a trainable rule-based tagger was described that obtained performance comparable to that of stochastic taggers, but captured relevant linguistic information in a small number of simple non-stochastic rules. In this paper, we describe a number of extensions to this rule-based tagger. First, we describe a method for expressing lexical relations in tagging that stochastic taggers are currently unable to express. Next, we show a rule-based approach to tagging unknown words. Finally, we show how the tagger can be extended into a k-best tagger, where multiple tags can be assigned to words in some cases of uncertainty.", "title": "" } ]
[ { "docid": "neg:1840233_0", "text": "We seek a complete description for the neurome of the Drosophila, which involves tracing more than 20,000 neurons. The currently available tracings are sensitive to background clutter and poor contrast of the images. In this paper, we present Tree2Tree2, an automatic neuron tracing algorithm to segment neurons from 3D confocal microscopy images. Building on our previous work in segmentation [1], this method uses an adaptive initial segmentation to detect the neuronal portions, as opposed to a global strategy that often results in under segmentation. In order to connect the disjoint portions, we use a technique called Path Search, which is based on a shortest path approach. An intelligent pruning step is also implemented to delete undesired branches. Tested on 3D confocal microscopy images of GFP labeled Drosophila neurons, the visual and quantitative results suggest that Tree2Tree2 is successful in automatically segmenting neurons in images plagued by background clutter and filament discontinuities.", "title": "" }, { "docid": "neg:1840233_1", "text": "Active learning—a class of algorithms that iteratively searches for the most informative samples to include in a training dataset—has been shown to be effective at annotating data for image classification. However, the use of active learning for object detection is still largely unexplored as determining informativeness of an object-location hypothesis is more difficult. In this paper, we address this issue and present two metrics for measuring the informativeness of an object hypothesis, which allow us to leverage active learning to reduce the amount of annotated data needed to achieve a target object detection performance. Our first metric measures “localization tightness” of an object hypothesis, which is based on the overlapping ratio between the region proposal and the final prediction. Our second metric measures “localization stability” of an object hypothesis, which is based on the variation of predicted object locations when input images are corrupted by noise. Our experimental results show that by augmenting a conventional active-learning algorithm designed for classification with the proposed metrics, the amount of labeled training data required can be reduced up to 25%. Moreover, on PASCAL 2007 and 2012 datasets our localization-stability method has an average relative improvement of 96.5% and 81.9% over the base-line method using classification only. Asian Conference on Computer Vision This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. 
Copyright c © Mitsubishi Electric Research Laboratories, Inc., 2018 201 Broadway, Cambridge, Massachusetts 02139 Localization-Aware Active Learning for Object", "title": "" }, { "docid": "neg:1840233_2", "text": "Semantic Textual Similarity (STS) measures the degree of semantic equivalence between two segments of text, even though the similar context is expressed using different words. The textual segments are word phrases, sentences, paragraphs or documents. The similarity can be measured using lexical, syntactic and semantic information embedded in the sentences. The STS task in SemEval workshop is viewed as a regression problem, where real-valued output is clipped to the range 0-5 on a sentence pair. In this paper, empirical evaluations are carried using lexical, syntactic and semantic features on STS 2016 dataset. A new syntactic feature, Phrase Entity Alignment (PEA) is proposed. A phrase entity is a conceptual unit in a sentence with a subject or an object and its describing words. PEA aligns phrase entities present in the sentences based on their similarity scores. STS score is measured by combing the similarity scores of all aligned phrase entities. The impact of PEA on semantic textual equivalence is depicted using Pearson correlation between system generated scores and the human annotations. The proposed system attains a mean score of 0.7454 using random forest regression model. The results indicate that the system using the lexical, syntactic and semantic features together with PEA feature perform comparably better than existing systems.", "title": "" }, { "docid": "neg:1840233_3", "text": "This essay focuses on possible nonhuman applications of CRISPR/Cas9 that are likely to be widely overlooked because they are unexpected and, in some cases, perhaps even \"frivolous.\" We look at five uses for \"CRISPR Critters\": wild de-extinction, domestic de-extinction, personal whim, art, and novel forms of disease prevention. We then discuss the current regulatory framework and its possible limitations in those contexts. We end with questions about some deeper issues raised by the increased human control over life on earth offered by genome editing.", "title": "" }, { "docid": "neg:1840233_4", "text": "The large amount of videos popping up every day, make it is more and more critical that key information within videos can be extracted and understood in a very short time. Video summarization, the task of finding the smallest subset of frames, which still conveys the whole story of a given video, is thus of great significance to improve efficiency of video understanding. In this paper, we propose a novel Dilated Temporal Relational Generative Adversarial Network (DTR-GAN) to achieve framelevel video summarization. Given a video, it can select a set of key frames, which contains the most meaningful and compact information. Specifically, DTR-GAN learns a dilated temporal relational generator and a discriminator with three-player loss in an adversarial manner. A new dilated temporal relation (DTR) unit is introduced for enhancing temporal representation capturing. The generator aims to select key frames by using DTR units to effectively exploit global multi-scale temporal context and to complement the commonly used Bi-LSTM. 
To ensure that the summaries capture enough key video representation from a global perspective rather than a trivial, randomly shortened sequence, we present a discriminator that learns to enforce both the information completeness and compactness of summaries via a three-player loss. The three-player loss includes the generated summary loss, the random summary loss, and the real summary (ground-truth) loss, which play important roles for better regularizing the learned model to obtain useful summaries. Comprehensive experiments on two public datasets, SumMe and TVSum, show the superiority of our DTR-GAN over the state-of-the-art approaches.", "title": "" }, { "docid": "neg:1840233_5", "text": "To solve the big topic modeling problem, we need to reduce both time and space complexities of batch latent Dirichlet allocation (LDA) algorithms. Although parallel LDA algorithms on the multi-processor architecture have low time and space complexities, their communication costs among processors often scale linearly with the vocabulary size and the number of topics, leading to a serious scalability problem. To reduce the communication complexity among processors for a better scalability, we propose a novel communication-efficient parallel topic modeling architecture based on power law, which consumes orders of magnitude less communication time when the number of topics is large. We combine the proposed communication-efficient parallel architecture with the online belief propagation (OBP) algorithm referred to as POBP for big topic modeling tasks. Extensive empirical results confirm that POBP has the following advantages to solve the big topic modeling problem: 1) high accuracy, 2) communication efficiency, 3) fast speed, and 4) constant memory usage when compared with recent state-of-the-art parallel LDA algorithms on the multi-processor architecture. Index Terms—Big topic modeling, latent Dirichlet allocation, communication complexity, multi-processor architecture, online belief propagation, power law.", "title": "" }, { "docid": "neg:1840233_6", "text": "Over the past few years, the realm of embedded systems has expanded to include a wide variety of products, ranging from digital cameras, to sensor networks, to medical imaging systems. Consequently, engineers strive to create ever smaller and faster products, many of which have stringent power requirements. Coupled with increasing pressure to decrease costs and time-to-market, the design constraints of embedded systems pose a serious challenge to embedded systems designers. Reconfigurable hardware can provide a flexible and efficient platform for satisfying the area, performance, cost, and power requirements of many embedded systems. This article presents an overview of reconfigurable computing in embedded systems, in terms of the benefits it can provide, how it has already been used, design issues, and hurdles that have slowed its adoption.", "title": "" }, { "docid": "neg:1840233_7", "text": "INTRODUCTION\nFluorescence anisotropy (FA) is one of the major established methods accepted by industry and regulatory agencies for understanding the mechanisms of drug action and selecting drug candidates utilizing a high-throughput format.\n\n\nAREAS COVERED\nThis review covers the basics of FA and complementary methods, such as fluorescence lifetime anisotropy and their roles in the drug discovery process. The authors highlight the factors affecting FA readouts, fluorophore selection and instrumentation.
Furthermore, the authors describe the recent development of a successful, commercially valuable FA assay for long QT syndrome drug toxicity to illustrate the role that FA can play in the early stages of drug discovery.\n\n\nEXPERT OPINION\nDespite the success in drug discovery, the FA-based technique experiences competitive pressure from other homogeneous assays. That being said, FA is an established yet rapidly developing technique, recognized by academic institutions, the pharmaceutical industry and regulatory agencies across the globe. The technical problems encountered in working with small molecules in homogeneous assays are largely solved, and new challenges come from more complex biological molecules and nanoparticles. With that, FA will remain one of the major work-horse techniques leading to precision (personalized) medicine.", "title": "" }, { "docid": "neg:1840233_8", "text": "Our work in this paper presents a prediction of quality of experience based on full reference parametric (SSIM, VQM) and application metrics (resolution, bit rate, frame rate) in SDN networks. First, we used DCR (Degradation Category Rating) as the subjective method to build the training model and perform validation; this method is based not only on the quality of the received video but also on the original video. However, all subjective methods are too expensive, do not take place in real time and take much time; for example, our method takes three hours to determine the average MOS (Mean Opinion Score). That is why we propose a novel method based on machine learning algorithms to obtain the quality of experience in an objective manner. Previous research in this field led us to use four algorithms: Decision Tree (DT), Neural Network, K-nearest neighbors (KNN) and Random Forest (RF), thanks to their efficiency. We have used two metrics recommended by the VQEG group to assess the best algorithm: the Pearson correlation coefficient r and the Root-Mean-Square Error (RMSE). The last part of the paper describes the environment, based on Weka to analyze the ML algorithms, the MSU tool to calculate SSIM and VQM, and Mininet for the SDN simulation.", "title": "" }, { "docid": "neg:1840233_9", "text": "There is great interest in developing rechargeable lithium batteries with higher energy capacity and longer cycle life for applications in portable electronic devices, electric vehicles and implantable medical devices. Silicon is an attractive anode material for lithium batteries because it has a low discharge potential and the highest known theoretical charge capacity (4,200 mAh g(-1); ref. 2). Although this is more than ten times higher than existing graphite anodes and much larger than various nitride and oxide materials, silicon anodes have limited applications because silicon's volume changes by 400% upon insertion and extraction of lithium, which results in pulverization and capacity fading. Here, we show that silicon nanowire battery electrodes circumvent these issues as they can accommodate large strain without pulverization, provide good electronic contact and conduction, and display short lithium insertion distances. We achieved the theoretical charge capacity for silicon anodes and maintained a discharge capacity close to 75% of this maximum, with little fading during cycling.", "title": "" }, { "docid": "neg:1840233_10", "text": "Dynamic programming (DP) is a powerful paradigm for general, nonlinear optimal control. Computing exact DP solutions is in general only possible when the process states and the control actions take values in a small discrete set.
In practice, it is necessary to approximate the solutions. Therefore, we propose an algorithm for approximate DP that relies on a fuzzy partition of the state space, and on a discretization of the action space. This fuzzy Q-iteration algorithm works for deterministic processes, under the discounted return criterion. We prove that fuzzy Q-iteration asymptotically converges to a solution that lies within a bound of the optimal solution. A bound on the suboptimality of the solution obtained in a finite number of iterations is also derived. Under continuity assumptions on the dynamics and on the reward function, we show that fuzzy Q-iteration is consistent, i.e., that it asymptotically obtains the optimal solution as the approximation accuracy increases. These properties hold both when the parameters of the approximator are updated in a synchronous fashion, and when they are updated asynchronously. The asynchronous algorithm is proven to converge at least as fast as the synchronous one. The performance of fuzzy Q-iteration is illustrated in a two-link manipulator control problem. © 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "neg:1840233_11", "text": "A growing number of children and adolescents are involved in resistance training in schools, fitness centers, and sports training facilities. In addition to increasing muscular strength and power, regular participation in a pediatric resistance training program may have a favorable influence on body composition, bone health, and reduction of sports-related injuries. Resistance training targeted to improve low fitness levels, poor trunk strength, and deficits in movement mechanics can offer observable health and fitness benefits to young athletes. However, pediatric resistance training programs need to be well-designed and supervised by qualified professionals who understand the physical and psychosocial uniqueness of children and adolescents. The sensible integration of different training methods along with the periodic manipulation of program design variables over time will keep the training stimulus effective, challenging, and enjoyable for the participants.", "title": "" }, { "docid": "neg:1840233_12", "text": "A cognitive map has long been the dominant metaphor for hippocampal function, embracing the idea that place cells encode a geometric representation of space. However, evidence for predictive coding, reward sensitivity and policy dependence in place cells suggests that the representation is not purely spatial. We approach this puzzle from a reinforcement learning perspective: what kind of spatial representation is most useful for maximizing future reward? We show that the answer takes the form of a predictive representation. This representation captures many aspects of place cell responses that fall outside the traditional view of a cognitive map. Furthermore, we argue that entorhinal grid cells encode a low-dimensionality basis set for the predictive representation, useful for suppressing noise in predictions and extracting multiscale structure for hierarchical planning.", "title": "" }, { "docid": "neg:1840233_13", "text": "Communication is an interactive, complex, structured process involving agents that are capable of drawing conclusions from the information they have available about some real-life situations. Such situations are generally characterized as being imperfect. In this paper, we aim to address learning from the perspective of the communication between agents.
To learn a collection of propositions concerning some situation is to incorporate it within one's knowledge about that situation. That is, the key factor in this activity is for the goal agent, where agents may switch roles if appropriate, to integrate the information offered with what it already knows. This may require a process of belief revision, which suggests that the process of incorporation of new information should be modeled nonmonotonically. For reasoning we shall employ a three-valued nonmonotonic logic that formalizes some aspects of revisable reasoning and is accessible to implementation. The logic is sound and complete. A theorem-prover of the logic has successfully been implemented.", "title": "" }, { "docid": "neg:1840233_14", "text": "A long-standing focus on compliance has traditionally constrained development of fundamental design changes for Electronic Health Records (EHRs). We now face a critical need for such innovation, as personalization and data science prompt patients to engage in the details of their healthcare and restore agency over their medical data. In this paper, we propose MedRec: a novel, decentralized record management system to handle EHRs, using blockchain technology. Our system gives patients a comprehensive, immutable log and easy access to their medical information across providers and treatment sites. Leveraging unique blockchain properties, MedRec manages authentication, confidentiality, accountability and data sharing—crucial considerations when handling sensitive information. A modular design integrates with providers' existing, local data storage solutions, facilitating interoperability and making our system convenient and adaptable. We incentivize medical stakeholders (researchers, public health authorities, etc.) to participate in the network as blockchain “miners”. This provides them with access to aggregate, anonymized data as mining rewards, in return for sustaining and securing the network via Proof of Work. MedRec thus enables the emergence of data economics, supplying big data to empower researchers while engaging patients and providers in the choice to release metadata. The purpose of this paper is to expose, in preparation for field tests, a working prototype through which we analyze and discuss our approach and the potential for blockchain in health IT and research.", "title": "" }, { "docid": "neg:1840233_15", "text": "After being introduced in 2009, the first fully homomorphic encryption (FHE) scheme has created significant excitement in academia and industry. Despite rapid advances in the last 6 years, FHE schemes are still not ready for deployment due to an efficiency bottleneck. Here we introduce a custom hardware accelerator optimized for a class of reconfigurable logic to bring LTV-based somewhat homomorphic encryption (SWHE) schemes one step closer to deployment in real-life applications. The accelerator we present is connected via a fast PCIe interface to a CPU platform to provide homomorphic evaluation services to any application that needs to support blinded computations. Specifically we introduce a number theoretic transform based multiplier architecture capable of efficiently handling very large polynomials. When synthesized for the Xilinx Virtex 7 family the presented architecture can compute the product of large polynomials in under 6.25 msec, making it the fastest multiplier design of its kind currently available in the literature; it is more than 102 times faster than a software implementation.
Using this multiplier we can compute a relinearization operation in 526 msec. When used as an accelerator, for instance, to evaluate the AES block cipher, we estimate a per block homomorphic evaluation performance of 442 msec yielding performance gains of 28.5 and 17 times over similar CPU and GPU implementations, respectively.", "title": "" }, { "docid": "neg:1840233_16", "text": "This paper presents an algorithm for fingerprint image restoration using Digital Reaction-Diffusion System (DRDS). The DRDS is a model of a discrete-time discrete-space nonlinear reaction-diffusion dynamical system, which is useful for generating biological textures, patterns and structures. This paper focuses on the design of a fingerprint restoration algorithm that combines (i) a ridge orientation estimation technique using an iterative coarse-to-fine processing strategy and (ii) an adaptive DRDS having a capability of enhancing low-quality fingerprint images using the estimated ridge orientation. The phase-only image matching technique is employed for evaluating the similarity between an original fingerprint image and a restored image. The proposed algorithm may be useful for person identification applications using fingerprint images. key words: reaction-diffusion system, pattern formation, digital signal processing, digital filters, fingerprint restoration", "title": "" }, { "docid": "neg:1840233_17", "text": "The present research tested the hypothesis that concepts of gratitude are prototypically organized and explored whether lay concepts of gratitude are broader than researchers' concepts of gratitude. In five studies, evidence was found that concepts of gratitude are indeed prototypically organized. In Study 1, participants listed features of gratitude. In Study 2, participants reliably rated the centrality of these features. In Studies 3a and 3b, participants perceived that a hypothetical other was experiencing more gratitude when they read a narrative containing central as opposed to peripheral features. In Study 4, participants remembered more central than peripheral features in gratitude narratives. In Study 5a, participants generated more central than peripheral features when they wrote narratives about a gratitude incident, and in Studies 5a and 5b, participants generated both more specific and more generalized types of gratitude in similar narratives. Throughout, evidence showed that lay conceptions of gratitude are broader than current research definitions.", "title": "" }, { "docid": "neg:1840233_18", "text": "An important goal of scientific data analysis is to understand the behavior of a system or process based on a sample of the system. In many instances it is possible to observe both input parameters and system outputs, and characterize the system as a high-dimensional function. Such data sets arise, for instance, in large numerical simulations, as energy landscapes in optimization problems, or in the analysis of image data relating to biological or medical parameters. This paper proposes an approach to analyze and visualizing such data sets. The proposed method combines topological and geometric techniques to provide interactive visualizations of discretely sampled high-dimensional scalar fields. The method relies on a segmentation of the parameter space using an approximate Morse-Smale complex on the cloud of point samples. For each crystal of the Morse-Smale complex, a regression of the system parameters with respect to the output yields a curve in the parameter space. 
The result is a simplified geometric representation of the Morse-Smale complex in the high-dimensional input domain. Finally, the geometric representation is embedded in 2D, using dimension reduction, to provide a visualization platform. The geometric properties of the regression curves enable the visualization of additional information about each crystal such as local and global shape, width, length, and sampling densities. The method is illustrated on several synthetic examples of two-dimensional functions. Two use cases, using data sets from the UCI machine learning repository, demonstrate the utility of the proposed approach on real data. Finally, in collaboration with domain experts, the proposed method is applied to two scientific challenges: the analysis of parameters of climate simulations and their relationship to predicted global energy flux, and the concentrations of chemical species in a combustion simulation and their integration with temperature.", "title": "" } ]
1840234
A Blackboard Based Hybrid Multi-Agent System for Improving Classification Accuracy Using Reinforcement Learning Techniques
[ { "docid": "pos:1840234_0", "text": "This paper reviews the supervised learning versions of the no-free-lunch theorems in a simpli ed form. It also discusses the signi cance of those theorems, and their relation to other aspects of supervised learning.", "title": "" }, { "docid": "pos:1840234_1", "text": "Brain tumor segmentation is an important task in medical image processing. Early diagnosis of brain tumors plays an important role in improving treatment possibilities and increases the survival rate of the patients. Manual segmentation of the brain tumors for cancer diagnosis, from large amount of MRI images generated in clinical routine, is a difficult and time consuming task. There is a need for automatic brain tumor image segmentation. The purpose of this paper is to provide a review of MRI-based brain tumor segmentation methods. Recently, automatic segmentation using deep learning methods proved popular since these methods achieve the state-of-the-art results and can address this problem better than other methods. Deep learning methods can also enable efficient processing and objective evaluation of the large amounts of MRI-based image data. There are number of existing review papers, focusing on traditional methods for MRI-based brain tumor image segmentation. Different than others, in this paper, we focus on the recent trend of deep learning methods in this field. First, an introduction to brain tumors and methods for brain tumor segmentation is given. Then, the state-of-the-art algorithms with a focus on recent trend of deep learning methods are discussed. Finally, an assessment of the current state is presented and future developments to standardize MRI-based brain tumor segmentation methods into daily clinical routine are addressed. © 2016 The Authors. Published by Elsevier B.V. Peer-review under responsibility of the Organizing Committee of ICAFS 2016.", "title": "" } ]
[ { "docid": "neg:1840234_0", "text": "Psychological First Aid (PFA) is the recommended immediate psychosocial response during crises. As PFA is now widely implemented in crises worldwide, there are increasing calls to evaluate its effectiveness. World Vision used PFA as a fundamental component of their emergency response following the 2014 conflict in Gaza. Anecdotal reports from Gaza suggest a range of benefits for those who received PFA. Though not intending to undertake rigorous research, World Vision explored learnings about PFA in Gaza through Focus Group Discussions with PFA providers, Gazan women, men and children and a Key Informant Interview with a PFA trainer. The qualitative analyses aimed to determine if PFA helped individuals to feel safe, calm, connected to social supports, hopeful and efficacious - factors suggested by the disaster literature to promote coping and recovery (Hobfoll et al., 2007). Results show positive psychosocial benefits for children, women and men receiving PFA, confirming that PFA contributed to: safety, reduced distress, ability to engage in calming practices and to support each other, and a greater sense of control and hopefulness irrespective of their adverse circumstances. The data shows that PFA formed an important part of a continuum of care to meet psychosocial needs in Gaza and served as a gateway for addressing additional psychosocial support needs. A \"whole-of-family\" approach to PFA showed particularly strong impacts and strengthened relationships. Of note, the findings from World Vision's implementation of PFA in Gaza suggests that future PFA research go beyond a narrow focus on clinical outcomes, to a wider examination of psychosocial, familial and community-based outcomes.", "title": "" }, { "docid": "neg:1840234_1", "text": "One of the constraints in the design of dry switchable adhesives is the compliance trade-off: compliant structures conform better to surfaces but are limited in strength due to high stored strain energy. In this work we study the effects of bending compliance on the shear adhesion pressures of hybrid electrostatic/gecko-like adhesives of various areas. We reaffirm that normal electrostatic preload increases contact area and show that it is more effective on compliant adhesives. We also show that the gain in contact area can compensate for low shear stiffness and adhesives with high bending compliance outperform stiffer adhesives on substrates with large scale roughness.", "title": "" }, { "docid": "neg:1840234_2", "text": "Financial portfolio management is the process of constant redistribution of a fund into different financial products. This paper presents a financial-model-free Reinforcement Learning framework to provide a deep machine learning solution to the portfolio management problem. The framework consists of the Ensemble of Identical Independent Evaluators (EIIE) topology, a Portfolio-Vector Memory (PVM), an Online Stochastic Batch Learning (OSBL) scheme, and a fully exploiting and explicit reward function. This framework is realized in three instants in this work with a Convolutional Neural Network (CNN), a basic Recurrent Neural Network (RNN), and a Long Short-Term Memory (LSTM). They are, along with a number of recently reviewed or published portfolio-selection strategies, examined in three back-test experiments with a trading period of 30 minutes in a cryptocurrency market. 
Cryptocurrencies are electronic and decentralized alternatives to government-issued money, with Bitcoin as the best-known example of a cryptocurrency. All three instances of the framework monopolize the top three positions in all experiments, outdistancing other compared trading algorithms. Although with a high commission rate of 0.25% in the backtests, the framework is able to achieve at least 4-fold returns in 50 days.", "title": "" }, { "docid": "neg:1840234_3", "text": "Objects often occlude each other in scenes; Inferring their appearance beyond their visible parts plays an important role in scene understanding, depth estimation, object interaction and manipulation. In this paper, we study the challenging problem of completing the appearance of occluded objects. Doing so requires knowing which pixels to paint (segmenting the invisible parts of objects) and what color to paint them (generating the invisible parts). Our proposed novel solution, SeGAN, jointly optimizes for both segmentation and generation of the invisible parts of objects. Our experimental results show that: (a) SeGAN can learn to generate the appearance of the occluded parts of objects; (b) SeGAN outperforms state-of-the-art segmentation baselines for the invisible parts of objects; (c) trained on synthetic photo realistic images, SeGAN can reliably segment natural images; (d) by reasoning about occluder-occludee relations, our method can infer depth layering.", "title": "" }, { "docid": "neg:1840234_4", "text": "This paper presents a novel algorithm aiming at analysis and identification of faces viewed from different poses and illumination conditions. Face analysis from a single image is performed by recovering the shape and textures parameters of a 3D Morphable Model in an analysis-by-synthesis fashion. The shape parameters are computed from a shape error estimated by optical flow and the texture parameters are obtained from a texture error. The algorithm uses linear equations to recover the shape and texture parameters irrespective of pose and lighting conditions of the face image. Identification experiments are reported on more than 5000 images from the publicly available CMU-PIE database which includes faces viewed from 13 different poses and under 22 different illuminations. Extensive identification results are available on our web page for future comparison with novel algorithms.", "title": "" }, { "docid": "neg:1840234_5", "text": "From a certain (admittedly narrow) perspective, one of the annoying features of natural language is the ubiquitous syntactic ambiguity. For a computational model intended to assign syntactic descriptions to natural language text, this seem like a design defect. In general, when context and lexical content are taken into account, such syntactic ambiguity can be resolved: sentences used in context show, for the most part, little ambiguity. But the grammar provides many alternative analyses, and gives little guidance about resolving the ambiguity. Prepositional phrase attachment is the canonical case of structural ambiguity, as in the time worn example,", "title": "" }, { "docid": "neg:1840234_6", "text": "Best practice reference models like COBIT, ITIL, and CMMI offer methodical support for the various tasks of IT management and IT governance. Observations reveal that the ways of using these models as well as the motivations and further aspects of their application differ significantly. Rather the models are used in individual ways due to individual interpretations. 
From an academic point of view, we can state that how these models are actually used, as well as the motivation for using them, is not well understood. We develop a framework in order to structure different dimensions and modes of reference model application in practice. The development is based on expert interviews and a literature review. Hence we use design-oriented and qualitative research methods to develop an artifact, a ‘framework of reference model application’. This framework development is the first step in a larger research program which combines different methods of research. The first goal is to deepen insight and improve understanding. In future research, the framework will be used to survey and analyze reference model application. The authors assume that “typical” application patterns exist beyond individual dimensions of application. The framework developed provides an opportunity for a systematic collection of data thereon. Furthermore, the so far limited knowledge of reference model application complicates the implementation as well as the use of these models. Thus, detailed knowledge of different application patterns is required for effective support of enterprises using reference models. We assume that the deeper understanding of different patterns will support method development for implementation and use.", "title": "" }, { "docid": "neg:1840234_7", "text": "Purpose – Based on the stimulus-organism-response model, the purpose of this paper is to develop an integrated model to explore the effects of six marketing-mix components (stimuli) on consumer loyalty (response) through consumer value (organism) in social commerce (SC). Design/methodology/approach – In order to target online social buyers, a web-based survey was employed. Structural equation modeling with partial least squares (PLS) is used to analyze valid data from 599 consumers who have repurchase experience via Facebook. Findings – The results from PLS analysis show that all components of SC marketing mix (SCMM) have significant effects on SC consumer value. Moreover, SC customer value positively influences SC customer loyalty (CL). Research limitations/implications – The data for this study are collected from Facebook only and the sample size is limited; thus, replication studies are needed to improve generalizability and data representativeness of the study. Moreover, longitudinal studies are needed to verify the causality among the constructs in the proposed research model. Practical implications – SC sellers should implement more effective SCMM strategies to foster SC CL through better SCMM decisions. Social implications – The SCMM components represent the collective benefits of social interaction, exemplifying the importance of effective communication and interaction among SC customers. Originality/value – This study develops a parsimonious model to explain the over-arching effects of SCMM components on CL in SC mediated by customer value. It confirms that utilitarian, hedonic, and social values can be applied to online SC and that SCMM can be leveraged to achieve these values.", "title": "" }, { "docid": "neg:1840234_8", "text": "Magnetic levitation (Maglev) is becoming a popular transportation topic all around the globe. Maglev systems have been successfully implemented in many applications such as frictionless bearings, high-speed maglev trains, and fast tool servo systems. Due to the instability and nonlinearity of the maglev system, the design of controllers for a maglev system is a difficult task.
The literature shows that many authors have proposed various controllers to stabilize the position of the object in the air. The main drawback of these controllers is that symmetric constraints like stability conditions, decay rate conditions, disturbance rejection and the eddy-current-based force due to the motion of the levitated object are not taken into consideration. In this paper, a linear matrix inequality based fuzzy controller is proposed to reduce vibration in the object. The efficacy of the proposed controller is tested on a maglev system considering symmetric constraints and the eddy-current-based force.", "title": "" }, { "docid": "neg:1840234_9", "text": "Eyelid bags are the result of relaxation of lid structures like the skin, the orbicularis muscle, and mainly the septum, with subsequent protrusion or pseudo herniation of intraorbital fat contents. The logical treatment of baggy upper and lower eyelids should therefore include repositioning the herniated fat into the orbit and strengthening the attenuated septum in the form of a septorhaphy as a hernia repair. The preservation of orbital fat results in a more youthful appearance. The operative technique of the orbital septorhaphy is demonstrated for the upper and lower eyelid. A prospective series of 60 patients (50 upper and 90 lower blepharoplasties) with a maximum follow-up of 17 months were analyzed. Pleasing results were achieved in 56 patients. A partial recurrence was noted in 3 patients and widening of the palpebral fissure in 1 patient. Orbital septorhaphy for baggy eyelids is a rational, reliable procedure to correct the herniation of orbital fat in the upper and lower eyelids. Tightening of the orbicularis muscle and skin may be added as usual. The procedure is technically simple and without trauma to the orbital contents. The morbidity is minimal, the rate of complications is low, and the results are pleasing and reliable.", "title": "" }, { "docid": "neg:1840234_10", "text": "Conceptual modeling is one major topic in information systems research and becomes even more important with the rise of new software engineering principles like model-driven architecture (MDA) or service-oriented architectures (SOA). Research on conceptual modeling is characterized by a dilemma: Empirical research confirms that in practice conceptual modeling is often perceived as difficult and not done well. The application of reusable conceptual models is a promising approach to support model designers. At the same time, the IS research community calls for a sounder theoretical base for conceptual modeling. The design science research paradigm delivers a framework to fortify the theoretical foundation of research on conceptual models. We provide insights on how to achieve both relevance and rigor in conceptual modeling by identifying requirements for reusable conceptual models on the basis of the design science research paradigm.", "title": "" }, { "docid": "neg:1840234_11", "text": "BACKGROUND\nThe evidence base on the prevalence of dementia is expanding rapidly, particularly in countries with low and middle incomes.
A reappraisal of global prevalence and numbers is due, given the significant implications for social and public policy and planning.\n\n\nMETHODS\nIn this study we provide a systematic review of the global literature on the prevalence of dementia (1980-2009) and meta-analysis to estimate the prevalence and numbers of those affected, aged ≥60 years in 21 Global Burden of Disease regions.\n\n\nRESULTS\nAge-standardized prevalence for those aged ≥60 years varied in a narrow band, 5%-7% in most world regions, with a higher prevalence in Latin America (8.5%), and a distinctively lower prevalence in the four sub-Saharan African regions (2%-4%). It was estimated that 35.6 million people lived with dementia worldwide in 2010, with numbers expected to almost double every 20 years, to 65.7 million in 2030 and 115.4 million in 2050. In 2010, 58% of all people with dementia lived in countries with low or middle incomes, with this proportion anticipated to rise to 63% in 2030 and 71% in 2050.\n\n\nCONCLUSION\nThe detailed estimates in this study constitute the best current basis for policymaking, planning, and allocation of health and welfare resources in dementia care. The age-specific prevalence of dementia varies little between world regions, and may converge further. Future projections of numbers of people with dementia may be modified substantially by preventive interventions (lowering incidence), improvements in treatment and care (prolonging survival), and disease-modifying interventions (preventing or slowing progression). All countries need to commission nationally representative surveys that are repeated regularly to monitor trends.", "title": "" }, { "docid": "neg:1840234_12", "text": "Determining blood type is essential before administering a blood transfusion, including in emergency situations. Currently, these tests are performed manually by technicians, which can lead to human errors. Various systems have been developed to automate these tests, but none is able to perform the analysis in time for emergency situations. This work aims to develop an automatic system to perform these tests in a short period of time, adapting to emergency situations. To do so, it uses the slide test and image processing techniques using the IMAQ Vision from National Instruments. The image captured after the slide test is processed to detect the occurrence of agglutination. Next, the classification algorithm determines the blood type under analysis. Finally, all the information is stored in a database. Thus, the system allows determining the blood type in an emergency, eliminating transfusions based on the principle of the universal donor and reducing transfusion reaction risks.", "title": "" }, { "docid": "neg:1840234_13", "text": "This study was undertaken to investigate the positive and negative effects of excessive Internet use on undergraduate students. The Internet Effect Scale (IES), especially constructed by the authors to determine these effects, consisted of seven dimensions, namely: behavioral problems, interpersonal problems, educational problems, psychological problems, physical problems, Internet abuse, and positive effects. The sample consisted of 200 undergraduate students studying at the GC University Lahore, Pakistan. A set of Pearson Product Moment correlations showed positive associations between time spent on the Internet and various dimensions of the IES, indicating that excessive Internet use can lead to a host of problems of an educational, physical, psychological and interpersonal nature.
However, a greater number of students reported positive than negative effects of Internet use. Without negating the advantages of the Internet, the current findings suggest that Internet use should be within reasonable limits, focusing more on activities enhancing one's productivity.", "title": "" }, { "docid": "neg:1840234_14", "text": "Entity matching (EM) is a critical part of data integration. We study how to synthesize entity matching rules from positive-negative matching examples. The core of our solution is program synthesis, a powerful tool to automatically generate rules (or programs) that satisfy a given high-level specification, via a predefined grammar. This grammar describes a General Boolean Formula (GBF) that can include arbitrary attribute matching predicates combined by conjunctions.", "title": "" }, { "docid": "neg:1840234_15", "text": "The purification of recombinant proteins by affinity chromatography is one of the most efficient strategies due to the high recovery yields and purity achieved. However, this is dependent on the availability of specific affinity adsorbents for each particular target protein. The diversity of proteins to be purified augments the complexity and number of specific affinity adsorbents needed, and therefore generic platforms for the purification of recombinant proteins are appealing strategies. This justifies why genetically encoded affinity tags became so popular for recombinant protein purification, as these systems only require specific ligands for the capture of the fusion protein through a pre-defined affinity tag tail. There is a wide range of available affinity pairs \"tag-ligand\" combining biological or structural affinity ligands with the respective binding tags. This review gives a general overview of the well-established \"tag-ligand\" systems available for fusion protein purification and also explores current unconventional strategies under development.", "title": "" }, { "docid": "neg:1840234_16", "text": "Context: Architecture-centric software evolution (ACSE) enables changes in system's structure and behaviour while maintaining a global view of the software to address evolution-centric trade-offs. The existing research and practices for ACSE primarily focus on design-time evolution and runtime adaptations to accommodate changing requirements in existing architectures. Objectives: We aim to identify, taxonomically classify and systematically compare the existing research focused on enabling or enhancing change reuse to support ACSE. Method: We conducted a systematic literature review of 32 qualitatively selected studies and taxonomically classified these studies based on solutions that enable (i) empirical acquisition and (ii) systematic application of architecture evolution reuse knowledge (AERK) to guide ACSE. Results: We identified six distinct research themes that support acquisition and application of AERK. We investigated (i) how evolution reuse knowledge is defined, classified and represented in the existing research to support ACSE and (ii) what are the existing methods, techniques and solutions to support empirical acquisition and systematic application of AERK. Conclusions: Change patterns (34% of selected studies) represent a predominant solution, followed by evolution styles (25%) and adaptation strategies and policies (22%) to enable application of reuse knowledge.
Empirical methods for acquisition of reuse knowledge represent 19%, including pattern discovery, configuration analysis, evolution and maintenance prediction techniques (approximately 6% each). A lack of focus on empirical acquisition of reuse knowledge suggests the need for solutions with architecture change mining as a complementary and integrated phase for architecture change execution. Copyright © 2014 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "neg:1840234_17", "text": "The objective of this paper is to control the speed of a Permanent Magnet Synchronous Motor (PMSM) over a wide range of speeds with minimum time and low cost. Therefore, a comparative performance analysis of the PMSM on the basis of speed regulation has been done in this study. A comparison of two control strategies, i.e. Field Oriented Control (FOC) without a sensorless Model Reference Adaptive System (MRAS) and FOC with sensorless MRAS, has been carried out. Sensorless speed control of the PMSM is achieved by using the estimated speed deviation as the feedback signal for the PI controller. The performance of both control strategies has been evaluated in MATLAB Simulink software. Simulation studies show the response of the PMSM speed during various conditions of load and speed variations. Obtained results reveal that the proposed MRAS technique can effectively estimate the rotor speed with high exactness, and the torque response is significantly quicker than that of the system without the MRAS control.", "title": "" }, { "docid": "neg:1840234_18", "text": "In this paper, a broadband high-power eight-way coaxial waveguide power combiner with an axially symmetric structure is proposed. A combination of circuit model and full electromagnetic wave methods is used to simplify the design procedure by increasing the role of the circuit model and, in contrast, reducing the amount of full wave optimization. The presented structure is compact and easy to fabricate. Keeping its return loss greater than 12 dB, the constructed combiner operates within 112% bandwidth from 520 to 1860 MHz.", "title": "" }, { "docid": "neg:1840234_19", "text": "This letter presents a novel computationally efficient and robust pattern tracking method based on time-encoded, frame-free visual data. Recent interdisciplinary developments, combining inputs from engineering and biology, have yielded a novel type of camera that encodes visual information into a continuous stream of asynchronous, temporal events. These events encode temporal contrast and intensity locally in space and time. We show that the sparse yet accurately timed information is well suited as a computational input for object tracking. In this letter, visual data processing is performed for each incoming event at the time it arrives. The method provides a continuous and iterative estimation of the geometric transformation between the model and the events representing the tracked object. It can handle isometry, similarities, and affine distortions and allows for unprecedented real-time performance at equivalent frame rates in the kilohertz range on a standard PC. Furthermore, by using the dimension of time that is currently underexploited by most artificial vision systems, the method we present is able to solve ambiguous cases of object occlusions that classical frame-based techniques handle poorly.", "title": "" } ]
1840235
ASTRO: A Datalog System for Advanced Stream Reasoning
[ { "docid": "pos:1840235_0", "text": "There is great interest in exploiting the opportunity provided by cloud computing platforms for large-scale analytics. Among these platforms, Apache Spark is growing in popularity for machine learning and graph analytics. Developing efficient complex analytics in Spark requires deep understanding of both the algorithm at hand and the Spark API or subsystem APIs (e.g., Spark SQL, GraphX). Our BigDatalog system addresses the problem by providing concise declarative specification of complex queries amenable to efficient evaluation. Towards this goal, we propose compilation and optimization techniques that tackle the important problem of efficiently supporting recursion in Spark. We perform an experimental comparison with other state-of-the-art large-scale Datalog systems and verify the efficacy of our techniques and effectiveness of Spark in supporting Datalog-based analytics.", "title": "" }, { "docid": "pos:1840235_1", "text": "Weighted signed networks (WSNs) are networks in which edges are labeled with positive and negative weights. WSNs can capture like/dislike, trust/distrust, and other social relationships between people. In this paper, we consider the problem of predicting the weights of edges in such networks. We propose two novel measures of node behavior: the goodness of a node intuitively captures how much this node is liked/trusted by other nodes, while the fairness of a node captures how fair the node is in rating other nodes' likeability or trust level. We provide axioms that these two notions need to satisfy and show that past work does not meet these requirements for WSNs. We provide a mutually recursive definition of these two concepts and prove that they converge to a unique solution in linear time. We use the two measures to predict the edge weight in WSNs. Furthermore, we show that when compared against several individual algorithms from both the signed and unsigned social network literature, our fairness and goodness metrics almost always have the best predictive power. We then use these as features in different multiple regression models and show that we can predict edge weights on 2 Bitcoin WSNs, an Epinions WSN, 2 WSNs derived from Wikipedia, and a WSN derived from Twitter with more accurate results than past work. Moreover, fairness and goodness metrics form the most significant feature for prediction in most (but not all) cases.", "title": "" } ]
[ { "docid": "neg:1840235_0", "text": "where ri is the reward in cycle i of a given history, and the expected value is taken over all possible interaction histories of π and μ. The choice of γi is a subtle issue that controls how greedy or far sighted the agent should be. Here we use the near-harmonic γi := 1/i2 as this produces an agent with increasing farsightedness of the order of its current age [Hutter2004]. As we desire an extremely general definition of intelligence for arbitrary systems, our space of environments should be as large as possible. An obvious choice is the space of all probability measures, however this causes serious problems as we cannot even describe some of these measures in a finite way.", "title": "" }, { "docid": "neg:1840235_1", "text": "In most challenging data analysis applications, data evolve over time and must be analyzed in near real time. Patterns and relations in such data often evolve over time, thus, models built for analyzing such data quickly become obsolete over time. In machine learning and data mining this phenomenon is referred to as concept drift. The objective is to deploy models that would diagnose themselves and adapt to changing data over time. This chapter provides an application oriented view towards concept drift research, with a focus on supervised learning tasks. First we overview and categorize application tasks for which the problem of concept drift is particularly relevant. Then we construct a reference framework for positioning application tasks within a spectrum of problems related to concept drift. Finally, we discuss some promising research directions from the application perspective, and present recommendations for application driven concept drift research and development.", "title": "" }, { "docid": "neg:1840235_2", "text": "The issue of the variant vs. invariant in personality often arises in diVerent forms of the “person– situation” debate, which is based on a false dichotomy between the personal and situational determination of behavior. Previously reported data are summarized that demonstrate how behavior can vary as a function of subtle situational changes while individual consistency is maintained. Further discussion considers the personal source of behavioral invariance, the situational source of behavioral variation, the person–situation interaction, the nature of behavior, and the “personality triad” of persons, situations, and behaviors, in which each element is understood and predicted in terms of the other two. An important goal for future research is further development of theories and methods for conceptualizing and measuring the functional aspects of situations and of behaviors. One reason for the persistence of the person situation debate may be that it serves as a proxy for a deeper, implicit debate over values such as equality vs. individuality, determinism vs. free will, and Xexibility vs. consistency. However, these value dichotomies may be as false as the person–situation debate that they implicitly drive.  2005 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "neg:1840235_3", "text": "The Multi-task Cascaded Convolutional Networks (MTCNN) has recently demonstrated impressive results on jointly face detection and alignment. 
By using hard sample mining and training a model on FER2013 datasets, we exploit the inherent correlation between face detection and facial expression recognition, and report the results of facial expression recognition based on MTCNN.", "title": "" }, { "docid": "neg:1840235_4", "text": "The Tor network relies on volunteer relay operators for relay bandwidth, which may limit its growth and scaling potential. We propose an incentive scheme for Tor relying on two novel concepts. We introduce TorCoin, an “altcoin” that uses the Bitcoin protocol to reward relays for contributing bandwidth. Relays “mine” TorCoins, then sell them for cash on any existing altcoin exchange. To verify that a given TorCoin represents actual bandwidth transferred, we introduce TorPath, a decentralized protocol for forming Tor circuits such that each circuit is privately-addressable but publicly verifiable. Each circuit’s participants may then collectively mine a limited number of TorCoins, in proportion to the end-to-end transmission goodput they measure on that circuit.", "title": "" }, { "docid": "neg:1840235_5", "text": "The aim of this study was to investigate the consequences of friend networking sites (e.g., Friendster, MySpace) for adolescents' self-esteem and well-being. We conducted a survey among 881 adolescents (10-19-year-olds) who had an online profile on a Dutch friend networking site. Using structural equation modeling, we found that the frequency with which adolescents used the site had an indirect effect on their social self-esteem and well-being. The use of the friend networking site stimulated the number of relationships formed on the site, the frequency with which adolescents received feedback on their profiles, and the tone (i.e., positive vs. negative) of this feedback. Positive feedback on the profiles enhanced adolescents' social self-esteem and well-being, whereas negative feedback decreased their self-esteem and well-being.", "title": "" }, { "docid": "neg:1840235_6", "text": "A portable civilian GPS spoofer is implemented on a digital signal processor and used to characterize spoofing effects and develop defenses against civilian spoofing. This work is intended to equip GNSS users and receiver manufacturers with authentication methods that are effective against unsophisticated spoofing attacks. The work also serves to refine the civilian spoofing threat assessment by demonstrating the challenges involved in mounting a spoofing attack.", "title": "" }, { "docid": "neg:1840235_7", "text": "A modified microstrip Franklin array antenna (MFAA) is proposed for short-range radar applications in vehicle blind spot information systems (BLIS). It is shown that the radiating performance [i.e., absolute gain value and half-power beamwidth (HPBW) angle] can be improved by increasing the number of radiators in the MFAA structure and assigning appropriate values to the antenna geometry parameters. The MFAA possesses a high absolute gain value (>10 dB), good directivity (HPBW <20°) in the E-plane, and large-range coverage (HPBW >80°) in the H-plane at an operating frequency of 24 GHz. Moreover, the 10-dB impedance bandwidth of the proposed antenna is around 250 MHz. The MFAA is, thus, an ideal candidate for automotive BLIS applications.", "title": "" }, { "docid": "neg:1840235_8", "text": "During the last years, agile methods like eXtreme Programming have become increasingly popular.
Parallel to this, more and more organizations rely on process maturity models to assess and improve their own processes or those of suppliers, since it has been getting clear that most project failures can be imputed to inconsistent, undisciplined processes. Many organizations demand CMMI compliance of projects where agile methods are employed. In this situation it is necessary to analyze the interrelations and mutual restrictions between agile methods and approaches for software process analysis and improvement. This paper analyzes to what extent the CMMI process areas can be covered by XP and where adjustments of XP have to be made. Based on this, we describe the limitations of CMMI in an agile environment and show that level 4 or 5 are not feasible under the current specifications of CMMI and XP.", "title": "" }, { "docid": "neg:1840235_9", "text": "If the training dataset is not very large, image recognition is usually implemented with the transfer learning methods. In these methods the features are extracted using a deep convolutional neural network, which was preliminarily trained with an external very-large dataset. In this paper we consider the nonparametric classification of extracted feature vectors with the probabilistic neural network (PNN). The number of neurons at the pattern layer of the PNN is equal to the database size, which causes the low recognition performance and high memory space complexity of this network. We propose to overcome these drawbacks by replacing the exponential activation function in the Gaussian Parzen kernel to the complex exponential functions in the Fej\\'er kernel. We demonstrate that in this case it is possible to implement the network with the number of neurons in the pattern layer proportional to the cubic root of the database size. Thus, the proposed modification of the PNN makes it possible to significantly decrease runtime and memory complexities without loosing its main advantages, namely, extremely fast training procedure and the convergence to the optimal Bayesian decision. An experimental study in visual object category classification and unconstrained face recognition with contemporary deep neural networks have shown, that our approach obtains very efficient and rather accurate decisions for the small training sample in comparison with the well-known classifiers.", "title": "" }, { "docid": "neg:1840235_10", "text": "The automated analysis of facial expressions has been widely used in different research areas, such as biometrics or emotional analysis. Special importance is attached to facial expressions in the area of sign language, since they help to form the grammatical structure of the language and allow for the creation of language disambiguation, and thus are called Grammatical Facial Expressions (GFEs). In this paper we outline the recognition of GFEs used in the Brazilian Sign Language. In order to reach this objective, we have captured nine types of GFEs using a KinectTMsensor, designed a spatial-temporal data representation, modeled the research question as a set of binary classification problems, and employed a Machine Learning technique.", "title": "" }, { "docid": "neg:1840235_11", "text": "This paper defines presence in terms of frames and involvement [1]. The value of this analysis of presence is demonstrated by applying it to several issues that have been raised about presence: residual awareness of nonmediation, imaginary presence, presence as categorical or continuum, and presence breaks. 
The paper goes on to explore the relationship between presence and reality. Goffman introduced frames to try to answer the question, “Under what circumstances do we think things real?” Under frame analysis there are three different conditions under which things are considered unreal, these are explained and related to the experience of presence. Frame analysis is used to show why virtual environments are not usually considered to be part of reality, although the virtual spaces of phone interaction are considered real. The analysis also yields practical suggestions for extending presence within virtual environments. Keywords--presence, frames, virtual environments, mobile phones, Goffman.", "title": "" }, { "docid": "neg:1840235_12", "text": "Soft pneumatic actuators (SPAs) are versatile robotic components enabling diverse and complex soft robot hardware design. However, due to inherent material characteristics exhibited by their primary constitutive material, silicone rubber, they often lack robustness and repeatability in performance. In this article, we present a novel SPA-based bending module design with shell reinforcement. The bidirectional soft actuator presented here is enveloped in a Yoshimura patterned origami shell, which acts as an additional protection layer covering the SPA while providing specific bending resilience throughout the actuator’s range of motion. Mechanical tests are performed to characterize several shell folding patterns and their effect on the actuator performance. Details on design decisions and experimental results using the SPA with origami shell modules and performance analysis are presented; the performance of the bending module is significantly enhanced when reinforcement is provided by the shell. With the aid of the shell, the bending module is capable of sustaining higher inflation pressures, delivering larger blocked torques, and generating the targeted motion trajectory.", "title": "" }, { "docid": "neg:1840235_13", "text": "This research explores a Natural Language Processing technique utilized for the automatic reduction of melodies: the Probabilistic Context-Free Grammar (PCFG). Automatic melodic reduction was previously explored by means of a probabilistic grammar [11] [1]. However, each of these methods used unsupervised learning to estimate the probabilities for the grammar rules, and thus a corpusbased evaluation was not performed. A dataset of analyses using the Generative Theory of Tonal Music (GTTM) exists [13], which contains 300 Western tonal melodies and their corresponding melodic reductions in tree format. In this work, supervised learning is used to train a PCFG for the task of melodic reduction, using the tree analyses provided by the GTTM dataset. The resulting model is evaluated on its ability to create accurate reduction trees, based on a node-by-node comparison with ground-truth trees. Multiple data representations are explored, and example output reductions are shown. Motivations for performing melodic reduction include melodic identification and similarity, efficient storage of melodies, automatic composition, variation matching, and automatic harmonic analysis.", "title": "" }, { "docid": "neg:1840235_14", "text": "We present an intriguing property of visual data that we observe in our attempt to isolate the influence of data for learning a visual representation. 
We observe that we can get better performance than existing model by just conditioning the existing representation on a million unlabeled images without any extra knowledge. As a by-product of this study, we achieve results better than prior state-of-the-art for surface normal estimation on NYU-v2 depth dataset, and improved results for semantic segmentation using a self-supervised representation on PASCAL-VOC 2012 dataset.", "title": "" }, { "docid": "neg:1840235_15", "text": "In this paper, we present a novel meta-feature generation method in the context of meta-learning, which is based on rules that compare the performance of individual base learners in a one-against-one manner. In addition to these new meta-features, we also introduce a new meta-learner called Approximate Ranking Tree Forests (ART Forests) that performs very competitively when compared with several state-of-the-art meta-learners. Our experimental results are based on a large collection of datasets and show that the proposed new techniques can improve the overall performance of meta-learning for algorithm ranking significantly. A key point in our approach is that each performance figure of any base learner for any specific dataset is generated by optimising the parameters of the base learner separately for each dataset.", "title": "" }, { "docid": "neg:1840235_16", "text": "Colorization methods using deep neural networks have become a recent trend. However, most of them do not allow user inputs, or only allow limited user inputs (only global inputs or only local inputs), to control the output colorful images. The possible reason is that it’s difficult to differentiate the influence of different kind of user inputs in network training. To solve this problem, we present a novel deep colorization method, which allows simultaneous global and local inputs to better control the output colorized images. The key step is to design an appropriate loss function that can differentiate the influence of input data, global inputs and local inputs. With this design, our method accepts no inputs, or global inputs, or local inputs, or both global and local inputs, which is not supported in previous deep colorization methods. In addition, we propose a global color theme recommendation system to help users determine global inputs. Experimental results shows that our methods can better control the colorized images and generate state-of-art results.", "title": "" }, { "docid": "neg:1840235_17", "text": "Accurate and automatic organ segmentation from 3D radiological scans is an important yet challenging problem for medical image analysis. Specifically, as a small, soft, and flexible abdominal organ, the pancreas demonstrates very high inter-patient anatomical variability in both its shape and volume. This inhibits traditional automated segmentation methods from achieving high accuracies, especially compared to the performance obtained for other organs, such as the liver, heart or kidneys. To fill this gap, we present an automated system from 3D computed tomography (CT) volumes that is based on a two-stage cascaded approach-pancreas localization and pancreas segmentation. For the first step, we localize the pancreas from the entire 3D CT scan, providing a reliable bounding box for the more refined segmentation step. We introduce a fully deep-learning approach, based on an efficient application of holistically-nested convolutional networks (HNNs) on the three orthogonal axial, sagittal, and coronal views.
The resulting HNN per-pixel probability maps are then fused using pooling to reliably produce a 3D bounding box of the pancreas that maximizes the recall. We show that our introduced localizer compares favorably to both a conventional non-deep-learning method and a recent hybrid approach based on spatial aggregation of superpixels using random forest classification. The second, segmentation, phase operates within the computed bounding box and integrates semantic mid-level cues of deeply-learned organ interior and boundary maps, obtained by two additional and separate realizations of HNNs. By integrating these two mid-level cues, our method is capable of generating boundary-preserving pixel-wise class label maps that result in the final pancreas segmentation. Quantitative evaluation is performed on a publicly available dataset of 82 patient CT scans using 4-fold cross-validation (CV). We achieve a (mean  ±  std. dev.) Dice similarity coefficient (DSC) of 81.27 ± 6.27% in validation, which significantly outperforms both a previous state-of-the art method and a preliminary version of this work that report DSCs of 71.80 ± 10.70% and 78.01 ± 8.20%, respectively, using the same dataset.", "title": "" }, { "docid": "neg:1840235_18", "text": "Data Mining in medicine is an emerging field of great importance to provide a prognosis and deeper understanding of disease classification, specifically in Mental Health areas. The main objective of this paper is to present a review of the existing research works in the literature, referring to the techniques and algorithms of Data Mining in Mental Health, specifically in the most prevalent diseases such as: Dementia, Alzheimer, Schizophrenia and Depression. Academic databases that were used to perform the searches are Google Scholar, IEEE Xplore, PubMed, Science Direct, Scopus and Web of Science, taking into account as date of publication the last 10 years, from 2008 to the present. Several search criteria were established such as ‘techniques’ AND ‘Data Mining’ AND ‘Mental Health’, ‘algorithms’ AND ‘Data Mining’ AND ‘dementia’ AND ‘schizophrenia’ AND ‘depression’, etc. selecting the papers of greatest interest. A total of 211 articles were found related to techniques and algorithms of Data Mining applied to the main Mental Health diseases. 72 articles have been identified as relevant works of which 32% are Alzheimer’s, 22% dementia, 24% depression, 14% schizophrenia and 8% bipolar disorders. Many of the papers show the prediction of risk factors in these diseases. From the review of the research articles analyzed, it can be said that use of Data Mining techniques applied to diseases such as dementia, schizophrenia, depression, etc. can be of great help to the clinical decision, diagnosis prediction and improve the patient’s quality of life.", "title": "" } ]
1840236
Deep packet inspection using Cuckoo filter
[ { "docid": "pos:1840236_0", "text": "Deep Packet Inspection (DPI) is the state-of-the-art technology for traffic classification. According to the conventional wisdom, DPI is the most accurate classification technique. Consequently, most popular products, either commercial or open-source, rely on some sort of DPI for traffic classification. However, the actual performance of DPI is still unclear to the research community, since the lack of public datasets prevent the comparison and reproducibility of their results. This paper presents a comprehensive comparison of 6 well-known DPI tools, which are commonly used in the traffic classification literature. Our study includes 2 commercial products (PACE and NBAR) and 4 open-source tools (OpenDPI, L7-filter, nDPI, and Libprotoident). We studied their performance in various scenarios (including packet and flow truncation) and at different classification levels (application protocol, application and web service). We carefully built a labeled dataset with more than 750 K flows, which contains traffic from popular applications. We used the Volunteer-Based System (VBS), developed at Aalborg University, to guarantee the correct labeling of the dataset. We released this dataset, including full packet payloads, to the research community. We believe this dataset could become a common benchmark for the comparison and validation of network traffic classifiers. Our results present PACE, a commercial tool, as the most accurate solution. Surprisingly, we find that some open-source tools, such as nDPI and Libprotoident, also achieve very high accuracy.", "title": "" }, { "docid": "pos:1840236_1", "text": "The emergence of SDNs promises to dramatically simplify network management and enable innovation through network programmability. Despite all the hype surrounding SDNs, exploiting its full potential is demanding. Security is still the key concern and is an equally striking challenge that reduces the growth of SDNs. Moreover, the deployment of novel entities and the introduction of several architectural components of SDNs pose new security threats and vulnerabilities. Besides, the landscape of digital threats and cyber-attacks is evolving tremendously, considering SDNs as a potential target to have even more devastating effects than using simple networks. Security is not considered as part of the initial SDN design; therefore, it must be raised on the agenda. This article discusses the state-of-the-art security solutions proposed to secure SDNs. We classify the security solutions in the literature by presenting a thematic taxonomy based on SDN layers/interfaces, security measures, simulation environments, and security objectives. Moreover, the article points out the possible attacks and threat vectors targeting different layers/interfaces of SDNs. The potential requirements and their key enablers for securing SDNs are also identified and presented. Also, the article gives great guidance for secure and dependable SDNs. Finally, we discuss open issues and challenges of SDN security that may be deemed appropriate to be tackled by researchers and professionals in the future.", "title": "" }, { "docid": "pos:1840236_2", "text": "As the popularity of software-defined networks (SDN) and OpenFlow increases, policy-driven network management has received more attention. Manual configuration of multiple devices is being replaced by an automated approach where a software-based, network-aware controller handles the configuration of all network devices. 
Software applications running on top of the network controller provide an abstraction of the topology and facilitate the task of operating the network. We propose OpenSec, an OpenFlow-based security framework that allows a network security operator to create and implement security policies written in human-readable language. Using OpenSec, the user can describe a flow in terms of OpenFlow matching fields, define which security services must be applied to that flow (deep packet inspection, intrusion detection, spam detection, etc.) and specify security levels that define how OpenSec reacts if malicious traffic is detected. In this paper, we first provide a more detailed explanation of how OpenSec converts security policies into a series of OpenFlow messages needed to implement such a policy. Second, we describe how the framework automatically reacts to security alerts as specified by the policies. Third, we perform additional experiments on the GENI testbed to evaluate the scalability of the proposed framework using existing datasets of campus networks. Our results show that up to 95% of attacks in an existing data set can be detected and 99% of malicious source nodes can be blocked automatically. Furthermore, we show that our policy specification language is simpler while offering fast translation times compared to existing solutions.", "title": "" } ]
[ { "docid": "neg:1840236_0", "text": "This investigation is one in a series of studies that address the possibility of stroke rehabilitation using robotic devices to facilitate “adaptive training.” Healthy subjects, after training in the presence of systematically applied forces, typically exhibit a predictable “after-effect.” A critical question is whether this adaptive characteristic is preserved following stroke so that it might be exploited for restoring function. Another important question is whether subjects benefit more from training forces that enhance their errors than from forces that reduce their errors. We exposed hemiparetic stroke survivors and healthy age-matched controls to a pattern of disturbing forces that have been found by previous studies to induce a dramatic adaptation in healthy individuals. Eighteen stroke survivors made 834 movements in the presence of a robot-generated force field that pushed their hands proportional to its speed and perpendicular to its direction of motion — either clockwise or counterclockwise. We found that subjects could adapt, as evidenced by significant after-effects. After-effects were not correlated with the clinical scores that we used for measuring motor impairment. Further examination revealed that significant improvements occurred only when the training forces magnified the original errors, and not when the training forces reduced the errors or were zero. Within this constrained experimental task we found that error-enhancing therapy (as opposed to guiding the limb closer to the correct path) to be more effective than therapy that assisted the subject.", "title": "" }, { "docid": "neg:1840236_1", "text": "Semi-supervised classification has become an active topic recently and a number of algorithms, such as Self-training, have been proposed to improve the performance of supervised classification using unlabeled data. In this paper, we propose a semi-supervised learning framework which combines clustering and classification. Our motivation is that clustering analysis is a powerful knowledge-discovery tool and it may clustering is integrated into Self-training classification to help train a better classifier. In particular, the semi-supervised fuzzy c-means algorithm and support vector machines are used for clustering and classification, respectively. Experimental results on artificial and real datasets demonstrate the advantages of the proposed framework. & 2012 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "neg:1840236_2", "text": "oastal areas, the place where the waters of the seas meet the land are indeed unique places in our global geography. They are endowed with a very wide range of coastal ecosystems like mangroves, coral reefs, lagoons, sea grass, salt marsh, estuary etc. They are unique in a very real economic sense as sites for port and harbour facilities that capture the large monetary benefits associated with waterborne commerce and are highly valued and greatly attractive as sites for resorts and as vacation destinations. The combination of freshwater and salt water in coastal estuaries creates some of the most productive and richest habitats on earth; the resulting bounty in fishes and other marine life can be of great value to coastal nations. In many locations, the coastal topography formed over the millennia provides significant protection from hurricanes, typhoons, and other ocean related disturbances. But these values could diminish or even be lost, if they are not managed. 
Pollution of coastal waters can greatly reduce the production of fish, as can degradation of coastal nursery grounds and other valuable wetland habitats. The storm protection afforded by fringing reefs and mangrove forests can be lost if the corals die or the mangroves removed. Inappropriate development and accompanying despoilment can reduce the attractiveness of the coastal environment, greatly affecting tourism potential. Even ports and harbours require active management if they are to remain productive and successful over the long term. Coastal ecosystem management is thus immensely important for the sustainable use, development and protection of the coastal and marine areas and resources. To achieve this, an understanding of the coastal processes that influence the coastal environments and the ways in which they interact is necessary. It is advantageous to adopt a holistic or systematic approach for solving the coastal problems, since understanding the processes and products of interaction in coastal environments is very complicated. A careful assessment of changes that occur in the coastal", "title": "" }, { "docid": "neg:1840236_3", "text": "Biodegradation of two superabsorbent polymers, a crosslinked, insoluble polyacrylate and an insoluble polyacrylate/polyacrylamide copolymer, in soil by the white-rot fungus, Phanerochaete chrysosporium was investigated. The polymers were both solubilized and mineralized by the fungus but solubilization and mineralization of the copolymer was much more rapid than of the polyacrylate. Soil microbes poorly solublized the polymers and were unable to mineralize either intact polymer. However, soil microbes cooperated with the fungus during polymer degradation in soil, with the fungus solubilizing the polymers and the soil microbes stimulating mineralization. Further, soil microbes were able to significantly mineralize both polymers after solubilization by P. chrysosporium grown under conditions that produced fungal peroxidases or cellobiose dehydrogenase, or after solubilization by photochemically generated Fenton reagent. The results suggest that biodegradation of these polymers in soil is best under conditions that maximize solubilization.", "title": "" }, { "docid": "neg:1840236_4", "text": "We describe an image based rendering approach that generalizes many current image based rendering algorithms, including light field rendering and view-dependent texture mapping. In particular, it allows for lumigraph-style rendering from a set of input cameras in arbitrary configurations (i.e., not restricted to a plane or to any specific manifold). In the case of regular and planar input camera positions, our algorithm reduces to a typical lumigraph approach. When presented with fewer cameras and good approximate geometry, our algorithm behaves like view-dependent texture mapping. The algorithm achieves this flexibility because it is designed to meet a set of specific goals that we describe. We demonstrate this flexibility with a variety of examples.", "title": "" }, { "docid": "neg:1840236_5", "text": "This paper provides an introduction to specifying multilevel models using PROC MIXED. After a brief introduction to the field of multilevel modeling, users are provided with concrete examples of how PROC MIXED can be used to estimate (a) two-level organizational models, (b) two-level growth models, and (c) three-level organizational models. Both random intercept and random intercept and slope models are illustrated.
Examples are shown using different real world data sources, including the publically available Early Childhood Longitudinal Study–Kindergarten cohort data. For each example, different research questions are examined through both narrative explanations and examples of the PROC MIXED code and corresponding output.", "title": "" }, { "docid": "neg:1840236_6", "text": "Emerging applications face the need to store and analyze interconnected data that are naturally depicted as graphs. Recent proposals take the idea of data cubes that have been successfully applied to multidimensional data and extend them to work for interconnected datasets. In our work we revisit the graph cube framework and propose novel mechanisms inspired from information theory in order to help the analyst quickly locate interesting relationships within the rich information contained in the graph cube. The proposed entropy-based filtering of data reveals irregularities and non-uniformity which are often what the decision maker is looking for. We experimentally validate our techniques and demonstrate that the proposed entropy-based filtering can help eliminate large portions of the respective graph cubes.", "title": "" }, { "docid": "neg:1840236_7", "text": "Tree-structured data are becoming ubiquitous nowadays and manipulating them based on similarity is essential for many applications. The generally accepted similarity measure for trees is the edit distance. Although similarity search has been extensively studied, searching for similar trees is still an open problem due to the high complexity of computing the tree edit distance. In this paper, we propose to transform tree-structured data into an approximate numerical multidimensional vector which encodes the original structure information. We prove that the L1 distance of the corresponding vectors, whose computational complexity is O(|T1| + |T2|), forms a lower bound for the edit distance between trees. Based on the theoretical analysis, we describe a novel algorithm which embeds the proposed distance into a filter-and-refine framework to process similarity search on tree-structured data. The experimental results show that our algorithm reduces dramatically the distance computation cost. Our method is especially suitable for accelerating similarity query processing on large trees in massive datasets.", "title": "" }, { "docid": "neg:1840236_8", "text": "We investigate the application of non-orthogonal multiple access (NOMA) with successive interference cancellation (SIC) in downlink multiuser multiple-input multiple-output (MIMO) cellular systems, where the total number of receive antennas at user equipment (UE) ends in a cell is more than the number of transmit antennas at the base station (BS). We first dynamically group the UE receive antennas into a number of clusters equal to or more than the number of BS transmit antennas. A single beamforming vector is then shared by all the receive antennas in a cluster. We propose a linear beamforming technique in which all the receive antennas can significantly cancel the inter-cluster interference. On the other hand, the receive antennas in each cluster are scheduled on the power domain NOMA basis with SIC at the receiver ends. For inter-cluster and intra-cluster power allocation, we provide dynamic power allocation solutions with an objective to maximizing the overall cell capacity. 
An extensive performance evaluation is carried out for the proposed MIMO-NOMA system and the results are compared with those for conventional orthogonal multiple access (OMA)-based MIMO systems and other existing MIMO-NOMA solutions. The numerical results quantify the capacity gain of the proposed MIMO-NOMA model over MIMO-OMA and other existing MIMO-NOMA solutions.", "title": "" }, { "docid": "neg:1840236_9", "text": "Twitter provides search services to help people find new users to follow by recommending popular users or their friends' friends. However, these services do not offer the most relevant users to follow for a user. Furthermore, Twitter does not provide yet the search services to find the most interesting tweet messages for a user either. In this paper, we propose TWITOBI, a recommendation system for Twitter using probabilistic modeling for collaborative filtering which can recommend top-K users to follow and top-K tweets to read for a user. Our novel probabilistic model utilizes not only tweet messages but also the relationships between users. We develop an estimation algorithm for learning our model parameters and present its parallelized algorithm using MapReduce to handle large data. Our performance study with real-life data sets confirms the effectiveness and scalability of our algorithms.", "title": "" }, { "docid": "neg:1840236_10", "text": "Human group activities detection in multi-camera CCTV surveillance videos is a pressing demand on smart surveillance. Previous works on this topic are mainly based on camera topology inference that is hard to apply to real-world unconstrained surveillance videos. In this paper, we propose a new approach for multi-camera group activities detection. Our approach simultaneously exploits intra-camera and inter-camera contexts without topology inference. Specifically, a discriminative graphical model with hidden variables is developed. The intra-camera and inter-camera contexts are characterized by the structure of hidden variables. By automatically optimizing the structure, the contexts are effectively explored. Furthermore, we propose a new spatiotemporal feature, named vigilant area (VA), to characterize the quantity and appearance of the motion in an area. This feature is effective for group activity representation and is easy to extract from a dynamic and crowded scene. We evaluate the proposed VA feature and discriminative graphical model extensively on two real-world multi-camera surveillance video data sets, including a public corpus consisting of 2.5 h of videos and a 468-h video collection, which, to the best of our knowledge, is the largest video collection ever used in human activity detection. The experimental results demonstrate the effectiveness of our approach.", "title": "" }, { "docid": "neg:1840236_11", "text": "The Web provides a fertile ground for word-of-mouth communication and more and more consumers write about and share product-related experiences online. Given the experiential nature of tourism, such first-hand knowledge communicated by other travelers is especially useful for travel decision making. However, very little is known about what motivates consumers to write online travel reviews. A Web-based survey using an online consumer panel was conducted to investigate consumers’ motivations to write online travel reviews. Measurement scales to gauge the motivations to contribute online travel reviews were developed and tested. 
The results indicate that online travel review writers are mostly motivated by helping a travel service provider, concerns for other consumers, and needs for enjoyment/positive self-enhancement. Venting negative feelings through postings is clearly not seen as an important motive. Motivational differences were found for gender and income level. Implications of the findings for online travel communities and tourism marketers are discussed.", "title": "" }, { "docid": "neg:1840236_12", "text": "We address the problem of sharpness enhancement of images. Existing hierarchical techniques that decompose an image into a smooth image and high frequency components based on Gaussian filter and bilateral filter suffer from halo effects, whereas techniques based on weighted least squares extract low contrast features as detail. Other techniques require multiple images and are not tolerant to noise.", "title": "" }, { "docid": "neg:1840236_13", "text": "This work presents an approach for estimating the effect of the fractional-N phase locked loop (Frac-N PLL) phase noise profile on frequency modulated continuous wave (FMCW) radar precision. Unlike previous approaches, the proposed modelling method takes the actual shape of the phase noise profile into account leading to insights on the main regions dominating the precision. Estimates from the proposed model are in very good agreement with statistical simulations and measurement results from an FMCW radar test chip fabricated on an IBM7WL BiCMOS 0.18 μm technology. At 5.8 GHz center frequency, a close-in phase noise of −75 dBc/Hz at 1 kHz offset is measured. A root mean squared (RMS) chirp nonlinearity error of 14.6 kHz and a ranging precision of 0.52 cm are achieved which competes with state-of-the-art FMCW secondary radars.", "title": "" }, { "docid": "neg:1840236_14", "text": "We developed a novel computational framework to predict the perceived trustworthiness of host profile texts in the context of online lodging marketplaces. To achieve this goal, we developed a dataset of 4,180 Airbnb host profiles annotated with perceived trustworthiness. To the best of our knowledge, the dataset along with our models allow for the first computational evaluation of perceived trustworthiness of textual profiles, which are ubiquitous in online peer-to-peer marketplaces. We provide insights into the linguistic factors that contribute to higher and lower perceived trustworthiness for profiles of different lengths.", "title": "" }, { "docid": "neg:1840236_15", "text": "Many companies have developed strategies that include investing heavily in information technology (IT) in order to enhance their performance. Yet, this investment pays off for some companies but not others. This study proposes that organization learning plays a significant role in determining the outcomes of IT. Drawing from resource theory and IT literature, the authors develop the concept of IT competency. Using structural equations modeling with data collected from managers in 271 manufacturing firms, they show that organizational learning plays a significant role in mediating the effects of IT competency on firm performance. Copyright  2003 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "neg:1840236_16", "text": "Currently, we are witnessing a growing trend in the study and application of problems in the framework of Big Data. This is mainly due to the great advantages which come from the knowledge extraction from a high volume of information. 
For this reason, we observe a migration of the standard Data Mining systems towards a new functional paradigm that allows at working with Big Data. By means of the MapReduce model and its different extensions, scalability can be successfully addressed, while maintaining a good fault tolerance during the execution of the algorithms. Among the different approaches used in Data Mining, those models based on fuzzy systems stand out for many applications. Among their advantages, we must stress the use of a representation close to the natural language. Additionally, they use an inference model that allows a good adaptation to different scenarios, especially those with a given degree of uncertainty. Despite the success of this type of systems, their migration to the Big Data environment in the different learning areas is at a preliminary stage yet. In this paper, we will carry out an overview of the main existing proposals on the topic, analyzing the design of these models. Additionally, we will discuss those problems related to the data distribution and parallelization of the current algorithms, and also its relationship with the fuzzy representation of the information. Finally, we will provide our view on the expectations for the future in this framework according to the design of those methods based on fuzzy sets, as well as the open challenges on the topic.", "title": "" }, { "docid": "neg:1840236_17", "text": "The increasing development of educational games as Learning Objects is an important resource for educators to motivate and engage students in several online courses using Learning Management Systems. Students learn more, and enjoy themselves more when they are actively involved, rather than just passive listeners. This article has the aim to discuss about how games may motivate learners, enhancing the teaching-learning process as well as assist instructor designers to develop educational games in order to engage students, based on theories of motivational design and emotional engagement. Keywords— learning objects; games; Learning Management Systems, motivational design, emotional engagement.", "title": "" }, { "docid": "neg:1840236_18", "text": "For two-class discrimination, Ref. [1] claimed that, when covariance matrices of the two classes were unequal, a (class) unbalanced dataset had a negative effect on the performance of linear discriminant analysis (LDA). Through re-balancing 10 realworld datasets, Ref. [1] provided empirical evidence to support the claim using AUC (Area Under the receiver operating characteristic Curve) as the performance metric. We suggest that such a claim is vague if not misleading, there is no solid theoretical analysis presented in [1], and AUC can lead to a quite different conclusion from that led to by misclassification error rate (ER) on the discrimination performance of LDA for unbalanced datasets. Our empirical and simulation studies suggest that, for LDA, the increase of the median of AUC (and thus the improvement of performance of LDA) from re-balancing is relatively small, while, in contrast, the increase of the median of ER (and thus the decline in performance of LDA) from re-balancing is relatively large. Therefore, from our study, there is no reliable empirical evidence to support the claim that a (class) unbalanced data set has a negative effect on the performance of LDA. 
In addition, re-balancing affects the performance of LDA for datasets with either equal or unequal covariance matrices, indicating that having unequal covariance matrices is not a key reason for the difference in performance between original and re-balanced data.", "title": "" }, { "docid": "neg:1840236_19", "text": "In the new interconnected world, we need to secure vehicular cyber-physical systems (VCPS) using sophisticated intrusion detection systems. In this article, we present a novel distributed intrusion detection system (DIDS) designed for a vehicular ad hoc network (VANET). By combining static and dynamic detection agents, that can be mounted on central vehicles, and a control center where the alarms about possible attacks on the system are communicated, the proposed DIDS can be used in both urban and highway environments for real time anomaly detection with good accuracy and response time.", "title": "" } ]
1840237
Falcon: Scaling Up Hands-Off Crowdsourced Entity Matching to Build Cloud Services
[ { "docid": "pos:1840237_0", "text": "Analysts report spending upwards of 80% of their time on problems in data cleaning. The data cleaning process is inherently iterative, with evolving cleaning workflows that start with basic exploratory data analysis on small samples of dirty data, then refine analysis with more sophisticated/expensive cleaning operators (i.e., crowdsourcing), and finally apply the insights to a full dataset. While an analyst often knows at a logical level what operations need to be done, they often have to manage a large search space of physical operators and parameters. We present Wisteria, a system designed to support the iterative development and optimization of data cleaning workflows, especially ones that utilize the crowd. Wisteria separates logical operations from physical implementations, and driven by analyst feedback, suggests optimizations and/or replacements to the analyst’s choice of physical implementation. We highlight research challenges in sampling, in-flight operator replacement, and crowdsourcing. We overview the system architecture and these techniques, then propose a demonstration designed to showcase how Wisteria can improve iterative data analysis and cleaning. The code is available at: http://www.sampleclean.org.", "title": "" }, { "docid": "pos:1840237_1", "text": "String similarity search and join are two important operations in data cleaning and integration, which extend traditional exact search and exact join operations in databases by tolerating the errors and inconsistencies in the data. They have many real-world applications, such as spell checking, duplicate detection, entity resolution, and webpage clustering. Although these two problems have been extensively studied in the recent decade, there is no thorough survey. In this paper, we present a comprehensive survey on string similarity search and join. We first give the problem definitions and introduce widely-used similarity functions to quantify the similarity. We then present an extensive set of algorithms for string similarity search and join. We also discuss their variants, including approximate entity extraction, type-ahead search, and approximate substring matching. Finally, we provide some open datasets and summarize some research challenges and open problems.", "title": "" } ]
[ { "docid": "neg:1840237_0", "text": "Social Networking Sites (SNS), such as Facebook and LinkedIn, have become the established place for keeping contact with old friends and meeting new acquaintances. As a result, a user leaves a big trail of personal information about him and his friends on the SNS, sometimes even without being aware of it. This information can lead to privacy drifts such as damaging his reputation and credibility, security risks (for instance identity theft) and profiling risks. In this paper, we first highlight some privacy issues raised by the growing development of SNS and identify clearly three privacy risks. While it may seem a priori that privacy and SNS are two antagonist concepts, we also identified some privacy criteria that SNS could fulfill in order to be more respectful of the privacy of their users. Finally, we introduce the concept of a Privacy-enhanced Social Networking Site (PSNS) and we describe Privacy Watch, our first implementation of a PSNS.", "title": "" }, { "docid": "neg:1840237_1", "text": "Neural text generation models are often autoregressive language models or seq2seq models. These models generate text by sampling words sequentially, with each word conditioned on the previous word, and are state-of-the-art for several machine translation and summarization benchmarks. These benchmarks are often defined by validation perplexity even though this is not a direct measure of the quality of the generated text. Additionally, these models are typically trained via maximum likelihood and teacher forcing. These methods are well-suited to optimizing perplexity but can result in poor sample quality since generating text requires conditioning on sequences of words that may have never been observed at training time. We propose to improve sample quality using Generative Adversarial Networks (GANs), which explicitly train the generator to produce high quality samples and have shown a lot of success in image generation. GANs were originally designed to output differentiable values, so discrete language generation is challenging for them. We claim that validation perplexity alone is not indicative of the quality of text generated by a model. We introduce an actor-critic conditional GAN that fills in missing text conditioned on the surrounding context. We show qualitatively and quantitatively, evidence that this produces more realistic conditional and unconditional text samples compared to a maximum likelihood trained model.", "title": "" }, { "docid": "neg:1840237_2", "text": "OBJECTIVE\nTo assess adherence to community-based directly observed treatment (DOT) among Tanzanian tuberculosis patients using the Medication Event Monitoring System (MEMS) and to validate alternative adherence measures for resource-limited settings using MEMS as a gold standard.\n\n\nMETHODS\nThis was a longitudinal pilot study of 50 patients recruited consecutively from one rural hospital, one urban hospital and two urban health centres. Treatment adherence was monitored with MEMS and the validity of the following adherence measures was assessed: isoniazid urine test, urine colour test, Morisky scale, Brief Medication Questionnaire, adapted AIDS Clinical Trials Group (ACTG) adherence questionnaire, pill counts and medication refill visits.\n\n\nFINDINGS\nThe mean adherence rate in the study population was 96.3% (standard deviation, SD: 7.7). Adherence was less than 100% in 70% of the patients, less than 95% in 21% of them, and less than 80% in 2%. 
The ACTG adherence questionnaire and urine colour test had the highest sensitivities but lowest specificities. The Morisky scale and refill visits had the highest specificities but lowest sensitivities. Pill counts and refill visits combined, used in routine practice, yielded moderate sensitivity and specificity, but sensitivity improved when the ACTG adherence questionnaire was added.\n\n\nCONCLUSION\nPatients on community-based DOT showed good adherence in this study. The combination of pill counts, refill visits and the ACTG adherence questionnaire could be used to monitor adherence in settings where MEMS is not affordable. The findings with regard to adherence and to the validity of simple adherence measures should be confirmed in larger populations with wider variability in adherence rates.", "title": "" }, { "docid": "neg:1840237_3", "text": "Bluetooth Low Energy (BLE), a low-power wireless protocol, is widely used in industrial automation for monitoring field devices. Although the BLE standard defines advanced security mechanisms, there are known security attacks for BLE and BLE-enabled field devices must be tested thoroughly against these attacks. This article identifies the possible attacks for BLE-enabled field devices relevant for industrial automation. It also presents a framework for defining and executing BLE security attacks and evaluates it on three BLE devices. All tested devices are vulnerable and this confirms that there is a need for better security testing tools as well as for additional defense mechanisms for BLE devices.", "title": "" }, { "docid": "neg:1840237_4", "text": "In a web environment, one of the most evolving application is those with recommendation system (RS). It is a subset of information filtering systems wherein, information about certain products or services or a person are categorized and are recommended for the concerned individual. Most of the authors designed collaborative movie recommendation system by using K-NN and K-means but due to a huge increase in movies and users quantity, the neighbour selection is getting more problematic. We propose a hybrid model based on movie recommender system which utilizes type division method and classified the types of the movie according to users which results reduce computation complexity. K-Means provides initial parameters to particle swarm optimization (PSO) so as to improve its performance. PSO provides initial seed and optimizes fuzzy c-means (FCM), for soft clustering of data items (users), instead of strict clustering behaviour in K-Means. For proposed model, we first adopted type division method to reduce the dense multidimensional data space. We looked up for techniques, which could give better results than K-Means and found FCM as the solution. Genetic algorithm (GA) has the limitation of unguided mutation. Hence, we used PSO. In this article experiment performed on Movielens dataset illustrated that the proposed model may deliver high performance related to veracity, and deliver more predictable and personalized recommendations. When compared to already existing methods and having 0.78 mean absolute error (MAE), our result is 3.503 % better with 0.75 as the MAE, showed that our approach gives improved results.", "title": "" }, { "docid": "neg:1840237_5", "text": "Scaling down to deep submicrometer (DSM) technology has made noise a metric of equal importance as compared to power, speed, and area. 
Smaller feature size, lower supply voltage, and higher frequency are some of the characteristics for DSM circuits that make them more vulnerable to noise. New designs and circuit techniques are required in order to achieve robustness in presence of noise. Novel methodologies for designing energy-efficient noise-tolerant exclusive-OR-exclusive- NOR circuits that can operate at low-supply voltages with good signal integrity and driving capability are proposed. The circuits designed, after applying the proposed methodologies, are characterized and compared with previously published circuits for reliability, speed and energy efficiency. To test the driving capability of the proposed circuits, they are embedded in an existing 5-2 compressor design. The average noise threshold energy (ANTE) is used for quantifying the noise immunity of the proposed circuits. Simulation results show that, compared with the best available circuit in literature, the proposed circuits exhibit better noise-immunity, lower power-delay product (PDP) and good driving capability. All of the proposed circuits prove to be faster and successfully work at all ranges of supply voltage starting from 3.3 V down to 0.6 V. The savings in the PDP range from 94% to 21% for the given supply voltage range respectively and the average improvement in the ANTE is 2.67X.", "title": "" }, { "docid": "neg:1840237_6", "text": "We introduce an effective technique to enhance night-time hazy scenes. Our technique builds on multi-scale fusion approach that use several inputs derived from the original image. Inspired by the dark-channel [1] we estimate night-time haze computing the airlight component on image patch and not on the entire image. We do this since under night-time conditions, the lighting generally arises from multiple artificial sources, and is thus intrinsically non-uniform. Selecting the size of the patches is non-trivial, since small patches are desirable to achieve fine spatial adaptation to the atmospheric light, this might also induce poor light estimates and reduced chance of capturing hazy pixels. For this reason, we deploy multiple patch sizes, each generating one input to a multiscale fusion process. Moreover, to reduce the glowing effect and emphasize the finest details, we derive a third input. For each input, a set of weight maps are derived so as to assign higher weights to regions of high contrast, high saliency and small saturation. Finally the derived inputs and the normalized weight maps are blended in a multi-scale fashion using a Laplacian pyramid decomposition. The experimental results demonstrate the effectiveness of our approach compared with recent techniques both in terms of computational efficiency and quality of the outputs.", "title": "" }, { "docid": "neg:1840237_7", "text": "Subgingival margins are often required for biologic, mechanical, or esthetic reasons. Several investigations have demonstrated that their use is associated with adverse periodontal reactions, such as inflammation or recession. The purpose of this prospective randomized clinical study was to determine if two different subgingival margin designs influence the periodontal parameters and patient perception. Deep chamfer and feather-edge preparations were compared on 58 patients with 6 months follow-up. Statistically significant differences were present for bleeding on probing, gingival recession, and patient satisfaction. 
Feather-edge preparation was associated with increased bleeding on probing and deep chamfer with increased recession; improved patient comfort was registered with chamfer margin design. Subgingival margins are technique sensitive, especially when feather-edge design is selected. This margin design may facilitate soft tissue stability but can expose the patient to an increased risk of gingival inflammation.", "title": "" }, { "docid": "neg:1840237_8", "text": "Companies are increasingly allocating more of their marketing spending to social media programs. Yet there is little research about how social media use is associated with consumer–brand relationships. We conducted three studies to explore how individual and national differences influence the relationship between social media use and customer brand relationships. The first study surveyed customers in France, the U.K. and U.S. and compared those who engage with their favorite brands via social media with those who do not. The findings indicated that social media use was positively related with brand relationship quality and the effect was more pronounced with high anthropomorphism perceptions (the extent to which consumers' associate human characteristics with brands). Two subsequent experiments further validated these findings and confirmed that cultural differences, specifically uncertainty avoidance, moderated these results. We obtained robust and convergent results from survey and experimental data using both student and adult consumer samples and testing across three product categories (athletic shoes, notebook computers, and automobiles). The results offer cross-national support for the proposition that engaging customers via social media is associated with higher consumer–brand relationships and word of mouth communications when consumers anthropomorphize the brand and they avoid uncertainty. © 2015 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "neg:1840237_9", "text": "To compensate for the inherent unreliability of RFID data streams, most RFID middleware systems employ a \"smoothing filter\", a sliding-window aggregate that interpolates for lost readings. In this paper, we propose SMURF, the first declarative, adaptive smoothing filter for RFID data cleaning. SMURF models the unreliability of RFID readings by viewing RFID streams as a statistical sample of tags in the physical world, and exploits techniques grounded in sampling theory to drive its cleaning processes. Through the use of tools such as binomial sampling and π-estimators, SMURF continuously adapts the smoothing window size in a principled manner to provide accurate RFID data to applications.", "title": "" }, { "docid": "neg:1840237_10", "text": "According to the survey done by IBM business consulting services in 2006, global CEOs stated that business model innovation will have a greater impact on operating margin growth, than product or service innovation. We also noticed that some enterprises in China's real estate industry have improved their business models for sustainable competitive advantage and surplus profit in recently years. Based on the case studies of Shenzhen Vanke, as well as literature review, a framework for business model innovation has been developed. The framework provides an integrated means of making sense of new business model. 
These include critical dimensions of new customer value propositions, technological innovation, collaboration of the business infrastructure and the economic feasibility of a new business model.", "title": "" }, { "docid": "neg:1840237_11", "text": "A compact design of a circularly-polarized microstrip antenna in order to achieve dual-band behavior for Radio Frequency Identification (RFID) applications is presented, defected ground structure (DGS) technique is used to miniaturize and get a dual-band antenna, the entire size is 38×40×1.58 mm3. This antenna was designed to cover both ultra-height frequency (740MHz ~ 1GHz) and slow height frequency (2.35 GHz ~ 2.51GHz), return loss <; -10 dB, the 3-dB axial ratio bandwidths are about 110 MHz at the lower band (900 MHz).", "title": "" }, { "docid": "neg:1840237_12", "text": "Variation in self concept and academic achievement particularly among the visually impaired pupils has not been conclusively studied. The purpose of this study was therefore to determine if there were gender differences in self-concept and academic achievement among visually impaired pupils in Kenya .The population of the study was 291 visually impaired pupils. A sample of 262 respondents was drawn from the population by stratified random sampling technique based on their sex (152 males and 110 females). Two instruments were used in this study: Pupils’ self-concept and academic achievement test. Data analysis was done at p≤0.05 level of significance. The t test was used to test the relationship between self-concept and achievement. The data was analyzed using Analysis of Variance (ANOVA) structure. The study established that there were indeed gender differences in self-concept among visually impaired pupils in Kenya. The study therefore recommend that the lower self-concept observed among boys should be enhanced by giving counseling and early intervention to this group of pupils with a view to helping them accept their disability.", "title": "" }, { "docid": "neg:1840237_13", "text": "In recent years, there has been increased interest in topic-focused multi-document summarization. In this task, automatic summaries are produced in response to a specific information request, or topic, stated by the user. The system we have designed to accomplish this task comprises four main components: a generic extractive summarization system, a topic-focusing component, sentence simplification, and lexical expansion of topic words. This paper details each of these components, together with experiments designed to quantify their individual contributions. We include an analysis of our results on two large datasets commonly used to evaluate task-focused summarization, the DUC2005 and DUC2006 datasets, using automatic metrics. Additionally, we include an analysis of our results on the DUC2006 task according to human evaluation metrics. In the human evaluation of system summaries compared to human summaries, i.e., the Pyramid method, our system ranked first out of 22 systems in terms of overall mean Pyramid score; and in the human evaluation of summary responsiveness to the topic, our system ranked third out of 35 systems. 
Disciplines Computer Sciences Comments Vanderwende, L., Suzuki, H., Brockett, C., & Nenkova, A., Beyond SumBasic: Task-Focused Summarization with Sentence Simplification and Lexical Expansion, Information Processing and Management, Special Issue on Summarization Volume 43, Issue 6, 2007, doi: 10.1016/j.ipm.2007.01.023 This conference paper is available at ScholarlyCommons: http://repository.upenn.edu/cis_papers/736", "title": "" }, { "docid": "neg:1840237_14", "text": "A License plate recognition (LPR) system can be divided into the following steps: preprocessing, plate region extraction, plate region thresholding, character segmentation, character recognition and post-processing. For step 2, a combination of color and shape information of plate is used and a satisfactory extraction result is achieved. For step 3, first channel is selected, then threshold is computed and finally the region is thresholded. For step 4, the character is segmented along vertical, horizontal direction and some tentative optimizations are applied. For step 5, minimum Euclidean distance based template matching is used. And for those confusing characters such as '8' & 'B' and '0' & 'D', a special processing is necessary. And for the final step, validity is checked by machine and manual. The experiment performed by program based on aforementioned algorithms indicates that our LPR system based on color image processing is quite quick and accurate.", "title": "" }, { "docid": "neg:1840237_15", "text": "A k-uniform hypergraph is hamiltonian if for some cyclic ordering of its vertex set, every k consecutive vertices form an edge. In 1952 Dirac proved that if the minimum degree in an n-vertex graph is at least n/2 then the graph is hamiltonian. We prove an approximate version of an analogous result for uniform hypergraphs: For every k ≥ 3 and γ > 0, and for all n large enough, a sufficient condition for an n-vertex k-uniform hypergraph to be hamiltonian is that each (k − 1)-element set of vertices is contained in at least (1/2 + γ)n edges. Research supported by NSF grant DMS-0300529. Research supported by KBN grant 2 P03A 015 23 and N201036 32/2546. Part of research performed at Emory University, Atlanta. Research supported by NSF grant DMS-0100784", "title": "" }, { "docid": "neg:1840237_16", "text": "This article describes Apron, a freely available library dedicated to the static analysis of the numerical variables of programs by abstract interpretation. Its goal is threefold: provide analysis implementers with ready-to-use numerical abstractions under a unified API, encourage the research in numerical abstract domains by providing a platform for integration and comparison, and provide teaching and demonstration tools to disseminate knowledge on abstract interpretation.", "title": "" }, { "docid": "neg:1840237_17", "text": "We introduce an online neural sequence to sequence model that learns to alternate between encoding and decoding segments of the input as it is read. By independently tracking the encoding and decoding representations our algorithm permits exact polynomial marginalization of the latent segmentation during training, and during decoding beam search is employed to find the best alignment path together with the predicted output sequence. Our model tackles the bottleneck of vanilla encoder-decoders that have to read and memorize the entire input sequence in their fixedlength hidden states before producing any output. 
It is different from previous attentive models in that, instead of treating the attention weights as output of a deterministic function, our model assigns attention weights to a sequential latent variable which can be marginalized out and permits online generation. Experiments on abstractive sentence summarization and morphological inflection show significant performance gains over the baseline encoder-decoders.", "title": "" }, { "docid": "neg:1840237_18", "text": "This paper presents a simple and efficient method for online signature verification. The technique is based on a feature set comprising of several histograms that can be computed efficiently given a raw data sequence of an online signature. The features which are represented by a fixed-length vector can not be used to reconstruct the original signature, thereby providing privacy to the user's biometric trait in case the stored template is compromised. To test the verification performance of the proposed technique, several experiments were conducted on the well known MCYT-100 and SUSIG datasets including both skilled forgeries and random forgeries. Experimental results demonstrate that the performance of the proposed technique is comparable to state-of-art algorithms despite its simplicity and efficiency.", "title": "" }, { "docid": "neg:1840237_19", "text": "All aspects of human-computer interaction, from the high-level concerns of organizational context and system requirements to the conceptual, semantic, syntactic, and lexical levels of user interface design, are ultimately funneled through physical input and output actions and devices. The fundamental task in computer input is to move information from the brain of the user to the computer. Progress in this discipline attempts to increase the useful bandwidth across that interface by seeking faster, more natural, and more convenient means for a user to transmit information to a computer. This article mentions some of the technical background for this area, surveys the range of input devices currently in use and emerging, and considers future trends in input.", "title": "" } ]
1840238
Semantically-Guided Video Object Segmentation
[ { "docid": "pos:1840238_0", "text": "Both image segmentation and dense 3D modeling from images represent an intrinsically ill-posed problem. Strong regularizers are therefore required to constrain the solutions from being 'too noisy'. Unfortunately, these priors generally yield overly smooth reconstructions and/or segmentations in certain regions whereas they fail in other areas to constrain the solution sufficiently. In this paper we argue that image segmentation and dense 3D reconstruction contribute valuable information to each other's task. As a consequence, we propose a rigorous mathematical framework to formulate and solve a joint segmentation and dense reconstruction problem. Image segmentations provide geometric cues about which surface orientations are more likely to appear at a certain location in space whereas a dense 3D reconstruction yields a suitable regularization for the segmentation problem by lifting the labeling from 2D images to 3D space. We show how appearance-based cues and 3D surface orientation priors can be learned from training data and subsequently used for class-specific regularization. Experimental results on several real data sets highlight the advantages of our joint formulation.", "title": "" }, { "docid": "pos:1840238_1", "text": "In this work, we propose a novel approach to video segmentation that operates in bilateral space. We design a new energy on the vertices of a regularly sampled spatiotemporal bilateral grid, which can be solved efficiently using a standard graph cut label assignment. Using a bilateral formulation, the energy that we minimize implicitly approximates long-range, spatio-temporal connections between pixels while still containing only a small number of variables and only local graph edges. We compare to a number of recent methods, and show that our approach achieves state-of-the-art results on multiple benchmarks in a fraction of the runtime. Furthermore, our method scales linearly with image size, allowing for interactive feedback on real-world high resolution video.", "title": "" }, { "docid": "pos:1840238_2", "text": "We present a novel approach to video segmentation using multiple object proposals. The problem is formulated as a minimization of a novel energy function defined over a fully connected graph of object proposals. Our model combines appearance with long-range point tracks, which is key to ensure robustness with respect to fast motion and occlusions over longer video sequences. As opposed to previous approaches based on object proposals, we do not seek the best per-frame object hypotheses to perform the segmentation. Instead, we combine multiple, potentially imperfect proposals to improve overall segmentation accuracy and ensure robustness to outliers. Overall, the basic algorithm consists of three steps. First, we generate a very large number of object proposals for each video frame using existing techniques. Next, we perform an SVM-based pruning step to retain only high quality proposals with sufficiently discriminative power. Finally, we determine the fore-and background classification by solving for the maximum a posteriori of a fully connected conditional random field, defined using our novel energy function. Experimental results on a well established dataset demonstrate that our method compares favorably to several recent state-of-the-art approaches.", "title": "" } ]
[ { "docid": "neg:1840238_0", "text": "The mobile industry has experienced a dramatic growth; it evolves from analog to digital 2G (GSM), then to high date rate cellular wireless communication such as 3G (WCDMA), and further to packet optimized 3.5G (HSPA) and 4G (LTE and LTE advanced) systems. Today, the main design challenges of mobile phone antenna are the requirements of small size, built-in structure, and multisystems in multibands, including all cellular 2G, 3G, 4G, and other noncellular radio-frequency (RF) bands, and moreover the need for a nice appearance and meeting all standards and requirements such as specific absorption rates (SARs), hearing aid compatibility (HAC), and over the air (OTA). This paper gives an overview of some important antenna designs and progress in mobile phones in the last 15 years, and presents the recent development on new antenna technology for LTE and compact multiple-input-multiple-output (MIMO) terminals.", "title": "" }, { "docid": "neg:1840238_1", "text": "The deleterious effects of plastic debris on the marine environment were reviewed by bringing together most of the literature published so far on the topic. A large number of marine species is known to be harmed and/or killed by plastic debris, which could jeopardize their survival, especially since many are already endangered by other forms of anthropogenic activities. Marine animals are mostly affected through entanglement in and ingestion of plastic litter. Other less known threats include the use of plastic debris by \"invader\" species and the absorption of polychlorinated biphenyls from ingested plastics. Less conspicuous forms, such as plastic pellets and \"scrubbers\" are also hazardous. To address the problem of plastic debris in the oceans is a difficult task, and a variety of approaches are urgently required. Some of the ways to mitigate the problem are discussed.", "title": "" }, { "docid": "neg:1840238_2", "text": "Graphical models are usually learned without regard to the cost of doing inference with them. As a result, even if a good model is learned, it may perform poorly at prediction, because it requires approximate inference. We propose an alternative: learning models with a score function that directly penalizes the cost of inference. Specifically, we learn arithmetic circuits with a penalty on the number of edges in the circuit (in which the cost of inference is linear). Our algorithm is equivalent to learning a Bayesian network with context-specific independence by greedily splitting conditional distributions, at each step scoring the candidates by compiling the resulting network into an arithmetic circuit, and using its size as the penalty. We show how this can be done efficiently, without compiling a circuit from scratch for each candidate. Experiments on several real-world domains show that our algorithm is able to learn tractable models with very large treewidth, and yields more accurate predictions than a standard context-specific Bayesian network learner, in far less time.", "title": "" }, { "docid": "neg:1840238_3", "text": "Understanding topic hierarchies in text streams and their evolution patterns over time is very important in many applications. In this paper, we propose an evolutionary multi-branch tree clustering method for streaming text data. We build evolutionary trees in a Bayesian online filtering framework. 
The tree construction is formulated as an online posterior estimation problem, which considers both the likelihood of the current tree and conditional prior given the previous tree. We also introduce a constraint model to compute the conditional prior of a tree in the multi-branch setting. Experiments on real world news data demonstrate that our algorithm can better incorporate historical tree information and is more efficient and effective than the traditional evolutionary hierarchical clustering algorithm.", "title": "" }, { "docid": "neg:1840238_4", "text": "Snake robots, sometimes called hyper-redundant mechanisms, can use their many degrees of freedom to achieve a variety of locomotive capabilities. These capabilities are ideally suited for disaster response because the snake robot can thread through tightly packed volumes, accessing locations that people and conventional machinery otherwise cannot. Snake robots also have the advantage of possessing a variety of locomotion capabilities that conventional robots do not. Just like their biological counterparts, snake robots achieve these locomotion capabilities using cyclic motions called gaits. These cyclic motions directly control the snake robot’s internal degrees of freedom which, in turn, causes a net motion, say forward, lateral and rotational, for the snake robot. The gaits described in this paper fall into two categories: parameterized and scripted. The parameterized gaits, as their name suggests, can be described by a relative simple parameterized function, whereas the scripted cannot. This paper describes the functions we prescribed for gait generation and our experiences in making these robots operate in real experiments. © Koninklijke Brill NV, Leiden and The Robotics Society of Japan, 2009", "title": "" }, { "docid": "neg:1840238_5", "text": "Purpose – This UK-based research aims to build on the US-based work of Keller and Aaker, which found a significant association between “company credibility” (via a brand’s “expertise” and “trustworthiness”) and brand extension acceptance, hypothesising that brand trust, measured via two correlate dimensions, is significantly related to brand extension acceptance. Design/methodology/approach – Discusses brand extension and various prior, validated influences on its success. Focuses on the construct of trust and develops hypotheses about the relationship of brand trust with brand extension acceptance. The hypotheses are then tested on data collected from consumers in the UK. Findings – This paper, using 368 consumer responses to nine, real, low involvement UK product and service brands, finds support for a significant association between the variables, comparable in strength with that between media weight and brand share, and greater than that delivered by the perceived quality level of the parent brand. Originality/value – The research findings, which develop a sparse literature in this linkage area, are of significance to marketing practitioners, since brand trust, already associated with brand equity and brand loyalty, and now with brand extension, needs to be managed and monitored with care. The paper prompts further investigation of the relationship between brand trust and brand extension acceptance in other geographic markets and with other higher involvement categories.", "title": "" }, { "docid": "neg:1840238_6", "text": "We have developed an interactive pop-up book called Electronic Popables to explore paper-based computing. 
Our book integrates traditional pop-up mechanisms with thin, flexible, paper-based electronics and the result is an artifact that looks and functions much like an ordinary pop-up, but has added elements of dynamic interactivity. This paper introduces the book and, through it, a library of paper-based sensors and a suite of paper-electronics construction techniques. We also reflect on the unique and under-explored opportunities that arise from combining material experimentation, artistic design, and engineering.", "title": "" }, { "docid": "neg:1840238_7", "text": "Combined star-delta windings in electrical machines result in a higher fundamental winding factor and cause a smaller spatial harmonic content. This leads to lower I2R losses in the stator and the rotor winding and thus to an increased efficiency. However, compared to an equivalent six-phase winding, additional spatial harmonics are generated due to the different magnetomotive force in the star and delta part of the winding. In this paper, a complete theory and analysis method for the analytical calculation of the efficiency of induction motors equipped with combined star-delta windings is developed. The method takes into account the additional harmonic content due to the different magnetomotive force in the star and delta part. To check the analysis' validity, an experimental test is reported both on a cage induction motor equipped with a combined star-delta winding in the stator and on a reference motor with the same core but with a classical three-phase winding.", "title": "" }, { "docid": "neg:1840238_8", "text": "In this paper, we explore various aspects of fusing LIDAR and color imagery for pedestrian detection in the context of convolutional neural networks (CNNs), which have recently become state-of-art for many vision problems. We incorporate LIDAR by up-sampling the point cloud to a dense depth map and then extracting three features representing different aspects of the 3D scene. We then use those features as extra image channels. Specifically, we leverage recent work on HHA [9] (horizontal disparity, height above ground, and angle) representations, adapting the code to work on up-sampled LIDAR rather than Microsoft Kinect depth maps. We show, for the first time, that such a representation is applicable to up-sampled LIDAR data, despite its sparsity. Since CNNs learn a deep hierarchy of feature representations, we then explore the question: At what level of representation should we fuse this additional information with the original RGB image channels? We use the KITTI pedestrian detection dataset for our exploration. We first replicate the finding that region-CNNs (R-CNNs) [8] can outperform the original proposal mechanism using only RGB images, but only if fine-tuning is employed. Then, we show that: 1) using HHA features and RGB images performs better than RGB-only, even without any fine-tuning using large RGB web data, 2) fusing RGB and HHA achieves the strongest results if done late, but, under a parameter or computational budget, is best done at the early to middle layers of the hierarchical representation, which tend to represent midlevel features rather than low (e.g. edges) or high (e.g. 
object class decision) level features, 3) some of the less successful methods have the most parameters, indicating that increased classification accuracy is not simply a function of increased capacity in the neural network.", "title": "" }, { "docid": "neg:1840238_9", "text": "Online marketing is one of the best practices used to establish a brand and to increase its popularity. Advertisements are used in a better way to showcase the company’s product/service and give rise to a worthy online marketing strategy. Posting an advertisement on utilitarian web pages helps to maximize brand reach and get a better feedback. Now-a-days companies are cautious of their brand image on the Internet due to the growing number of Internet users. Since there are billions of Web sites on the Internet, it becomes difficult for companies to really decide where to advertise on the Internet for brand popularity. What if, the company advertise on a page which is visited by less number of the interested (for a particular type of product) users instead of a web page which is visited by more number of the interested users?—this doubt and uncertainty—is a core issue faced by many companies. This research paper presents a Brand analysis framework and suggests some experimental practices to ensure efficiency of the proposed framework. This framework is divided into three components—(1) Web site network formation framework—a framework that forms a Web site network of a specific search query obtained from resultant web pages of three search engines-Google, Yahoo & Bing and their associated web pages; (2) content scraping framework—it crawls the content of web pages existing in the framework-formed Web site network; (3) rank assignment of networked web pages—text edge processing algorithm has been used to find out terms of interest and their occurrence associated with search query. We have further applied sentiment analysis to validate positive or negative impact of the sentences, having the search term and its associated terms (with reference to the search query) to identify impact of web page. Later, on the basis of both—text edge analysis and sentiment analysis results, we assigned a rank to networked web pages and online social network pages. In this research work, we present experiments for ‘Motorola smart phone,’ ‘LG smart phone’ and ‘Samsung smart phone’ as search query and sampled the Web site network of top 20 search results of all three search engines and examined up to 60 search results for each search engine. This work is useful to target the right online location for specific brand marketing. Once the brand knows the web pages/social media pages containing high brand affinity and ensures that the content of high affinity web page/social media page has a positive impact, we advertise at that respective online location. Thus, targeted brand analysis framework for online marketing not only has benefits for the advertisement agencies but also for the customers.", "title": "" }, { "docid": "neg:1840238_10", "text": "5-Lipoxygenase (5-LO) plays a pivotal role in the progression of atherosclerosis. Therefore, this study investigated the molecular mechanisms involved in 5-LO expression on monocytes induced by LPS. Stimulation of THP-1 monocytes with LPS (0~3 µg/ml) increased 5-LO promoter activity and 5-LO protein expression in a concentration-dependent manner. 
LPS-induced 5-LO expression was blocked by pharmacological inhibition of the Akt pathway, but not by inhibitors of MAPK pathways including the ERK, JNK, and p38 MAPK pathways. In line with these results, LPS increased the phosphorylation of Akt, suggesting a role for the Akt pathway in LPS-induced 5-LO expression. In a promoter activity assay conducted to identify transcription factors, both Sp1 and NF-κB were found to play central roles in 5-LO expression in LPS-treated monocytes. The LPS-enhanced activities of Sp1 and NF-κB were attenuated by an Akt inhibitor. Moreover, the LPS-enhanced phosphorylation of Akt was significantly attenuated in cells pretreated with an anti-TLR4 antibody. Taken together, 5-LO expression in LPS-stimulated monocytes is regulated at the transcriptional level via TLR4/Akt-mediated activations of Sp1 and NF-κB pathways in monocytes.", "title": "" }, { "docid": "neg:1840238_11", "text": "BACKGROUND\nBright light therapy was shown to be a promising treatment for depression during pregnancy in a recent open-label study. In an extension of this work, we report findings from a double-blind placebo-controlled pilot study.\n\n\nMETHOD\nTen pregnant women with DSM-IV major depressive disorder were randomly assigned from April 2000 to January 2002 to a 5-week clinical trial with either a 7000 lux (active) or 500 lux (placebo) light box. At the end of the randomized controlled trial, subjects had the option of continuing in a 5-week extension phase. The Structured Interview Guide for the Hamilton Depression Scale-Seasonal Affective Disorder Version was administered to assess changes in clinical status. Salivary melatonin was used to index circadian rhythm phase for comparison with antidepressant results.\n\n\nRESULTS\nAlthough there was a small mean group advantage of active treatment throughout the randomized controlled trial, it was not statistically significant. However, in the longer 10-week trial, the presence of active versus placebo light produced a clear treatment effect (p =.001) with an effect size (0.43) similar to that seen in antidepressant drug trials. Successful treatment with bright light was associated with phase advances of the melatonin rhythm.\n\n\nCONCLUSION\nThese findings provide additional evidence for an active effect of bright light therapy for antepartum depression and underscore the need for an expanded randomized clinical trial.", "title": "" }, { "docid": "neg:1840238_12", "text": "We developed an optimizing compiler for intrusion detection rules popularized by an open-source Snort Network Intrusion Detection System (www.snort.org). While Snort and Snort-like rules are usually thought of as a list of independent patterns to be tested in a sequential order, we demonstrate that common compilation techniques are directly applicable to Snort rule sets and are able to produce high-performance matching engines. SNORTRAN combines several compilation techniques, including cost-optimized decision trees, pattern matching precompilation, and string set clustering. Although all these techniques have been used before in other domain-specific languages, we believe their synthesis in SNORTRAN is original and unique. Introduction Snort [RO99] is a popular open-source Network Intrusion Detection System (NIDS). Snort is controlled by a set of pattern/action rules residing in a configuration file of a specific format. Due to Snort’s popularity, Snort-like rules are accepted by several other NIDS [FSTM, HANK]. 
In this paper we describe an optimizing compiler for Snort rule sets called SNORTRAN that incorporates ideas of pattern matching compilation based on cost-optimized decision trees [DKP92, KS88] with setwise string search algorithms popularized by recent research in highperformance NIDS detection engines [FV01, CC01, GJMP]. The two main design goals were performance and compatibility with the original Snort rule interpreter. The primary application area for NIDS is monitoring IP traffic inside and outside of firewalls, looking for unusual activities that can be attributed to external attacks or internal misuse. Most NIDS are designed to handle T1/partial T3 traffic, but as the number of the known vulnerabilities grows and more and more weight is given to internal misuse monitoring on high-throughput networks (100Mbps/1Gbps), it gets harder to keep up with the traffic without dropping too many packets to make detection ineffective. Throwing hardware at the problem is not always possible because of growing maintenance and support costs, let alone the fact that the problem of making multi-unit system work in realistic environment is as hard as the original performance problem. Bottlenecks of the detection process were identified by many researchers and practitioners [FV01, ND02, GJMP], and several approaches were proposed [FV01, CC01]. Our benchmarking supported the performance analysis made by M. Fisk and G. Varghese [FV01], adding some interesting findings on worst-case behavior of setwise string search algorithms in practice. Traditionally, NIDS are designed around a packet grabber (system-specific or libcap) getting traffic packets off the wire, combined with preprocessors, packet decoders, and a detection engine looking for a static set of signatures loaded from a rule file at system startup. Snort [SNORT] and", "title": "" }, { "docid": "neg:1840238_13", "text": "A low-voltage-swing MOSFET gate drive technique is proposed in this paper for enhancing the efficiency characteristics of high-frequency-switching dc-dc converters. The parasitic power dissipation of a dc-dc converter is reduced by lowering the voltage swing of the power transistor gate drivers. A comprehensive circuit model of the parasitic impedances of a monolithic buck converter is presented. Closed-form expressions for the total power dissipation of a low-swing buck converter are proposed. The effect of reducing the MOSFET gate voltage swings is explored with the proposed circuit model. A range of design parameters is evaluated, permitting the development of a design space for full integration of active and passive devices of a low-swing buck converter on the same die, for a target CMOS technology. The optimum gate voltage swing of a power MOSFET that maximizes efficiency is lower than a standard full voltage swing. An efficiency of 88% at a switching frequency of 102 MHz is achieved for a voltage conversion from 1.8 to 0.9 V with a low-swing dc-dc converter based on a 0.18-/spl mu/m CMOS technology. The power dissipation of a low-swing dc-dc converter is reduced by 27.9% as compared to a standard full-swing dc-dc converter.", "title": "" }, { "docid": "neg:1840238_14", "text": "OBJECTIVE\nTo improve walking and other aspects of physical function with a progressive 6-month exercise program in patients with multiple sclerosis (MS).\n\n\nMETHODS\nMS patients with mild to moderate disability (Expanded Disability Status Scale scores 1.0 to 5.5) were randomly assigned to an exercise or control group. 
The intervention consisted of strength and aerobic training initiated during 3-week inpatient rehabilitation and continued for 23 weeks at home. The groups were evaluated at baseline and at 6 months. The primary outcome was walking speed, measured by 7.62 m and 500 m walk tests. Secondary outcomes included lower extremity strength, upper extremity endurance and dexterity, peak oxygen uptake, and static balance. An intention-to-treat analysis was used.\n\n\nRESULTS\nNinety-one (96%) of the 95 patients entering the study completed it. Change between groups was significant in the 7.62 m (p = 0.04) and 500 m walk tests (p = 0.01). In the 7.62 m walk test, 22% of the exercising patients showed clinically meaningful improvements. The exercise group also showed increased upper extremity endurance as compared to controls. No other noteworthy exercise-induced changes were observed. Exercise adherence varied considerably among the exercisers.\n\n\nCONCLUSIONS\nWalking speed improved in this randomized study. The results confirm that exercise is safe for multiple sclerosis patients and should be recommended for those with mild to moderate disability.", "title": "" }, { "docid": "neg:1840238_15", "text": "This paper describes modelling and testing of a digital distance relay for transmission line protection using MATLAB/SIMULINK. SIMULINK’s Power System Blockset (PSB) is used for detailed modelling of a power system network and fault simulation. MATLAB is used to implement programs of digital distance relaying algorithms and to serve as main software environment. The technique is an interactive simulation environment for relaying algorithm design and evaluation. The basic principles of a digital distance relay and some related filtering techniques are also described in this paper. A 345 kV, 100 km transmission line and a MHO type distance relay are selected as examples for fault simulation and relay testing. Some simulation results are given.", "title": "" }, { "docid": "neg:1840238_16", "text": "AIM\nBefore an attempt is made to develop any population-specific behavioural change programme, it is important to know what the factors that influence behaviours are. The aim of this study was to identify what are the perceived determinants that attribute to young people's choices to both consume and misuse alcohol.\n\n\nMETHOD\nUsing a descriptive survey design, a web-based questionnaire based on the Theory of Triadic Influence was administered to students aged 18-29 years at one university in Northern Ireland.\n\n\nRESULTS\nOut of the total respondents ( n = 595), knowledge scores on alcohol consumption and the health risks associated with heavy episodic drinking were high (92.4%, n = 550). Over half (54.1%, n = 322) cited the Internet as their main source for alcohol-related information. The three most perceived influential factors of inclination to misuse alcohol were strains/conflict within the family home ( M = 2.98, standard deviation ( SD) = 0.18, 98.7%, n = 587), risk taking/curiosity behaviour ( M = 2.97, SD = 0.27, 97.3%, n = 579) and the desire not to be socially alienated ( M = 2.94, SD = 0.33, 96%, n = 571). Females were statistically significantly more likely to be influenced by desire not to be socially alienated than males (  p = .029). 
Religion and personal reasons were the most commonly cited reasons for not drinking.\n\n\nCONCLUSION\nFuture initiatives to reduce alcohol misuse and alcohol-related harms need to focus on changing social normative beliefs and attitudes around alcohol consumption and the family and environmental factors that influence the choice of young adult's alcohol drinking behaviour. Investment in multi-component interventions may be a useful approach.", "title": "" }, { "docid": "neg:1840238_17", "text": "While drug toxicity (especially hepatotoxicity) is the most frequent reason cited for withdrawal of an approved drug, no simple solution exists to adequately predict such adverse events. Simple cytotoxicity assays in HepG2 cells are relatively insensitive to human hepatotoxic drugs in a retrospective analysis of marketed pharmaceuticals. In comparison, a panel of pre-lethal mechanistic cellular assays hold the promise to deliver a more sensitive approach to detect endpoint-specific drug toxicities. The panel of assays covered by this review includes steatosis, cholestasis, phospholipidosis, reactive intermediates, mitochondria membrane function, oxidative stress, and drug interactions. In addition, the use of metabolically competent cells or the introduction of major human hepatocytes in these in vitro studies allow a more complete picture of potential drug side effect. Since inter-individual therapeutic index (TI) may differ from patient to patient, the rational use of one or more of these cellular assay and targeted in vivo exposure data may allow pharmaceutical scientists to select drug candidates with a higher TI potential in the drug discovery phase.", "title": "" }, { "docid": "neg:1840238_18", "text": "The paradigm of simulated annealing is applied to the problem of drawing graphs “nicely.” Our algorithm deals with general undirected graphs with straight-line edges, and employs several simple criteria for the aesthetic quality of the result. The algorithm is flexible, in that the relative weights of the criteria can be changed. For graphs of modest size it produces good results, competitive with those produced by other methods, notably, the “spring method” and its variants.", "title": "" }, { "docid": "neg:1840238_19", "text": "Model reduction of the Markov process is a basic problem in modeling statetransition systems. Motivated by the state aggregation approach rooted in control theory, we study the statistical state compression of a finite-state Markov chain from empirical trajectories. Through the lens of spectral decomposition, we study the rank and features of Markov processes, as well as properties like representability, aggregatability and lumpability. We develop a class of spectral state compression methods for three tasks: (1) estimate the transition matrix of a low-rank Markov model, (2) estimate the leading subspace spanned by Markov features, and (3) recover latent structures of the state space like state aggregation and lumpable partition. The proposed methods provide an unsupervised learning framework for identifying Markov features and clustering states. We provide upper bounds for the estimation errors and nearly matching minimax lower bounds. Numerical studies are performed on synthetic data and a dataset of New York City taxi trips. 
", "title": "" } ]
1840239
Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning
[ { "docid": "pos:1840239_0", "text": "Image patch classification is an important task in many different medical imaging applications. In this work, we have designed a customized Convolutional Neural Networks (CNN) with shallow convolution layer to classify lung image patches with interstitial lung disease (ILD). While many feature descriptors have been proposed over the past years, they can be quite complicated and domain-specific. Our customized CNN framework can, on the other hand, automatically and efficiently learn the intrinsic image features from lung image patches that are most suitable for the classification purpose. The same architecture can be generalized to perform other medical image or texture classification tasks.", "title": "" }, { "docid": "pos:1840239_1", "text": "Convolutional neural networks (CNNs) have emerged as the most powerful technique for a range of different tasks in computer vision. Recent work suggested that CNN features are generic and can be used for classification tasks outside the exact domain for which the networks were trained. In this work we use the features from one such network, OverFeat, trained for object detection in natural images, for nodule detection in computed tomography scans. We use 865 scans from the publicly available LIDC data set, read by four thoracic radiologists. Nodule candidates are generated by a state-of-the-art nodule detection system. We extract 2D sagittal, coronal and axial patches for each nodule candidate and extract 4096 features from the penultimate layer of OverFeat and classify these with linear support vector machines. We show for various configurations that the off-the-shelf CNN features perform surprisingly well, but not as good as the dedicated detection system. When both approaches are combined, significantly better results are obtained than either approach alone. We conclude that CNN features have great potential to be used for detection tasks in volumetric medical data.", "title": "" } ]
[ { "docid": "neg:1840239_0", "text": "Ranking microblogs, such as tweets, as search results for a query is challenging, among other things because of the sheer amount of microblogs that are being generated in real time, as well as the short length of each individual microblog. In this paper, we describe several new strategies for ranking microblogs in a real-time search engine. Evaluating these ranking strategies is non-trivial due to the lack of a publicly available ground truth validation dataset. We have therefore developed a framework to obtain such validation data, as well as evaluation measures to assess the accuracy of the proposed ranking strategies. Our experiments demonstrate that it is beneficial for microblog search engines to take into account social network properties of the authors of microblogs in addition to properties of the microblog itself.", "title": "" }, { "docid": "neg:1840239_1", "text": "A new CMOS dynamic comparator using dual input single output differential amplifier as latch stage suitable for high speed analog-to-digital converters with High Speed, low power dissipation and immune to noise than the previous reported work is proposed. Backto-back inverter in the latch stage is replaced with dual-input single output differential amplifier. This topology completely removes the noise that is present in the input. The structure shows lower power dissipation and higher speed than the conventional comparators. The circuit is simulated with 1V DC supply voltage and 250 MHz clock frequency. The proposed topology is based on two cross coupled differential pairs positive feedback and switchable current sources, has a lower power dissipation, higher speed, less area, and it is shown to be very robust against transistor mismatch, noise immunity. Previous reported comparators are designed and simulated their DC response and Transient response in Cadence® Virtuoso Analog Design Environment using GPDK 90nm technology. Layouts of the proposed comparator have been done in Cadence® Virtuoso Layout XL Design Environment. DRC and LVS has been checked and compared with the corresponding circuits and RC extracted diagram has been generated. After that post layout simulation with 1V supply voltage has been done and compared the speed, power dissipation, Area, delay with the results before layout and the superior features of the proposed comparator are established.", "title": "" }, { "docid": "neg:1840239_2", "text": "The conditional intervention principle is a formal principle that relates patterns of interventions and outcomes to causal structure. It is a central assumption of experimental design and the causal Bayes net formalism. Two studies suggest that preschoolers can use the conditional intervention principle to distinguish causal chains, common cause and interactive causal structures even in the absence of differential spatiotemporal cues and specific mechanism knowledge. Children were also able to use knowledge of causal structure to predict the patterns of evidence that would result from interventions. A third study suggests that children's spontaneous play can generate evidence that would support such accurate causal learning.", "title": "" }, { "docid": "neg:1840239_3", "text": "We present a new mechanical design for a 3-DOF haptic device with spherical kinematics (pitch, yaw, and prismatic radial). All motors are grounded in the base to decrease inertia and increase compactness near the user's hand. 
An aluminum-aluminum friction differential allows for actuation of pitch and yaw with mechanical robustness while allowing a cable transmission to route through its center. This novel cabling system provides simple, compact, and high-performance actuation of the radial DOF independent of motions in pitch and yaw. We show that the device's capabilities are suitable for general haptic rendering, as well as specialized applications of spherical kinematics such as laparoscopic surgery simulation.", "title": "" }, { "docid": "neg:1840239_4", "text": "It is well-known that deep models could extract robust and abstract features. We propose a efficient facial expression recognition model based on transfer features from deep convolutional networks (ConvNets). We train the deep ConvNets through the task of 1580-class face identification on the MSRA-CFW database and transfer high-level features from the trained deep model to recognize expression. To train and test the facial expression recognition model on a large scope, we built a facial expression database of seven basic emotion states and 2062 imbalanced samples depending on four facial expression databases (CK+, JAFFE, KDEF, Pain expressions form PICS). Compared with 50.65% recognition rate based on Gabor features with the seven-class SVM and 78.84% recognition rate based on distance features with the seven-class SVM, we achieve average 80.49% recognition rate with the seven-class SVM classifier on the self-built facial expression database. Considering occluded face in reality, we test our model in the occluded condition and demonstrate the model could keep its ability of classification in the small occlusion case. To increase the ability further, we improve the facial expression recognition model. The modified model merges high-level features transferred from two trained deep ConvNets of the same structure and the different training sets. The modified model obviously improves its ability of classification in the occluded condition and achieves average 81.50% accuracy on the self-built facial expression database.", "title": "" }, { "docid": "neg:1840239_5", "text": "Pooling plays an important role in generating a discriminative video representation. In this paper, we propose a new semantic pooling approach for challenging event analysis tasks (e.g., event detection, recognition, and recounting) in long untrimmed Internet videos, especially when only a few shots/segments are relevant to the event of interest while many other shots are irrelevant or even misleading. The commonly adopted pooling strategies aggregate the shots indifferently in one way or another, resulting in a great loss of information. Instead, in this work we first define a novel notion of semantic saliency that assesses the relevance of each shot with the event of interest. We then prioritize the shots according to their saliency scores since shots that are semantically more salient are expected to contribute more to the final event analysis. Next, we propose a new isotonic regularizer that is able to exploit the constructed semantic ordering information. The resulting nearly-isotonic support vector machine classifier exhibits higher discriminative power in event analysis tasks. Computationally, we develop an efficient implementation using the proximal gradient algorithm, and we prove new and closed-form proximal steps. 
We conduct extensive experiments on three real-world video datasets and achieve promising improvements.", "title": "" }, { "docid": "neg:1840239_6", "text": "Internet of things (IoT) or Web of Things (WoT) is a wireless network between smart products or smart things connected to the internet. It is a new and fast developing market which not only connects objects and people but also billions of gadgets and smart devices. With the rapid growth of IoT, there is also a steady increase in security vulnerabilities of the linked objects. For example, a car manufacturer may want to link the systems within a car to smart home network networks to increase sales, but if all the various people involved do not embrace security the system will be exposed to security risks. As a result, there are several new published protocols of IoT, which focus on protecting critical data. However, these protocols face challenges and in this paper, numerous solutions are provided to overcome these problems. The widely used protocols such as, 802.15.4, 6LoWPAN, and RPL are the resenting of the IoT layers PHY/MAC, Adoption and Network. While CoAP (Constrained Application Protocol) is the application layer protocol designed as replication of the HTTP to serve the small devices coming under class 1 and 2. Many implementations of CoAP has been accomplished which indicates it's crucial amd upcoming role in the future of IoT applications. This research article explored the security of CoAP over DTLS incurring many issues and proposed solutions as well as open challenges for future research.", "title": "" }, { "docid": "neg:1840239_7", "text": "The implementation of 3D stereo matching in real time is an important problem for many vision applications and algorithms. The current work, extending previous results by the same authors, presents in detail an architecture which combines the methods of Absolute Differences, Census, and Belief Propagation in an integrated architecture suitable for implementation with Field Programmable Gate Array (FPGA) logic. Emphasis on the present work is placed on the justification of dimensioning the system, as well as detailed design and testing information for a fully placed and routed design to process 87 frames per sec (fps) in 1920 × 1200 resolution, and a fully implemented design for 400 × 320 which runs up to 1570 fps.", "title": "" }, { "docid": "neg:1840239_8", "text": "This paper shows a LTE antenna design with metal-bezel structure for mobile phone application. For this antenna design, it overcomes the limitation of metal industrial design to achieve LTE all band operation by using two of metal-bezel as antenna radiators (radiator1 and 2). The radiator1 uses a tunable element at feed point and connects an inductor at antenna ground terminal to generate three resonate modes at 700∼960 MHz and 1710–2170 MHz. The radiator2 includes a part of metal-bezel and an extended FPC, which depends on a capacitive coupling effect between metal-bezel to provide the high band resonate mode at 2300–2700 MHz. Using this mechanism, the mobile phone can easily realize wideband LTE antenna design and has an attractive industrial design with metal-bezel structure.", "title": "" }, { "docid": "neg:1840239_9", "text": "This analysis focuses on several masculinities conceptualized by Holden Caulfield within the sociohistorical context of the 1950s. 
When it is understood by heterosexual men that to be masculine means to be emotionally detached and viewing women as sexual objects, they help propagate the belief that expressions of femininity and non-hegemonic masculinities are abnormal behavior and something to be demonized and punished. I propose that Holden’s “craziness” is the result of there being no positive images of heteronormative masculinity and no alternative to the rigid and strictly enforced roles offered to boys as they enter manhood. I suggest that a paradigm shift is needed that starts by collectively recognizing our current forms of patriarchy as harmful to everyone, followed by a reevaluation of how gender is prescribed to youth.", "title": "" }, { "docid": "neg:1840239_10", "text": "Image Synthesis for Self-Supervised Visual Representation Learning", "title": "" }, { "docid": "neg:1840239_11", "text": "s on human factors in computing systems (pp. 815–828). ACM New York, NY, USA. Hudlicka, E. (1997). Summary of knowledge elicitation techniques for requirements analysis (Course material for human computer interaction). Worcester Polytechnic Institute. Kaptelinin, V., & Nardi, B. (2012). Affordances in HCI: Toward a mediated action perspective. In Proceedings of CHI '12 (pp. 967–976).", "title": "" }, { "docid": "neg:1840239_12", "text": "You don't have to deal with ADCs or DACs for long before running across this often quoted formula for the theoretical signal-to-noise ratio (SNR) of a converter. Rather than blindly accepting it on face value, a fundamental knowledge of its origin is important, because the formula encompasses some subtleties which if not understood can lead to significant misinterpretation of both data sheet specifications and converter performance. Remember that this formula represents the theoretical performance of a perfect N-bit ADC. You can compare the actual ADC SNR with the theoretical SNR and get an idea of how the ADC stacks up.", "title": "" }, { "docid": "neg:1840239_13", "text": "BACKGROUND\nEmerging research from psychology and the bio-behavioral sciences recognizes the importance of supporting patients to mobilize their personal strengths to live well with chronic illness. Positive technology and positive computing could be used as underlying design approaches to guide design and development of new technology-based interventions for this user group that support mobilizing their personal strengths.\n\n\nOBJECTIVE\nA codesigning workshop was organized with the aim to explore user requirements and ideas for how technology can be used to help people with chronic illness activate their personal strengths in managing their everyday challenges.\n\n\nMETHODS\nThirty-five participants from diverse backgrounds (patients, health care providers, designers, software developers, and researchers) participated. The workshop combined principles of (1) participatory and service design to enable meaningful participation and collaboration of different stakeholders and (2) an appreciative inquiry methodology to shift participants' attention to positive traits, values, and aspects that are meaningful and life-giving and stimulate participants' creativity, engagement, and collaboration. Utilizing these principles, participants were engaged in group activities to develop ideas for strengths-supportive tools. Each group consisted of 3-8 participants with different backgrounds. 
All group work was analysed using thematic analyses.\n\n\nRESULTS\nParticipants were highly engaged in all activities and reported a wide variety of requirements and ideas, including more than 150 personal strength examples, more than 100 everyday challenges that could be addressed by using personal strengths, and a wide range of functionality requirements (eg, social support, strength awareness and reflection, and coping strategies). 6 concepts for strength-supportive tools were created. These included the following: a mobile app to support a person to store, reflect on, and mobilize one's strengths (Strengths treasure chest app); \"empathy glasses\" enabling a person to see a situation from another person's perspective (Empathy Simulator); and a mobile app allowing a person to receive supportive messages from close people in a safe user-controlled environment (Cheering squad app). Suggested design elements for making the tools engaging included: metaphors (eg, trees, treasure island), visualization techniques (eg, dashboards, color coding), and multimedia (eg, graphics). Maintaining a positive focus throughout the tool was an important requirement, especially for feedback and framing of content.\n\n\nCONCLUSIONS\nCombining participatory, service design, and appreciative inquiry methods were highly useful to engage participants in creating innovative ideas. Building on peoples' core values and positive experiences empowered the participants to expand their horizons from addressing problems and symptoms, which is a very common approach in health care today, to focusing on their capacities and that which is possible, despite their chronic illness. The ideas and user requirements, combined with insights from relevant theories (eg, positive technology, self-management) and evidence from the related literature, are critical to guide the development of future more personalized and strengths-focused self-management tools.", "title": "" }, { "docid": "neg:1840239_14", "text": "Microplastics, plastics particles <5 mm in length, are a widespread pollutant of the marine environment. Oral ingestion of microplastics has been reported for a wide range of marine biota, but uptake into the body by other routes has received less attention. Here, we test the hypothesis that the shore crab (Carcinus maenas) can take up microplastics through inspiration across the gills as well as ingestion of pre-exposed food (common mussel Mytilus edulis). We used fluorescently labeled polystyrene microspheres (8-10 μm) to show that ingested microspheres were retained within the body tissues of the crabs for up to 14 days following ingestion and up to 21 days following inspiration across the gill, with uptake significantly higher into the posterior versus anterior gills. Multiphoton imaging suggested that most microspheres were retained in the foregut after dietary exposure due to adherence to the hairlike setae and were found on the external surface of gills following aqueous exposure. Results were used to construct a simple conceptual model of particle flow for the gills and the gut. These results identify ventilation as a route of uptake of microplastics into a common marine nonfilter feeding species.", "title": "" }, { "docid": "neg:1840239_15", "text": "We describe and present a new Question Answering (QA) component that can be easily used by the QA research community. It can be used to answer questions over DBpedia and Wikidata. 
The language support over DBpedia is restricted to English, while it can be used to answer questions in 4 different languages over Wikidata, namely English, French, German and Italian. Moreover it supports both full natural language queries and keyword queries. We describe the interfaces to access and reuse it and the services it can be combined with. Moreover we show the evaluation results we achieved on the QALD-7 benchmark.", "title": "" }, { "docid": "neg:1840239_16", "text": "A model based on strikingly different philosophical assumptions from those currently popular is proposed for the design of online subject catalog access. Three design principles are presented and discussed: uncertainty (subject indexing is indeterminate and probabilistic beyond a certain point), variety (by Ashby's law of requisite variety, variety of searcher query must equal variety of document indexing), and complexity (the search process, particularly during the entry and orientation phases, is subtler and more complex, on several grounds, than current models assume). Design features presented are an access phase, including entry and orientation, a hunting phase, and a selection phase. An end-user thesaurus and a front-end system mind are presented as examples of online catalog system components to improve searcher success during entry and orientation. The proposed model is \"wrapped around\" existing Library of Congress subject-heading indexing in such a way as to enhance access greatly without requiring reindexing. It is argued that both for cost reasons and in principle this is a superior approach to other design philosophies.", "title": "" }, { "docid": "neg:1840239_17", "text": "The breaking of solid objects, like glass or pottery, poses a complex problem for computer animation. We present our methods of using physical simulation to drive the animation of breaking objects. Breakage is obtained in a three-dimensional flexible model as the limit of elastic behavior. This article describes three principal features of the model: a breakage model, a collision-detection/response scheme, and a geometric modeling method. We use networks of point masses connected by springs to represent physical objects that can bend and break. We present efficient collision-detection algorithms, appropriate for simulating the collisions between the various pieces that interact in breakage. The capability of modeling real objects is provided by a technique of building up composite structures from simple lattice models. We applied these methods to animate the breaking of a teapot and other dishware activities in the animation Tipsy Turvy shown at Siggraph '89. Animation techniques that rely on physical simulation to control the motion of objects are discussed, and further topics for research are presented.", "title": "" }, { "docid": "neg:1840239_18", "text": "This document describes Tree Kernel-SVM based methods for identifying sentences that could be improved in scientific text. This has the goal of contributing to the body of knowledge that attempts to build assistive tools to aid scientists in improving the quality of their writing. Our methods consist of a combination of the output from multiple support vector machines which use Tree Kernel computations. Therefore, features for individual sentences are trees that reflect their grammatical structure.
For the AESW 2016 Shared Task we built systems that provide probabilistic and binary outputs by using these models for trees comparisons.", "title": "" }, { "docid": "neg:1840239_19", "text": "An essential aspect of knowing language is knowing the words of that language. This knowledge is usually thought to reside in the mental lexicon, a kind of dictionary that contains information regarding a word's meaning, pronunciation, syntactic characteristics, and so on. In this article, a very different view is presented. In this view, words are understood as stimuli that operate directly on mental states. The phonological, syntactic and semantic properties of a word are revealed by the effects it has on those states.", "title": "" } ]
1840240
Application of stochastic recurrent reinforcement learning to index trading
[ { "docid": "pos:1840240_0", "text": "This paper introduces adaptive reinforcement learning (ARL) as the basis for a fully automated trading system application. The system is designed to trade foreign exchange (FX) markets and relies on a layered structure consisting of a machine learning algorithm, a risk management overlay and a dynamic utility optimization layer. An existing machine-learning method called recurrent reinforcement learning (RRL) was chosen as the underlying algorithm for ARL. One of the strengths of our approach is that the dynamic optimization layer makes a fixed choice of model tuning parameters unnecessary. It also allows for a risk-return trade-off to be made by the user within the system. The trading system is able to make consistent gains out-of-sample while avoiding large draw-downs. q 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "pos:1840240_1", "text": "This study investigates high frequency currency trading with neural networks trained via Recurrent Reinforcement Learning (RRL). We compare the performance of single layer networks with networks having a hidden layer, and examine the impact of the fixed system parameters on performance. In general, we conclude that the trading systems may be effective, but the performance varies widely for different currency markets and this variability cannot be explained by simple statistics of the markets. Also we find that the single layer network outperforms the two layer network in this application.", "title": "" } ]
[ { "docid": "neg:1840240_0", "text": "Lymphedema is a chronic, progressive condition caused by an imbalance of lymphatic flow. Upper extremity lymphedema has been reported in 16-40% of breast cancer patients following axillary lymph node dissection. Furthermore, lymphedema following sentinel lymph node biopsy alone has been reported in 3.5% of patients. While the disease process is not new, there has been significant progress in the surgical care of lymphedema that can offer alternatives and improvements in management. The purpose of this review is to provide a comprehensive update and overview of the current advances and surgical treatment options for upper extremity lymphedema.", "title": "" }, { "docid": "neg:1840240_1", "text": "Memristive synapses, the most promising passive devices for synaptic interconnections in artificial neural networks, are the driving force behind recent research on hardware neural networks. Despite significant efforts to utilize memristive synapses, progress to date has only shown the possibility of building a neural network system that can classify simple image patterns. In this article, we report a high-density cross-point memristive synapse array with improved synaptic characteristics. The proposed PCMO-based memristive synapse exhibits the necessary gradual and symmetrical conductance changes, and has been successfully adapted to a neural network system. The system learns, and later recognizes, the human thought pattern corresponding to three vowels, i.e. /a /, /i /, and /u/, using electroencephalography signals generated while a subject imagines speaking vowels. Our successful demonstration of a neural network system for EEG pattern recognition is likely to intrigue many researchers and stimulate a new research direction.", "title": "" }, { "docid": "neg:1840240_2", "text": "This paper proposes a joint multi-task learning algorithm to better predict attributes in images using deep convolutional neural networks (CNN). We consider learning binary semantic attributes through a multi-task CNN model, where each CNN will predict one binary attribute. The multi-task learning allows CNN models to simultaneously share visual knowledge among different attribute categories. Each CNN will generate attribute-specific feature representations, and then we apply multi-task learning on the features to predict their attributes. In our multi-task framework, we propose a method to decompose the overall model's parameters into a latent task matrix and combination matrix. Furthermore, under-sampled classifiers can leverage shared statistics from other classifiers to improve their performance. Natural grouping of attributes is applied such that attributes in the same group are encouraged to share more knowledge. Meanwhile, attributes in different groups will generally compete with each other, and consequently share less knowledge. We show the effectiveness of our method on two popular attribute datasets.", "title": "" }, { "docid": "neg:1840240_3", "text": "Knowledge graphs are large graph-structured databases of facts, which typically suffer from incompleteness. Link prediction is the task of inferring missing relations (links) between entities (nodes) in a knowledge graph. We approach this task using a hypernetwork architecture to generate convolutional layer filters specific to each relation and apply those filters to the subject entity embeddings. This architecture enables a trade-off between non-linear expressiveness and the number of parameters to learn. 
Our model simplifies the entity and relation embedding interactions introduced by the predecessor convolutional model, while outperforming all previous approaches to link prediction across all standard link prediction datasets.", "title": "" }, { "docid": "neg:1840240_4", "text": "Effective and accurate diagnosis of Alzheimer's disease (AD), as well as its prodromal stage (i.e., mild cognitive impairment (MCI)), has attracted more and more attention recently. So far, multiple biomarkers have been shown to be sensitive to the diagnosis of AD and MCI, i.e., structural MR imaging (MRI) for brain atrophy measurement, functional imaging (e.g., FDG-PET) for hypometabolism quantification, and cerebrospinal fluid (CSF) for quantification of specific proteins. However, most existing research focuses on only a single modality of biomarkers for diagnosis of AD and MCI, although recent studies have shown that different biomarkers may provide complementary information for the diagnosis of AD and MCI. In this paper, we propose to combine three modalities of biomarkers, i.e., MRI, FDG-PET, and CSF biomarkers, to discriminate between AD (or MCI) and healthy controls, using a kernel combination method. Specifically, ADNI baseline MRI, FDG-PET, and CSF data from 51AD patients, 99 MCI patients (including 43 MCI converters who had converted to AD within 18 months and 56 MCI non-converters who had not converted to AD within 18 months), and 52 healthy controls are used for development and validation of our proposed multimodal classification method. In particular, for each MR or FDG-PET image, 93 volumetric features are extracted from the 93 regions of interest (ROIs), automatically labeled by an atlas warping algorithm. For CSF biomarkers, their original values are directly used as features. Then, a linear support vector machine (SVM) is adopted to evaluate the classification accuracy, using a 10-fold cross-validation. As a result, for classifying AD from healthy controls, we achieve a classification accuracy of 93.2% (with a sensitivity of 93% and a specificity of 93.3%) when combining all three modalities of biomarkers, and only 86.5% when using even the best individual modality of biomarkers. Similarly, for classifying MCI from healthy controls, we achieve a classification accuracy of 76.4% (with a sensitivity of 81.8% and a specificity of 66%) for our combined method, and only 72% even using the best individual modality of biomarkers. Further analysis on MCI sensitivity of our combined method indicates that 91.5% of MCI converters and 73.4% of MCI non-converters are correctly classified. Moreover, we also evaluate the classification performance when employing a feature selection method to select the most discriminative MR and FDG-PET features. Again, our combined method shows considerably better performance, compared to the case of using an individual modality of biomarkers.", "title": "" }, { "docid": "neg:1840240_5", "text": "Fog computing based radio access network is a promising paradigm for the fifth generation wireless communication system to provide high spectral and energy efficiency. With the help of the new designed fog computing based access points (F-APs), the user-centric objectives can be achieved through the adaptive technique and will relieve the load of fronthaul and alleviate the burden of base band unit pool. 
In this paper, we derive the coverage probability and ergodic rate for both F-AP users and device-to-device users by taking into account the different node locations, cache sizes as well as user access modes. Particularly, the stochastic geometry tool is used to derive expressions for the above performance metrics. Simulation results validate the accuracy of our analysis and we obtain interesting tradeoffs that depend on the effect of the cache size, user node density, and the quality of service constraints on the different performance metrics.", "title": "" }, { "docid": "neg:1840240_6", "text": "Social networks are of significant analytical interest. This is because their data are generated in great quantity and intermittently; besides that, the data come in a wide variety and are widely available to users. Through such data, it is desired to extract knowledge or information that can be used in decision-making activities. In this context, we have identified the lack of methods that apply data mining techniques to the task of analyzing the professional profile of employees. The aim of such analyses is to detect competencies that are of greater interest by being more frequently required, and also to identify their associative relations. Thus, this work introduces the MineraSkill methodology, which deals with methods to infer the desired profile of a candidate for a job vacancy. In order to do so, we use keyword detection via natural language processing techniques, which are related to others by inferring their association rules. The results are presented in the form of a case study, which analyzed data from LinkedIn, demonstrating the potential of the methodology in indicating trending competencies that are required together.", "title": "" }, { "docid": "neg:1840240_7", "text": "Middle ear surgery is strongly influenced by anatomical and functional characteristics of the middle ear. The complex anatomy presents a challenge for the otosurgeon, who moves between preservation or improvement of highly important functions (hearing, balance, facial motion) and eradication of diseases. Of these, perforations of the tympanic membrane, chronic otitis media, tympanosclerosis and cholesteatoma are encountered most often in clinical practice. Modern techniques for reconstruction of the ossicular chain aim for the best possible hearing improvement using delicate alloplastic titanium prostheses, but a number of prosthesis-unrelated factors work against this intent. Surgery is always individualized to the case and there is no one-fits-all strategy. Above all, both middle ear diseases and surgery can be associated with a number of complications; the most important ones being hearing deterioration or deafness, dizziness, facial palsy and life-threatening intracranial complications. To minimize risks, a solid knowledge of and respect for neurootologic structures is essential for an otosurgeon, who must train him- or herself intensively on temporal bones before performing surgery on a patient.", "title": "" }, { "docid": "neg:1840240_8", "text": "We partially replicate and extend Shepard, Hovland, and Jenkins's (1961) classic study of task difficulty for learning six fundamental types of rule-based categorization problems. Our main results mirrored those of Shepard et al., with the ordering of task difficulty being the same as in the original study. A much richer data set was collected, however, which enabled the generation of block-by-block learning curves suitable for quantitative fitting.
Four current computational models of classification learning were fitted to the learning data: ALCOVE (Kruschke, 1992), the rational model (Anderson, 1991), the configural-cue model (Gluck & Bower, 1988b), and an extended version of the configural-cue model with dimensionalized, adaptive learning rate mechanisms. Although all of the models captured important qualitative aspects of the learning data, ALCOVE provided the best overall quantitative fit. The results suggest the need to incorporate some form of selective attention to dimensions in category-learning models based on stimulus generalization and cue conditioning.", "title": "" }, { "docid": "neg:1840240_9", "text": "There is a growing need for real-time human pose estimation from monocular RGB images in applications such as human computer interaction, assisted living, video surveillance, people tracking, activity recognition and motion capture. For the task, depth sensors and multi-camera systems are usually more expensive and difficult to set up than conventional RGB video cameras. Recent advances in convolutional neural network research have allowed traditional methods to be replaced with more efficient convolutional neural network based methods in many computer vision tasks. This thesis presents a method for real-time multi-person human pose estimation from video by utilizing convolutional neural networks. The method is aimed at use case specific applications, where good accuracy is essential and variation of the background and poses is limited. This enables the use of a generic network architecture, which is both accurate and fast. The problem is divided into two phases: (1) pretraining and (2) fine-tuning. In pretraining, the network is learned with highly diverse input data from publicly available datasets, while in fine-tuning it is trained with application specific data recorded with Kinect. The method considers the whole system, including person detector, pose estimator and an automatic way to record application specific training material for fine-tuning. The method can also be thought of as a replacement for Kinect, and it can be used for higher level tasks such as gesture control, games, person tracking and action recognition.", "title": "" }, { "docid": "neg:1840240_10", "text": "Interdental cleaning is an important part of a patient's personal oral care regimen. Water flossers, also known as oral irrigators or dental water jets, can play a vital, effective role in interdental hygiene. Evidence has shown a significant reduction in plaque biofilm from tooth surfaces and the reduction of subgingival pathogenic bacteria from pockets as deep as 6 mm with the use of water flossing. In addition, water flossers have been shown to reduce gingivitis, bleeding, probing pocket depth, host inflammatory mediators, and calculus. Educating patients on the use of a water flosser as part of their oral hygiene routine can be a valuable tool in maintaining oral health.", "title": "" }, { "docid": "neg:1840240_11", "text": "In recent years, research on reading-comprehension question and answering has drawn intense attention in Natural Language Processing. However, it is still a key issue to obtain the high-level semantic vector representation of question and paragraph. Drawing inspiration from DrQA [1], which is a question and answering system proposed by Facebook, this paper proposes an attention-based question and answering model that adds the binary representation of the paragraph, the paragraph's attention to the question, and the question's attention to the paragraph.
Meanwhile, a self-attention calculation method is proposed to enhance the question semantic vector representation. Besides, it uses multi-layer bidirectional Long Short-Term Memory (BiLSTM) networks to calculate the hidden semantic vector representations of paragraphs and questions. Finally, bilinear functions are used to calculate the probability of the answer's position in the paragraph. The experimental results on the Stanford Question Answering Dataset (SQuAD) development set show that the F1 score is 80.1% and the EM score is 71.4%, which demonstrates that the performance of the model is better than that of DrQA, since they increase by 2% and 1.3% respectively.", "title": "" }, { "docid": "neg:1840240_12", "text": "End-to-end congestion control mechanisms have been critical to the robustness and stability of the Internet. Most of today's Internet traffic is TCP, and we expect this to remain so in the future. Thus, having \"TCP-friendly\" behavior is crucial for new applications. However, the emergence of non-congestion-controlled realtime applications threatens unfairness to competing TCP traffic and possible congestion collapse. We present an end-to-end TCP-friendly Rate Adaptation Protocol (RAP), which employs an additive-increase, multiplicative-decrease (AIMD) algorithm. It is well suited for unicast playback of realtime streams and other semi-reliable rate-based applications. Its primary goal is to be fair and TCP-friendly while separating network congestion control from application-level reliability. We evaluate RAP through extensive simulation, and conclude that bandwidth is usually evenly shared between TCP and RAP traffic. Unfairness to TCP traffic is directly determined by how TCP diverges from the AIMD algorithm. Basic RAP behaves in a TCP-friendly fashion in a wide range of likely conditions, but we also devised a fine-grain rate adaptation mechanism to extend this range further. Finally, we show that deploying RED queue management can result in an ideal fairness between TCP and RAP traffic.", "title": "" }, { "docid": "neg:1840240_13", "text": "Call-Exner bodies are present in ovarian follicles of a range of species including human and rabbit, and in a range of human ovarian tumors. We have also found structures resembling Call-Exner bodies in bovine preantral and small antral follicles. Hematoxylin and eosin staining of single sections of bovine ovaries has shown that 30% of preantral follicles with more than one layer of granulosa cells and 45% of small (less than 650 μm) antral follicles have at least one Call-Exner body composed of a spherical eosinophilic region surrounded by a rosette of granulosa cells. Alcian blue stains the spherical eosinophilic region of the Call-Exner bodies. Electron microscopy has demonstrated that some Call-Exner bodies contain large aggregates of convoluted basal lamina, whereas others also contain regions of unassembled basal-lamina-like material. Individual chains of the basal lamina components type IV collagen (α1 to α5) and laminin (α1, β2 and δ1) have been immunolocalized to Call-Exner bodies in sections of fresh-frozen ovaries. Bovine Call-Exner bodies are presumably analogous to Call-Exner bodies in other species but are predominantly found in preantral and small antral follicles, rather than large antral follicles.
With follicular development, the basal laminae of Call-Exner bodies change in their apparent ratio of type IV collagen to laminin, similar to changes observed in the follicular basal lamina, suggesting that these structures have a common cellular origin.", "title": "" }, { "docid": "neg:1840240_14", "text": "JaTeCS is an open source Java library that supports research on automatic text categorization and other related problems, such as ordinal regression and quantification, which are of special interest in opinion mining applications. It covers all the steps of an experimental activity, from reading the corpus to the evaluation of the experimental results. As JaTeCS is focused on text as the main input data, it provides the user with many text-dedicated tools, e.g.: data readers for many formats, including the most commonly used text corpora and lexical resources, natural language processing tools, multi-language support, methods for feature selection and weighting, the implementation of many machine learning algorithms as well as wrappers for well-known external software (e.g., SVMlight) which enable their full control from code. JaTeCS support its expansion by abstracting through interfaces many of the typical tools and procedures used in text processing tasks. The library also provides a number of “template” implementations of typical experimental setups (e.g., train-test, k-fold validation, grid-search optimization, randomized runs) which enable fast realization of experiments just by connecting the templates with data readers, learning algorithms and evaluation measures.", "title": "" }, { "docid": "neg:1840240_15", "text": "Context: Client-side JavaScript is widely used in web applications to improve user-interactivity and minimize client-server communications. Unfortunately, web applications are prone to JavaScript faults. While prior studies have demonstrated the prevalence of these faults, no attempts have been made to determine their root causes and consequences. Objective: The goal of our study is to understand the root causes and impact of JavaScript faults and how the results can impact JavaScript programmers, testers and tool developers. Method: We perform an empirical study of 317 bug reports from 12 bug repositories. The bug reports are thoroughly examined to classify and extract information about the fault's cause (the error) and consequence (the failure and impact). Result: The majority (65%) of JavaScript faults are DOM-related, meaning they are caused by faulty interactions of the JavaScript code with the Document Object Model (DOM). Further, 80% of the highest impact JavaScript faults are DOM-related. Finally, most JavaScript faults originate from programmer mistakes committed in the JavaScript code itself, as opposed to other web application components such as the server-side or HTML code. Conclusion: Given the prevalence of DOM-related faults, JavaScript programmers need development tools that can help them reason about the DOM. Also, testers should prioritize detection of DOM-related faults as most high impact faults belong to this category. Finally, developers can use the error patterns we found to design more powerful static analysis tools for JavaScript.", "title": "" }, { "docid": "neg:1840240_16", "text": "Antisocial behavior is a socially maladaptive and harmful trait to possess. This can be especially injurious for a child who is raised by a parent with this personality structure. 
The pathology of antisocial behavior implies traits such as deceitfulness, irresponsibility, unreliability, and an incapability to feel guilt, remorse, or even love. This is damaging to a child’s emotional, cognitive, and social development. Parents with this personality makeup can leave a child traumatized, empty, and incapable of forming meaningful personal relationships. Both genetic and environmental factors influence the development of antisocial behavior. Moreover, the child with a genetic predisposition to antisocial behavior who is raised with a parental style that triggers the genetic liability is at high risk for developing the same personality structure. Antisocial individuals are impulsive, irritable, and often have no concerns over their purported responsibilities. As a parent, this can lead to erratic discipline, neglectful parenting, and can undermine effective care giving. This paper will focus on the implications of parents with antisocial behavior and the impact that this behavior has on attachment as well as on the development of antisocial traits in children.", "title": "" }, { "docid": "neg:1840240_17", "text": "To discover patterns in historical data, climate scientists have applied various clustering methods with the goal of identifying regions that share some common climatological behavior. However, past approaches are limited by the fact that they either consider only a single time period (snapshot) of multivariate data, or they consider only a single variable by using the time series data as multi-dimensional feature vector. In both cases, potentially useful information may be lost. Moreover, clusters in high-dimensional data space can be difficult to interpret, prompting the need for a more effective data representation. We address both of these issues by employing a complex network (graph) to represent climate data, a more intuitive model that can be used for analysis while also having a direct mapping to the physical world for interpretation. A cross correlation function is used to weight network edges, thus respecting the temporal nature of the data, and a community detection algorithm identifies multivariate clusters. Examining networks for consecutive periods allows us to study structural changes over time. We show that communities have a climatological interpretation and that disturbances in structure can be an indicator of climate events (or lack thereof). Finally, we discuss how this model can be applied for the discovery of more complex concepts such as unknown teleconnections or the development of multivariate climate indices and predictive insights.", "title": "" }, { "docid": "neg:1840240_18", "text": "Research and development of hip stem implants started centuries ago. However, there is still no yet an optimum design that fulfills all the requirements of the patient. New manufacturing technologies have opened up new possibilities for complicated theoretical designs to become tangible reality. Current trends in the development of hip stems focus on applying porous structures to improve osseointegration and reduce stem stiffness in order to approach the stiffness of the natural human bone. In this field, modern additive manufacturing machines offer unique flexibility in manufacturing parts combining variable density mesh structures with solid and porous metal in a single manufacturing process. Furthermore, additive manufacturing machines became powerful competitors in the economical mass production of hip implants. 
This is due to their ability to manufacture several parts with different geometries in a single setup and with minimum material consumption. This paper reviews the application of additive manufacturing (AM) techniques in the production of innovative porous femoral hip stem design.", "title": "" } ]
1840241
Resilience: The emergence of a perspective for social–ecological systems analyses
[ { "docid": "pos:1840241_0", "text": "■ Abstract We explore the social dimension that enables adaptive ecosystem-based management. The review concentrates on experiences of adaptive governance of socialecological systems during periods of abrupt change (crisis) and investigates social sources of renewal and reorganization. Such governance connects individuals, organizations, agencies, and institutions at multiple organizational levels. Key persons provide leadership, trust, vision, meaning, and they help transform management organizations toward a learning environment. Adaptive governance systems often self-organize as social networks with teams and actor groups that draw on various knowledge systems and experiences for the development of a common understanding and policies. The emergence of “bridging organizations” seem to lower the costs of collaboration and conflict resolution, and enabling legislation and governmental policies can support self-organization while framing creativity for adaptive comanagement efforts. A resilient social-ecological system may make use of crisis as an opportunity to transform into a more desired state.", "title": "" } ]
[ { "docid": "neg:1840241_0", "text": "Bluetooth Low Energy (BLE) is ideally suited to exchange information between mobile devices and Internet-of-Things (IoT) sensors. It is supported by most recent consumer mobile devices and can be integrated into sensors enabling them to exchange information in an energy-efficient manner. However, when BLE is used to access or modify sensitive sensor parameters, exchanged messages need to be suitably protected, which may not be possible with the security mechanisms defined in the BLE specification. Consequently we contribute BALSA, a set of cryptographic protocols, a BLE service and a suggested usage architecture aiming to provide a suitable level of security. In this paper we define and analyze these components and describe our proof-of-concept, which demonstrates the feasibility and benefits of BALSA.", "title": "" }, { "docid": "neg:1840241_1", "text": "Facial alignment involves finding a set of landmark points on an image with a known semantic meaning. However, this semantic meaning of landmark points is often lost in 2D approaches where landmarks are either moved to visible boundaries or ignored as the pose of the face changes. In order to extract consistent alignment points across large poses, the 3D structure of the face must be considered in the alignment step. However, extracting a 3D structure from a single 2D image usually requires alignment in the first place. We present our novel approach to simultaneously extract the 3D shape of the face and the semantically consistent 2D alignment through a 3D Spatial Transformer Network (3DSTN) to model both the camera projection matrix and the warping parameters of a 3D model. By utilizing a generic 3D model and a Thin Plate Spline (TPS) warping function, we are able to generate subject specific 3D shapes without the need for a large 3D shape basis. In addition, our proposed network can be trained in an end-to-end frame-work on entirely synthetic data from the 300W-LP dataset. Unlike other 3D methods, our approach only requires one pass through the network resulting in a faster than real-time alignment. Evaluations of our model on the Annotated Facial Landmarks in the Wild (AFLW) and AFLW2000-3D datasets show our method achieves state-of-the-art performance over other 3D approaches to alignment.", "title": "" }, { "docid": "neg:1840241_2", "text": "The Web has become an excellent source for gathering consumer opinions. There are now numerous Web sources containing such opinions, e.g., product reviews, forums, discussion groups, and blogs. Techniques are now being developed to exploit these sources to help organizations and individuals to gain such important information easily and quickly. In this paper, we first discuss several aspects of the problem in the AI context, and then present some results of our existing work published in KDD-04 and WWW-05.", "title": "" }, { "docid": "neg:1840241_3", "text": "We introduce an ac transconductance dispersion method (ACGD) to profile the oxide traps in an MOSFET without needing a body contact. The method extracts the spatial distribution of oxide traps from the frequency dependence of transconductance, which is attributed to charge trapping as modulated by an ac gate voltage. The results from this method have been verified by the use of the multifrequency charge pumping (MFCP) technique. In fact, this method complements the MFCP technique in terms of the trap depth that each method is capable of probing. 
We will demonstrate the method with InP passivated InGaAs substrates, along with electrically stressed Si N-MOSFETs.", "title": "" }, { "docid": "neg:1840241_4", "text": "Integration of optical communication circuits directly into high-performance microprocessor chips can enable extremely powerful computer systems. A germanium photodetector that can be monolithically integrated with silicon transistor technology is viewed as a key element in connecting chip components with infrared optical signals. Such a device should have the capability to detect very-low-power optical signals at very high speed. Although germanium avalanche photodetectors (APD) using charge amplification close to avalanche breakdown can achieve high gain and thus detect low-power optical signals, they are universally considered to suffer from an intolerably high amplification noise characteristic of germanium. High gain with low excess noise has been demonstrated using a germanium layer only for detection of light signals, with amplification taking place in a separate silicon layer. However, the relatively thick semiconductor layers that are required in such structures limit APD speeds to about 10 GHz, and require excessively high bias voltages of around 25 V (ref. 12). Here we show how nanophotonic and nanoelectronic engineering aimed at shaping optical and electrical fields on the nanometre scale within a germanium amplification layer can overcome the otherwise intrinsically poor noise characteristics, achieving a dramatic reduction of amplification noise by over 70 per cent. By generating strongly non-uniform electric fields, the region of impact ionization in germanium is reduced to just 30 nm, allowing the device to benefit from the noise reduction effects that arise at these small distances. Furthermore, the smallness of the APDs means that a bias voltage of only 1.5 V is required to achieve an avalanche gain of over 10 dB with operational speeds exceeding 30 GHz. Monolithic integration of such a device into computer chips might enable applications beyond computer optical interconnects—in telecommunications, secure quantum key distribution, and subthreshold ultralow-power transistors.", "title": "" }, { "docid": "neg:1840241_5", "text": "Provenance refers to the entire amount of information, comprising all the elements and their relationships, that contribute to the existence of a piece of data. The knowledge of provenance data allows a great number of benefits such as verifying a product, result reproductivity, sharing and reuse of knowledge, or assessing data quality and validity. With such tangible benefits, it is no wonder that in recent years, research on provenance has grown exponentially, and has been applied to a wide range of different scientific disciplines. Some years ago, managing and recording provenance information were performed manually. Given the huge volume of information available nowadays, the manual performance of such tasks is no longer an option. The problem of systematically performing tasks such as the understanding, capture and management of provenance has gained significant attention by the research community and industry over the past decades. As a consequence, there has been a huge amount of contributions and proposed provenance systems as solutions for performing such kinds of tasks. The overall objective of this paper is to plot the landscape of published systems in the field of provenance, with two main purposes. 
First, we seek to evaluate the desired characteristics that provenance systems are expected to have. Second, we aim at identifying a set of representative systems (both early and recent use) to be exhaustively analyzed according to such characteristics. In particular, we have performed a systematic literature review of studies, identifying a comprehensive set of 105 relevant resources in all. The results show that there are common aspects or characteristics of provenance systems thoroughly renowned throughout the literature on the topic. Based on these results, we have defined a six-dimensional taxonomy of provenance characteristics attending to: general aspects, data capture, data access, subject, storage, and non-functional aspects. Additionally, the study has found that there are 25 most referenced provenance systems within the provenance context. This study exhaustively analyzes and compares such systems attending to our taxonomy and pinpoints future directions.", "title": "" }, { "docid": "neg:1840241_6", "text": "This paper presents, for the first time, a novel pupil detection method for near-infrared head-mounted cameras, which relies not only on image appearance to pursue the shape and gradient variation of the pupil contour, but also on structure principle to explore the mechanism of pupil projection. There are three main characteristics in the proposed method. First, in order to complement the pupil projection information, an eyeball center calibration method is proposed to build an eye model. Second, by utilizing the deformation model of pupils under head-mounted cameras and the edge gradients of a circular pattern, we find the best fitting ellipse describing the pupil boundary. Third, an eye-model-based pupil fitting algorithm with only three parameters is proposed to fine-tune the final pupil contour. Consequently, the proposed method extracts the geometry-appearance information, effectively boosting the performance of pupil detection. Experimental results show that this method outperforms the state-of-the-art ones. On a widely used public database (LPW), our method achieves 72.62% in terms of detection rate up to an error of five pixels, which is superior to the previous best one.", "title": "" }, { "docid": "neg:1840241_7", "text": "AIM\nThe aim of this paper is to distinguish the integrative review method from other review methods and to propose methodological strategies specific to the integrative review method to enhance the rigour of the process.\n\n\nBACKGROUND\nRecent evidence-based practice initiatives have increased the need for and the production of all types of reviews of the literature (integrative reviews, systematic reviews, meta-analyses, and qualitative reviews). The integrative review method is the only approach that allows for the combination of diverse methodologies (for example, experimental and non-experimental research), and has the potential to play a greater role in evidence-based practice for nursing. With respect to the integrative review method, strategies to enhance data collection and extraction have been developed; however, methods of analysis, synthesis, and conclusion drawing remain poorly formulated.\n\n\nDISCUSSION\nA modified framework for research reviews is presented to address issues specific to the integrative review method. Issues related to specifying the review purpose, searching the literature, evaluating data from primary sources, analysing data, and presenting the results are discussed. 
Data analysis methods of qualitative research are proposed as strategies that enhance the rigour of combining diverse methodologies as well as empirical and theoretical sources in an integrative review.\n\n\nCONCLUSION\nAn updated integrative review method has the potential to allow for diverse primary research methods to become a greater part of evidence-based practice initiatives.", "title": "" }, { "docid": "neg:1840241_8", "text": "The proliferation of wearable devices, e.g., smartwatches and activity trackers, with embedded sensors has already shown its great potential on monitoring and inferring human daily activities. This paper reveals a serious security breach of wearable devices in the context of divulging secret information (i.e., key entries) while people accessing key-based security systems. Existing methods of obtaining such secret information relies on installations of dedicated hardware (e.g., video camera or fake keypad), or training with labeled data from body sensors, which restrict use cases in practical adversary scenarios. In this work, we show that a wearable device can be exploited to discriminate mm-level distances and directions of the user's fine-grained hand movements, which enable attackers to reproduce the trajectories of the user's hand and further to recover the secret key entries. In particular, our system confirms the possibility of using embedded sensors in wearable devices, i.e., accelerometers, gyroscopes, and magnetometers, to derive the moving distance of the user's hand between consecutive key entries regardless of the pose of the hand. Our Backward PIN-Sequence Inference algorithm exploits the inherent physical constraints between key entries to infer the complete user key entry sequence. Extensive experiments are conducted with over 5000 key entry traces collected from 20 adults for key-based security systems (i.e. ATM keypads and regular keyboards) through testing on different kinds of wearables. Results demonstrate that such a technique can achieve 80% accuracy with only one try and more than 90% accuracy with three tries, which to our knowledge, is the first technique that reveals personal PINs leveraging wearable devices without the need for labeled training data and contextual information.", "title": "" }, { "docid": "neg:1840241_9", "text": "Standardization of transanal total mesorectal excision requires the delineation of the principal procedural components before implementation in practice. This technique is a bottom-up approach to a proctectomy with the goal of a complete mesorectal excision for optimal outcomes of oncologic treatment. A detailed stepwise description of the approach with technical pearls is provided to optimize one's understanding of this technique and contribute to reducing the inherent risk of beginning a new procedure. Surgeons should be trained according to standardized pathways including online preparation, observational or hands-on courses as well as the potential for proctorship of early cases experiences. Furthermore, technological pearls with access to the \"video-in-photo\" (VIP) function, allow surgeons to link some of the images in this article to operative demonstrations of certain aspects of this technique.", "title": "" }, { "docid": "neg:1840241_10", "text": "Given an edge-weighted graph G with a set $$Q$$ Q of k terminals, a mimicking network is a graph with the same set of terminals that exactly preserves the size of minimum cut between any partition of the terminals. 
A natural question in the area of graph compression is to provide as small mimicking networks as possible for input graph G being either an arbitrary graph or coming from a specific graph class. We show an exponential lower bound for cut mimicking networks in planar graphs: there are edge-weighted planar graphs with k terminals that require $$2^{k-2}$$ edges in any mimicking network. This nearly matches an upper bound of $$\\mathcal {O}(k 2^{2k})$$ of Krauthgamer and Rika (in: Khanna (ed) Proceedings of the twenty-fourth annual ACM-SIAM symposium on discrete algorithms, SODA 2013, New Orleans, 2013) and is in sharp contrast with the upper bounds of $$\\mathcal {O}(k^2)$$ and $$\\mathcal {O}(k^4)$$ under the assumption that all terminals lie on a single face (Goranci et al., in: Pruhs and Sohler (eds) 25th Annual European symposium on algorithms (ESA 2017), 2017, arXiv:1702.01136; Krauthgamer and Rika in Refined vertex sparsifiers of planar graphs, 2017, arXiv:1702.05951). As a side result we show a tight example for double-exponential upper bounds given by Hagerup et al. (J Comput Syst Sci 57(3):366–375, 1998), Khan and Raghavendra (Inf Process Lett 114(7):365–371, 2014), and Chambers and Eppstein (J Gr Algorithms Appl 17(3):201–220, 2013).", "title": "" }, { "docid": "neg:1840241_11", "text": "The stator permanent magnet (PM) machines have simple and robust rotor structure as well as high torque density. The hybrid excitation topology can realize flux regulation and wide constant power operating capability of the stator PM machines when used in dc power systems. This paper compares and analyzes the electromagnetic performance of different hybrid excitation stator PM machines according to different combination modes of PMs, excitation winding, and iron flux bridge. Then, the control strategies for voltage regulation of dc power systems are discussed based on different critical control variables including the excitation current, the armature current, and the electromagnetic torque. Furthermore, an improved direct torque control (DTC) strategy is investigated to improve system performance. A parallel hybrid excitation flux-switching generator employing the improved DTC which shows excellent dynamic and steady-state performance has been achieved experimentally.", "title": "" }, { "docid": "neg:1840241_12", "text": "In this paper, we consider the grayscale template-matching problem, invariant to rotation, scale, translation, brightness and contrast, without previous operations that discard grayscale information, like detection of edges, detection of interest points or segmentation/binarization of the images. The obvious “brute force” solution performs a series of conventional template matchings between the image to analyze and the template query shape rotated by every angle, translated to every position and scaled by every factor (within some specified range of scale factors). Clearly, this takes too long and thus is not practical. We propose a technique that substantially accelerates this searching, while obtaining the same result as the original brute force algorithm. In some experiments, our algorithm was 400 times faster than the brute force algorithm. Our algorithm consists of three cascaded filters.
These filters successively exclude pixels that have no chance of matching the template from further processing.", "title": "" }, { "docid": "neg:1840241_13", "text": "Recently, increasing attention has been drawn to training semantic segmentation models using synthetic data and computer-generated annotation. However, domain gap remains a major barrier and prevents models learned from synthetic data from generalizing well to real-world applications. In this work, we take the advantage of additional geometric information from synthetic data, a powerful yet largely neglected cue, to bridge the domain gap. Such geometric information can be generated easily from synthetic data, and is proven to be closely coupled with semantic information. With the geometric information, we propose a model to reduce domain shift on two levels: on the input level, we augment the traditional image translation network with the additional geometric information to translate synthetic images into realistic styles; on the output level, we build a task network which simultaneously performs depth estimation and semantic segmentation on the synthetic data. Meanwhile, we encourage the network to preserve the correlation between depth and semantics by adversarial training on the output space. We then validate our method on two pairs of synthetic to real dataset: Virtual KITTI→KITTI, and SYNTHIA→Cityscapes, where we achieve a significant performance gain compared to the non-adaptive baseline and methods without using geometric information. This demonstrates the usefulness of geometric information from synthetic data for cross-domain semantic segmentation.", "title": "" }, { "docid": "neg:1840241_14", "text": "A wide range of research has used face data to estimate a person's engagement, in applications from advertising to student learning. An interesting and important question not addressed in prior work is if face-based models of engagement are generalizable and context-free, or do engagement models depend on context and task. This research shows that context-sensitive face-based engagement models are more accurate, at least in the space of web-based tools for trauma recovery. Estimating engagement is important as various psychological studies indicate that engagement is a key component to measure the effectiveness of treatment and can be predictive of behavioral outcomes in many applications. In this paper, we analyze user engagement in a trauma-recovery regime during two separate modules/tasks: relaxation and triggers. The dataset comprises of 8M+ frames from multiple videos collected from 110 subjects, with engagement data coming from 800+ subject self-reports. We build an engagement prediction model as sequence learning from facial Action Units (AUs) using Long Short Term Memory (LSTMs). Our experiments demonstrate that engagement prediction is contextual and depends significantly on the allocated task. Models trained to predict engagement on one task are only weak predictors for another and are much less accurate than context-specific models. Further, we show the interplay of subject mood and engagement using a very short version of Profile of Mood States (POMS) to extend our LSTM model.", "title": "" }, { "docid": "neg:1840241_15", "text": "We present a bitsliced implementation of AES encryption in counter mode for 64-bit Intel processors. Running at 7.59 cycles/byte on a Core 2, it is up to 25% faster than previous implementations, while simultaneously offering protection against timing attacks. 
In particular, it is the only cache-timing-attack resistant implementation offering competitive speeds for stream as well as for packet encryption: for 576-byte packets, we improve performance over previous bitsliced implementations by more than a factor of 2. We also report more than 30% improved speeds for lookup-table based Galois/Counter mode authentication, achieving 10.68 cycles/byte for authenticated encryption. Furthermore, we present the first constant-time implementation of AES-GCM that has a reasonable speed of 21.99 cycles/byte, thus offering a full suite of timing-analysis resistant software for authenticated encryption.", "title": "" }, { "docid": "neg:1840241_16", "text": "Modern object detection methods typically rely on bounding box proposals as input. While initially popularized in the 2D case, this idea has received increasing attention for 3D bounding boxes. Nevertheless, existing 3D box proposal techniques all assume having access to depth as input, which is unfortunately not always available in practice. In this paper, we therefore introduce an approach to generating 3D box proposals from a single monocular RGB image. To this end, we develop an integrated, fully differentiable framework that inherently predicts a depth map, extracts a 3D volumetric scene representation and generates 3D object proposals. At the core of our approach lies a novel residual, differentiable truncated signed distance function module, which, accounting for the relatively low accuracy of the predicted depth map, extracts a 3D volumetric representation of the scene. Our experiments on the standard NYUv2 dataset demonstrate that our framework lets us generate high-quality 3D box proposals and that it outperforms the two-stage technique consisting of successively performing state-of-the-art depth prediction and depthbased 3D proposal generation.", "title": "" }, { "docid": "neg:1840241_17", "text": "Traditional approaches to the task of ACE event detection primarily regard multiple events in one sentence as independent ones and recognize them separately by using sentence-level information. However, events in one sentence are usually interdependent and sentence-level information is often insufficient to resolve ambiguities for some types of events. This paper proposes a novel framework dubbed as Hierarchical and Bias Tagging Networks with Gated Multi-level Attention Mechanisms (HBTNGMA) to solve the two problems simultaneously. Firstly, we propose a hierarchical and bias tagging networks to detect multiple events in one sentence collectively. Then, we devise a gated multi-level attention to automatically extract and dynamically fuse the sentence-level and document-level information. The experimental results on the widely used ACE 2005 dataset show that our approach significantly outperforms other state-of-the-art methods.", "title": "" }, { "docid": "neg:1840241_18", "text": "It , is generally unrecognized that Sigmund Freud's contribution to the scientific understanding of dreams derived from a radical reorientation to the dream experience. During the nineteenth century, before publication of The Interpretation of Dreams, the presence of dreaming was considered by the scientific community as a manifestation of mental activity during sleep. The state of sleep was given prominence as a factor accounting for the seeming lack of organization and meaning to the dream experience. Thus, the assumed relatively nonpsychological sleep state set the scientific stage for viewing the nature of the dream. 
Freud radically shifted the context. He recognized, as myth, folklore, and common sense had long understood, that dreams were also linked with the psychology of waking life. This shift in orientation has proved essential for our modern view of dreams and dreaming. Dreams are no longer dismissed as senseless notes hit at random on a piano keyboard by an untrained player. Dreams are now recognized as psychologically significant and meaningful expressions of the life of the dreamer, albeit expressed in disguised and concealed forms. (For a contrasting view, see ACTIVATION-SYNTHESIS HYPOTHESIS.) Contemporary Dream Research. During the past quarter-century, there has been increasing scientific interest in the process of dreaming. A regular sleep-wakefulness cycle has been discovered, and if experimental subjects are awakened during periods of rapid eye movements (REM periods), they will frequently report dreams. In a typical night, four or five dreams occur during REM periods, accompanied by other signs of physiological activation, such as increased respiratory rate, heart rate, and penile and clitoral erection. Dreams usually last for the duration of the eye movements, from about 10 to 25 minutes. Although dreaming usually occurs in such regular cycles, dreaming may occur at other times during sleep, as well as during hypnagogic (falling asleep) or hypnopompic (waking up) states, when REMs are not present. The above findings are discoveries made since the monumental work of Freud reported in The Interpretation of Dreams, and although of great interest to the study of the mind-body problem, these findings as yet bear only a peripheral relationship to the central concerns of the psychology of dream formation, the meaning of dream content, the dream as an approach to a deeper understanding of emotional life, and the use of the dream in psychoanalytic treatment.", "title": "" }, { "docid": "neg:1840241_19", "text": "BACKGROUND\nMusculoskeletal disorders (MSDs) that result from poor ergonomic design are one of the occupational disorders of greatest concern in the industrial sector. A key advantage in the primary design phase is to focus on a method of assessment that detects and evaluates the potential risks experienced by the operative when faced with these types of physical injuries. The method of assessment will improve the process design, identifying potential ergonomic improvements from various design alternatives or activities undertaken as part of the cycle of continuous improvement throughout the differing phases of the product life cycle.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nThis paper presents a novel postural assessment method (NERPA) fit for product-process design, which was developed with the help of a digital human model together with a 3D CAD tool, which is widely used in the aeronautic and automotive industries. The power of 3D visualization and the possibility of studying the actual assembly sequence in a virtual environment can allow the functional performance of the parts to be addressed. Such tools can also provide us with an ergonomic workstation design, together with a competitive advantage in the assembly process.\n\n\nCONCLUSIONS\nThe method developed was used in the design of six production lines, studying 240 manual assembly operations and improving 21 of them.
This study demonstrated the proposed method's usefulness and found statistically significant differences in the evaluations of the proposed method and the widely used Rapid Upper Limb Assessment (RULA) method.", "title": "" } ]
1840242
Effect of prebiotic intake on gut microbiota, intestinal permeability and glycemic control in children with type 1 diabetes: study protocol for a randomized controlled trial
[ { "docid": "pos:1840242_0", "text": "Recent evidence suggests that a particular gut microbial community may favour occurrence of the metabolic diseases. Recently, we reported that high-fat (HF) feeding was associated with higher endotoxaemia and lower Bifidobacterium species (spp.) caecal content in mice. We therefore tested whether restoration of the quantity of caecal Bifidobacterium spp. could modulate metabolic endotoxaemia, the inflammatory tone and the development of diabetes. Since bifidobacteria have been reported to reduce intestinal endotoxin levels and improve mucosal barrier function, we specifically increased the gut bifidobacterial content of HF-diet-fed mice through the use of a prebiotic (oligofructose [OFS]). Compared with normal chow-fed control mice, HF feeding significantly reduced intestinal Gram-negative and Gram-positive bacteria including levels of bifidobacteria, a dominant member of the intestinal microbiota, which is seen as physiologically positive. As expected, HF-OFS-fed mice had totally restored quantities of bifidobacteria. HF-feeding significantly increased endotoxaemia, which was normalised to control levels in HF-OFS-treated mice. Multiple-correlation analyses showed that endotoxaemia significantly and negatively correlated with Bifidobacterium spp., but no relationship was seen between endotoxaemia and any other bacterial group. Finally, in HF-OFS-treated-mice, Bifidobacterium spp. significantly and positively correlated with improved glucose tolerance, glucose-induced insulin secretion and normalised inflammatory tone (decreased endotoxaemia, plasma and adipose tissue proinflammatory cytokines). Together, these findings suggest that the gut microbiota contribute towards the pathophysiological regulation of endotoxaemia and set the tone of inflammation for occurrence of diabetes and/or obesity. Thus, it would be useful to develop specific strategies for modifying gut microbiota in favour of bifidobacteria to prevent the deleterious effect of HF-diet-induced metabolic diseases.", "title": "" } ]
[ { "docid": "neg:1840242_0", "text": "This paper focuses on a new task, i.e. transplanting a category-and-task-specific neural network to a generic, modular network without strong supervision. We design an functionally interpretable structure for the generic network. Like building LEGO blocks, we teach the generic network a new category by directly transplanting the module corresponding to the category from a pre-trained network with a few or even without sample annotations. Our method incrementally adds new categories to the generic network but does not affect representations of existing categories. In this way, our method breaks the typical bottleneck of learning a net for massive tasks and categories, i.e. the requirement of collecting samples for all tasks and categories at the same time before the learning begins. Thus, we use a new distillation algorithm, namely back-distillation, to overcome specific challenges of network transplanting. Our method without training samples even outperformed the baseline with 100 training samples.", "title": "" }, { "docid": "neg:1840242_1", "text": "Intelligent systems often depend on data provided by information agents, for example, sensor data or crowdsourced human computation. Providing accurate and relevant data requires costly effort that agents may not always be willing to provide. Thus, it becomes important not only to verify the correctness of data, but also to provide incentives so that agents that provide highquality data are rewarded while those that do not are discouraged by low rewards. We cover different settings and the assumptions they admit, including sensing, human computation, peer grading, reviews, and predictions. We survey different incentive mechanisms, including proper scoring rules, prediction markets and peer prediction, Bayesian Truth Serum, Peer Truth Serum, Correlated Agreement, and the settings where each of them would be suitable. As an alternative, we also consider reputation mechanisms. We complement the gametheoretic analysis with practical examples of applications in prediction platforms, community sensing, and peer grading.", "title": "" }, { "docid": "neg:1840242_2", "text": "The femoral head receives blood supply mainly from the deep branch of the medial femoral circumflex artery (MFCA). In previous studies we have performed anatomical dissections of 16 specimens and subsequently visualised the arteries supplying the femoral head in 55 healthy individuals. In this further radiological study we compared the arterial supply of the femoral head in 35 patients (34 men and one woman, mean age 37.1 years (16 to 64)) with a fracture/dislocation of the hip with a historical control group of 55 hips. Using CT angiography, we identified the three main arteries supplying the femoral head: the deep branch and the postero-inferior nutrient artery both arising from the MFCA, and the piriformis branch of the inferior gluteal artery. It was possible to visualise changes in blood flow after fracture/dislocation. Our results suggest that blood flow is present after reduction of the dislocated hip. The deep branch of the MFCA was patent and contrast-enhanced in 32 patients, and the diameter of this branch was significantly larger in the fracture/dislocation group than in the control group (p = 0.022). In a subgroup of ten patients with avascular necrosis (AVN) of the femoral head, we found a contrast-enhanced deep branch of the MFCA in eight hips. 
Two patients with no blood flow in any of the three main arteries supplying the femoral head developed AVN.", "title": "" }, { "docid": "neg:1840242_3", "text": "A novel non-parametric, multi-variate quickest detection method is proposed for cognitive radios (CRs) using both energy and cyclostationary features. The proposed approach can be used to track state dynamics of communication channels. This capability can be useful for both dynamic spectrum sharing (DSS) and future CRs, as in practice, centralized channel synchronization is unrealistic and the prior information of the statistics of channel usage is, in general, hard to obtain. The proposed multi-variate non-parametric average sample power and cyclostationarity-based quickest detection scheme is shown to achieve better performance compared to traditional energy-based schemes. We also develop a parallel on-line quickest detection/off-line change-point detection algorithm to achieve self-awareness of detection delays and false alarms for future automation. Compared to traditional energy-based quickest detection schemes, the proposed multi-variate non-parametric quickest detection scheme has comparable computational complexity. The simulated performance shows improvements in terms of small detection delays and significantly higher percentage of spectrum utilization.", "title": "" }, { "docid": "neg:1840242_4", "text": "BACKGROUND\nDetection of unknown risks with marketed medicines is key to securing the optimal care of individual patients and to reducing the societal burden from adverse drug reactions. Large collections of individual case reports remain the primary source of information and require effective analytics to guide clinical assessors towards likely drug safety signals. Disproportionality analysis is based solely on aggregate numbers of reports and naively disregards report quality and content. However, these latter features are the very fundament of the ensuing clinical assessment.\n\n\nOBJECTIVE\nOur objective was to develop and evaluate a data-driven screening algorithm for emerging drug safety signals that accounts for report quality and content.\n\n\nMETHODS\nvigiRank is a predictive model for emerging safety signals, here implemented with shrinkage logistic regression to identify predictive variables and estimate their respective contributions. The variables considered for inclusion capture different aspects of strength of evidence, including quality and clinical content of individual reports, as well as trends in time and geographic spread. A reference set of 264 positive controls (historical safety signals from 2003 to 2007) and 5,280 negative controls (pairs of drugs and adverse events not listed in the Summary of Product Characteristics of that drug in 2012) was used for model fitting and evaluation; the latter used fivefold cross-validation to protect against over-fitting. All analyses were performed on a reconstructed version of VigiBase(®) as of 31 December 2004, at around which time most safety signals in our reference set were emerging.\n\n\nRESULTS\nThe following aspects of strength of evidence were selected for inclusion into vigiRank: the numbers of informative and recent reports, respectively; disproportional reporting; the number of reports with free-text descriptions of the case; and the geographic spread of reporting. 
vigiRank offered a statistically significant improvement in area under the receiver operating characteristics curve (AUC) over screening based on the Information Component (IC) and raw numbers of reports, respectively (0.775 vs. 0.736 and 0.707, cross-validated).\n\n\nCONCLUSIONS\nAccounting for multiple aspects of strength of evidence has clear conceptual and empirical advantages over disproportionality analysis. vigiRank is a first-of-its-kind predictive model to factor in report quality and content in first-pass screening to better meet tomorrow's post-marketing drug safety surveillance needs.", "title": "" }, { "docid": "neg:1840242_5", "text": "Consideration of facial muscle dynamics is underappreciated among clinicians who provide injectable filler treatment. Injectable fillers are customarily used to fill static wrinkles, folds, and localized areas of volume loss, whereas neuromodulators are used to address excessive muscle movement. However, a more comprehensive understanding of the role of muscle function in facial appearance, taking into account biomechanical concepts such as the balance of activity among synergistic and antagonistic muscle groups, is critical to restoring facial appearance to that of a typical youthful individual with facial esthetic treatments. Failure to fully understand the effects of loss of support (due to aging or congenital structural deficiency) on muscle stability and interaction can result in inadequate or inappropriate treatment, producing an unnatural appearance. This article outlines these concepts to provide an innovative framework for an understanding of the role of muscle movement on facial appearance and presents cases that illustrate how modulation of muscle movement with injectable fillers can address structural deficiencies, rebalance abnormal muscle activity, and restore facial appearance. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.", "title": "" }, { "docid": "neg:1840242_6", "text": "This article presents VineSens, a hardware and software platform for supporting the decision-making of the vine grower. VineSens is based on a wireless sensor network system composed by autonomous and self-powered nodes that are deployed throughout a vineyard. Such nodes include sensors that allow us to obtain detailed knowledge on different viticulture processes. Thanks to the use of epidemiological models, VineSens is able to propose a custom control plan to prevent diseases like one of the most feared by vine growers: downy mildew. VineSens generates alerts that warn farmers about the measures that have to be taken and stores the historical weather data collected from different spots of the vineyard. Such data can then be accessed through a user-friendly web-based interface that can be accessed through the Internet by using desktop or mobile devices. 
VineSens was deployed at the beginning of 2016 in a vineyard in the Ribeira Sacra area (Galicia, Spain) and, since then, its hardware and software have been tested to prevent the development of downy mildew, showing during its first season that the system can lead to substantial savings, to decrease the amount of phytosanitary products applied, and, as a consequence, to obtain a more ecologically sustainable and healthy wine.", "title": "" }, { "docid": "neg:1840242_7", "text": "Domain adaptation algorithms are useful when the distributions of the training and the test data are different. In this paper, we focus on the problem of instrumental variation and time-varying drift in the field of sensors and measurement, which can be viewed as discrete and continuous distributional change in the feature space. We propose maximum independence domain adaptation (MIDA) and semi-supervised MIDA to address this problem. Domain features are first defined to describe the background information of a sample, such as the device label and acquisition time. Then, MIDA learns a subspace which has maximum independence with the domain features, so as to reduce the interdomain discrepancy in distributions. A feature augmentation strategy is also designed to project samples according to their backgrounds so as to improve the adaptation. The proposed algorithms are flexible and fast. Their effectiveness is verified by experiments on synthetic datasets and four real-world ones on sensors, measurement, and computer vision. They can greatly enhance the practicability of sensor systems, as well as extend the application scope of existing domain adaptation algorithms by uniformly handling different kinds of distributional change.", "title": "" }, { "docid": "neg:1840242_8", "text": "Magnetic resonance images (MRI) play an important role in supporting and substituting clinical information in the diagnosis of multiple sclerosis (MS) disease by presenting lesions in brain MR images. In this paper, an algorithm for MS lesion segmentation from Brain MR Images has been presented. We revisit the modification of properties of fuzzy c-means algorithms and the Canny edge detection. By modifying and reformulating fuzzy c-means clustering algorithms and applying the Canny contraction principle, a relationship between MS lesions and edge detection is established. For the special case of FCM, we derive a sufficient condition and clustering parameters, allowing identification of them as (local) minima of the objective function.", "title": "" }, { "docid": "neg:1840242_9", "text": "Recent work in Information Retrieval (IR) using Deep Learning models has yielded state of the art results on a variety of IR tasks. Deep neural networks (DNN) are capable of learning ideal representations of data during the training process, removing the need for independently extracting features. However, the structures of these DNNs are often tailored to perform on specific datasets. In addition, IR tasks deal with text at varying levels of granularity from single factoids to documents containing thousands of words. In this paper, we examine the role of the granularity on the performance of common state of the art DNN structures in IR.", "title": "" }, { "docid": "neg:1840242_10", "text": "Although users’ preference is semantically reflected in the free-form review texts, this wealth of information was not fully exploited for learning recommender models.
Specifically, almost all existing recommendation algorithms only exploit rating scores in order to find users’ preference, but ignore the review texts accompanied with rating information. In this paper, we propose a novel matrix factorization model (called TopicMF) which simultaneously considers the ratings and accompanied review texts. Experimental results on 22 real-world datasets show the superiority of our model over the state-of-the-art models, demonstrating its effectiveness for recommendation tasks.", "title": "" }, { "docid": "neg:1840242_11", "text": "A worldview (or “world view”) is a set of assumptions about physical and social reality that may have powerful effects on cognition and behavior. Lacking a comprehensive model or formal theory up to now, the construct has been underused. This article advances theory by addressing these gaps. Worldview is defined. Major approaches to worldview are critically reviewed. Lines of evidence are described regarding worldview as a justifiable construct in psychology. Worldviews are distinguished from schemas. A collated model of a worldview’s component dimensions is described. An integrated theory of worldview function is outlined, relating worldview to personality traits, motivation, affect, cognition, behavior, and culture. A worldview research agenda is outlined for personality and social psychology (including positive and peace psychology).", "title": "" }, { "docid": "neg:1840242_12", "text": "Hyponatremia is common in both inpatients and outpatients. Medications are often the cause of acute or chronic hyponatremia. Measuring the serum osmolality, urine sodium concentration and urine osmolality will help differentiate among the possible causes. Hyponatremia in the physical states of extracellular fluid (ECF) volume contraction and expansion can be easy to diagnose but often proves difficult to manage. In patients with these states or with normal or near-normal ECF volume, the syndrome of inappropriate secretion of antidiuretic hormone is a diagnosis of exclusion, requiring a thorough search for all other possible causes. Hyponatremia should be corrected at a rate similar to that at which it developed. When symptoms are mild, hyponatremia should be managed conservatively, with therapy aimed at removing the offending cause. When symptoms are severe, therapy should be aimed at more aggressive correction of the serum sodium concentration, typically with intravenous therapy in the inpatient setting.", "title": "" }, { "docid": "neg:1840242_13", "text": "A novel interleaved flyback converter with leakage energy recycled is proposed. The proposed converter is combined with dual-switch dual-transformer flyback topology. Two clamping diodes are used to reduce the voltage stress on power switches to the input voltage level and also to recycle leakage inductance energy to the input voltage and capacitor. Besides, the interleaved control is implemented to reduce the output current ripple. In addition, the voltage on the primary windings is reduced to the half of the input voltage and thus reducing the turns ratio of transformers to improve efficiency. The operating principle and the steady state analysis of the proposed converter are discussed in detail. Finally, an experimental prototype is implemented with 400V input voltage, 24V/300W output to verify the feasibility of the proposed converter. 
The experimental results reveals that the highest efficiency of the proposed converter is 94.42%, the full load efficiency is 92.7%, and the 10% load efficiency is 92.61%.", "title": "" }, { "docid": "neg:1840242_14", "text": "This work contributes several new elements to the quest for a biologically plausible implementation of backprop in brains. We introduce a very general and abstract framework for machine learning, in which the quantities of interest are defined implicitly through an energy function. In this framework, only one kind of neural computation is involved both for the first phase (when the prediction is made) and the second phase (after the target is revealed), like the contrastive Hebbian learning algorithm in the continuous Hopfield model for example. Contrary to automatic differentiation in computational graphs (i.e. standard backprop), there is no need for special computation in the second phase of our framework. One advantage of our framework over contrastive Hebbian learning is that the second phase corresponds to only nudging the first-phase fixed point towards a configuration that reduces prediction error. In the case of a multi-layer supervised neural network, the output units are slightly nudged towards their target, and the perturbation introduced at the output layer propagates backward in the network. The signal ’back-propagated’ during this second phase actually contains information about the error derivatives, which we use to implement a learning rule proved to perform gradient descent with respect to an objective function.", "title": "" }, { "docid": "neg:1840242_15", "text": "The Human Papillomavirus (HPV) E6 protein is one of three oncoproteins encoded by the virus. It has long been recognized as a potent oncogene and is intimately associated with the events that result in the malignant conversion of virally infected cells. In order to understand the mechanisms by which E6 contributes to the development of human malignancy many laboratories have focused their attention on identifying the cellular proteins with which E6 interacts. In this review we discuss these interactions in the light of their respective contributions to the malignant progression of HPV transformed cells.", "title": "" }, { "docid": "neg:1840242_16", "text": "The massive acceleration of the nitrogen cycle as a result of the production and industrial use of artificial nitrogen fertilizers worldwide has enabled humankind to greatly increase food production, but it has also led to a host of environmental problems, ranging from eutrophication of terrestrial and aquatic systems to global acidification. The findings of many national and international research programmes investigating the manifold consequences of human alteration of the nitrogen cycle have led to a much improved understanding of the scope of the anthropogenic nitrogen problem and possible strategies for managing it. Considerably less emphasis has been placed on the study of the interactions of nitrogen with the other major biogeochemical cycles, particularly that of carbon, and how these cycles interact with the climate system in the presence of the ever-increasing human intervention in the Earth system. 
With the release of carbon dioxide (CO2) from the burning of fossil fuels pushing the climate system into uncharted territory, which has major consequences for the functioning of the global carbon cycle, and with nitrogen having a crucial role in controlling key aspects of this cycle, questions about the nature and importance of nitrogen–carbon–climate interactions are becoming increasingly pressing. The central question is how the availability of nitrogen will affect the capacity of Earth’s biosphere to continue absorbing carbon from the atmosphere (see page 289), and hence continue to help in mitigating climate change. Addressing this and other open issues with regard to nitrogen–carbon–climate interactions requires an Earth-system perspective that investigates the dynamics of the nitrogen cycle in the context of a changing carbon cycle, a changing climate and changes in human actions.", "title": "" }, { "docid": "neg:1840242_17", "text": "Holistic driving scene understanding is a critical step toward intelligent transportation systems. It involves different levels of analysis, interpretation, reasoning and decision making. In this paper, we propose a 3D dynamic scene analysis framework as the first step toward driving scene understanding. Specifically, given a sequence of synchronized 2D and 3D sensory data, the framework systematically integrates different perception modules to obtain 3D position, orientation, velocity and category of traffic participants and the ego car in a reconstructed 3D semantically labeled traffic scene. We implement this framework and demonstrate the effectiveness in challenging urban driving scenarios. The proposed framework builds a foundation for higher level driving scene understanding problems such as intention and motion prediction of surrounding entities, ego motion planning, and decision making.", "title": "" }, { "docid": "neg:1840242_18", "text": "Protein catalysis requires the atomic-level orchestration of side chains, substrates and cofactors, and yet the ability to design a small-molecule-binding protein entirely from first principles with a precisely predetermined structure has not been demonstrated. Here we report the design of a novel protein, PS1, that binds a highly electron-deficient non-natural porphyrin at temperatures up to 100 °C. The high-resolution structure of holo-PS1 is in sub-Å agreement with the design. The structure of apo-PS1 retains the remote core packing of the holoprotein, with a flexible binding region that is predisposed to ligand binding with the desired geometry. Our results illustrate the unification of core packing and binding-site definition as a central principle of ligand-binding protein design.", "title": "" }, { "docid": "neg:1840242_19", "text": "We consider the problem of grasping novel objects in cluttered environments. If a full 3-d model of the scene were available, one could use the model to estimate the stability and robustness of different grasps (formalized as form/force-closure, etc); in practice, however, a robot facing a novel object will usually be able to perceive only the front (visible) faces of the object. In this paper, we propose an approach to grasping that estimates the stability of different grasps, given only noisy estimates of the shape of visible portions of an object, such as that obtained from a depth sensor. By combining this with a kinematic description of a robot arm and hand, our algorithm is able to compute a specific positioning of the robot’s fingers so as to grasp an object. 
We test our algorithm on two robots (with very different arms/manipulators, including one with a multi-fingered hand). We report results on the task of grasping objects of significantly different shapes and appearances than ones in the training set, both in highly cluttered and in uncluttered environments. We also apply our algorithm to the problem of unloading items from a dishwasher. Introduction We consider the problem of grasping novel objects, in the presence of significant amounts of clutter. A key challenge in this setting is that a full 3-d model of the scene is typically not available. Instead, a robot’s depth sensors can usually estimate only the shape of the visible portions of the scene. In this paper, we propose an algorithm that, given such partial models of the scene, selects a grasp—that is, a configuration of the robot’s arm and fingers—to try to pick up an object. If a full 3-d model (including the occluded portions of a scene) were available, then methods such as form- and force-closure (Mason and Salisbury 1985; Bicchi and Kumar 2000; Pollard 2004) and other grasp quality metrics (Pelossof et al. 2004; Hsiao, Kaelbling, and Lozano-Perez 2007; Ciocarlie, Goldfeder, and Allen 2007) can be used to try to find a good grasp. However, given only the point cloud returned by stereo vision or other depth sensors, a straightforward application of these ideas is impossible, since we do not have a model of the occluded portions of the scene. Figure 1: Image of an environment (left) and the 3-d pointcloud (right) returned by the Swissranger depth sensor. In detail, we will consider a robot that uses a camera, together with a depth sensor, to perceive a scene. The depth sensor returns a “point cloud,” corresponding to 3-d locations that it has found on the front unoccluded surfaces of the objects. (See Fig. 1.) Such point clouds are typically noisy (because of small errors in the depth estimates); but more importantly, they are also incomplete. (For example, standard stereo vision fails to return depth values for textureless portions of the object, so its point clouds are typically very sparse. Further, the Swissranger gives few points only because of its low spatial resolution of 144 × 176.) This work builds on Saxena et al. (2006a; 2006b; 2007; 2008) which applied supervised learning to identify visual properties that indicate good grasps, given a 2-d image of the scene. However, their algorithm only chose a 3-d “grasp point”—that is, the 3-d position (and 3-d orientation; Saxena et al. 2007) of the center of the end-effector. Thus, it did not generalize well to more complex arms and hands, such as to multi-fingered hands where one has to not only choose the 3d position (and orientation) of the hand, but also address the high dof problem of choosing the positions of all the fingers. Our approach begins by computing a number of features of grasp quality, using both 2-d image and the 3-d point cloud features. For example, the 3-d data is used to compute a number of grasp quality metrics, such as the degree to which the fingers are exerting forces normal to the surfaces of the object, and the degree to which they enclose the object. Using such features, we then apply a supervised learning algorithm to estimate the degree to which different configurations of the full arm and fingers reflect good grasps. We test our algorithm on two robots, on a variety of objects of shapes very different from ones in the training set, including a ski boot, a coil of wire, and a game controller.", "title": "" } ]
1840243
DCN+: Mixed Objective and Deep Residual Coattention for Question Answering
[ { "docid": "pos:1840243_0", "text": "A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling “where to look” or visual attention, it is equally important to model “what words to listen to” or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question (and consequently the image via the co-attention mechanism) in a hierarchical fashion via a novel 1-dimensional convolution neural networks (CNN). Our model improves the state-of-the-art on the VQA dataset from 60.3% to 60.5%, and from 61.6% to 63.3% on the COCO-QA dataset. By using ResNet, the performance is further improved to 62.1% for VQA and 65.4% for COCO-QA. 1 Introduction Visual Question Answering (VQA) [2, 7, 16, 17, 29] has emerged as a prominent multi-discipline research problem in both academia and industry. To correctly answer visual questions about an image, the machine needs to understand both the image and question. Recently, visual attention based models [20, 23–25] have been explored for VQA, where the attention mechanism typically produces a spatial map highlighting image regions relevant to answering the question. So far, all attention models for VQA in literature have focused on the problem of identifying “where to look” or visual attention. In this paper, we argue that the problem of identifying “which words to listen to” or question attention is equally important. Consider the questions “how many horses are in this image?” and “how many horses can you see in this image?”. They have the same meaning, essentially captured by the first three words. A machine that attends to the first three words would arguably be more robust to linguistic variations irrelevant to the meaning and answer of the question. Motivated by this observation, in addition to reasoning about visual attention, we also address the problem of question attention. Specifically, we present a novel multi-modal attention model for VQA with the following two unique features: Co-Attention: We propose a novel mechanism that jointly reasons about visual attention and question attention, which we refer to as co-attention. Unlike previous works, which only focus on visual attention, our model has a natural symmetry between the image and question, in the sense that the image representation is used to guide the question attention and the question representation(s) are used to guide image attention. Question Hierarchy: We build a hierarchical architecture that co-attends to the image and question at three levels: (a) word level, (b) phrase level and (c) question level. At the word level, we embed the words to a vector space through an embedding matrix. At the phrase level, 1-dimensional convolution neural networks are used to capture the information contained in unigrams, bigrams and trigrams. The source code can be downloaded from https://github.com/jiasenlu/HieCoAttenVQA.
Figure 1: Flowchart of our proposed hierarchical co-attention model. Given a question, we extract its word level, phrase level and question level embeddings. At each level, we apply co-attention on both the image and question. The final answer prediction is based on all the co-attended image and question features. Specifically, we convolve word representations with temporal filters of varying support, and then combine the various n-gram responses by pooling them into a single phrase level representation. At the question level, we use recurrent neural networks to encode the entire question. For each level of the question representation in this hierarchy, we construct joint question and image co-attention maps, which are then combined recursively to ultimately predict a distribution over the answers. Overall, the main contributions of our work are: • We propose a novel co-attention mechanism for VQA that jointly performs question-guided visual attention and image-guided question attention. We explore this mechanism with two strategies, parallel and alternating co-attention, which are described in Sec. 3.3; • We propose a hierarchical architecture to represent the question, and consequently construct image-question co-attention maps at 3 different levels: word level, phrase level and question level. These co-attended features are then recursively combined from word level to question level for the final answer prediction; • At the phrase level, we propose a novel convolution-pooling strategy to adaptively select the phrase sizes whose representations are passed to the question level representation; • Finally, we evaluate our proposed model on two large datasets, VQA [2] and COCO-QA [17]. We also perform ablation studies to quantify the roles of different components in our model.", "title": "" } ]
[ { "docid": "neg:1840243_0", "text": "A network is said to show assortative mixing if the nodes in the network that have many connections tend to be connected to other nodes with many connections. Here we measure mixing patterns in a variety of networks and find that social networks are mostly assortatively mixed, but that technological and biological networks tend to be disassortative. We propose a model of an assortatively mixed network, which we study both analytically and numerically. Within this model we find that networks percolate more easily if they are assortative and that they are also more robust to vertex removal.", "title": "" }, { "docid": "neg:1840243_1", "text": "Sophorolipids are biosurfactants belonging to the class of the glycolipid, produced mainly by the osmophilic yeast Candida bombicola. Structurally they are composed by a disaccharide sophorose (2’-O-β-D-glucopyranosyl-β-D-glycopyranose) which is linked β -glycosidically to a long fatty acid chain with generally 16 to 18 atoms of carbon with one or more unsaturation. They are produced as a complex mix containing up to 40 molecules and associated isomers, depending on the species which produces it, the substrate used and the culture conditions. They present properties which are very similar or superior to the synthetic surfactants and other biosurfactants with the advantage of presenting low toxicity, higher biodegradability, better environmental compatibility, high selectivity and specific activity in a broad range of temperature, pH and salinity conditions. Its biological activities are directly related with its chemical structure. Sophorolipids possess a great potential for application in areas such as: food; bioremediation; cosmetics; pharmaceutical; biomedicine; nanotechnology and enhanced oil recovery.", "title": "" }, { "docid": "neg:1840243_2", "text": "A novel soft-robotic gripper design is presented, with three soft bending fingers and one passively adaptive palm. Each soft finger comprises two ellipse-profiled pneumatic chambers. Combined with the adaptive palm and the surface patterned feature, the soft gripper could achieve 40-N grasping force in practice, 10 times the self-weight, at a very low actuation pressure below 100 kPa. With novel soft finger design, the gripper could pick up small objects, as well as conform to large convex-shape objects with reliable contact. The fabrication process was presented in detail, involving commercial-grade three-dimensional printing and molding of silicone rubber. The fabricated actuators and gripper were tested on a dedicated platform, showing the gripper could reliably grasp objects of various shapes and sizes, even with external disturbances.", "title": "" }, { "docid": "neg:1840243_3", "text": "Unsupervised machine translation—i.e., not assuming any cross-lingual supervision signal, whether a dictionary, translations, or comparable corpora—seems impossible, but nevertheless, Lample et al. (2018a) recently proposed a fully unsupervised machine translation (MT) model. The model relies heavily on an adversarial, unsupervised alignment of word embedding spaces for bilingual dictionary induction (Conneau et al., 2018), which we examine here. Our results identify the limitations of current unsupervised MT: unsupervised bilingual dictionary induction performs much worse on morphologically rich languages that are not dependent marking, when monolingual corpora from different domains or different embedding algorithms are used. 
We show that a simple trick, exploiting a weak supervision signal from identical words, enables more robust induction, and establish a near-perfect correlation between unsupervised bilingual dictionary induction performance and a previously unexplored graph similarity metric.", "title": "" }, { "docid": "neg:1840243_4", "text": "Autonomous vehicles platooning has received considerable attention in recent years, due to its potential to significantly benefit road transportation, improving traffic efficiency, enhancing road safety and reducing fuel consumption. The Vehicular ad hoc Networks and the de facto vehicular networking standard IEEE 802.11p communication protocol are key tools for the deployment of platooning applications, since the cooperation among vehicles is based on a reliable communication structure. However, vehicular networks can suffer different security threats. Indeed, in collaborative driving applications, the sudden appearance of a malicious attack can mainly compromise: (i) the correctness of data traffic flow on the vehicular network by sending malicious messages that alter the platoon formation and its coordinated motion; (ii) the safety of platooning application by altering vehicular network communication capability. In view of the fact that cyber attacks can lead to dangerous implications for the security of autonomous driving systems, it is fundamental to consider their effects on the behavior of the interconnected vehicles, and to try to limit them from the control design stage. To this aim, in this work we focus on some relevant types of malicious threats that affect the platoon safety, i.e. application layer attacks (Spoofing and Message Falsification) and network layer attacks (Denial of Service and Burst Transmission), and we propose a novel collaborative control strategy for enhancing the protection level of autonomous platoons. The control protocol is designed and validated in both analytically and experimental way, for the appraised malicious attack scenarios and for different communication topology structures. The effectiveness of the proposed strategy is shown by using PLEXE, a state of the art inter-vehicular communications and mobility simulator that includes basic building blocks for platooning. A detailed experimental analysis discloses the robustness of the proposed approach and its capabilities in reacting to the malicious attack effects.", "title": "" }, { "docid": "neg:1840243_5", "text": "General human action recognition requires understanding of various visual cues. In this paper, we propose a network architecture that computes and integrates the most important visual cues for action recognition: pose, motion, and the raw images. For the integration, we introduce a Markov chain model which adds cues successively. The resulting approach is efficient and applicable to action classification as well as to spatial and temporal action localization. The two contributions clearly improve the performance over respective baselines. The overall approach achieves state-of-the-art action classification performance on HMDB51, J-HMDB and NTU RGB+D datasets. Moreover, it yields state-of-the-art spatio-temporal action localization results on UCF101 and J-HMDB.", "title": "" }, { "docid": "neg:1840243_6", "text": "Optical Character Recognition (OCR) systems often generate errors for images with noise or with low scanning resolution. 
In this paper, a novel approach is presented that can be used to improve and restore the quality of clean low-resolution images for easy recognition by the OCR process. The method relies on the production of four copies of the original image so that each picture undergoes different restoration processes. These four copies of the images are then passed to a single OCR engine in parallel. In addition to that, the method does not need any traditional alignment between the four resulting texts, which is time consuming and needs complex calculation. It implements a new procedure to choose the best among them and can be applied without prior training on errors. The experimental results show improvement in word error rate for low resolution images by more than 67%.", "title": "" }, { "docid": "neg:1840243_7", "text": "Twitter provides search services to help people find new users to follow by recommending popular users or their friends' friends. However, these services do not offer the most relevant users to follow for a user. Furthermore, Twitter does not yet provide search services to find the most interesting tweet messages for a user either. In this paper, we propose TWITOBI, a recommendation system for Twitter using probabilistic modeling for collaborative filtering which can recommend top-K users to follow and top-K tweets to read for a user. Our novel probabilistic model utilizes not only tweet messages but also the relationships between users. We develop an estimation algorithm for learning our model parameters and present its parallelized algorithm using MapReduce to handle large data. Our performance study with real-life data sets confirms the effectiveness and scalability of our algorithms.", "title": "" }, { "docid": "neg:1840243_8", "text": "BACKGROUND\nSystematic reviews are most helpful if they are up-to-date. We did a systematic review of strategies and methods describing when and how to update systematic reviews.\n\n\nOBJECTIVES\nTo identify, describe and assess strategies and methods addressing: 1) when to update systematic reviews and 2) how to update systematic reviews.\n\n\nSEARCH STRATEGY\nWe searched MEDLINE (1966 to December 2005), PsycINFO, the Cochrane Methodology Register (Issue 1, 2006), and hand searched the 2005 Cochrane Colloquium proceedings.\n\n\nSELECTION CRITERIA\nWe included methodology reports, updated systematic reviews, commentaries, editorials, or other short reports describing the development, use, or comparison of strategies and methods for determining the need for updating or updating systematic reviews in healthcare.\n\n\nDATA COLLECTION AND ANALYSIS\nWe abstracted information from each included report using a 15-item questionnaire. The strategies and methods for updating systematic reviews were assessed and compared descriptively with respect to their usefulness, comprehensiveness, advantages, and disadvantages.\n\n\nMAIN RESULTS\nFour updating strategies, one technique, and two statistical methods were identified. Three strategies addressed steps for updating and one strategy presented a model for assessing the need to update. One technique discussed the use of the \"entry date\" field in bibliographic searching. Statistical methods were cumulative meta-analysis and predicting when meta-analyses are outdated.\n\n\nAUTHORS' CONCLUSIONS\nLittle research has been conducted on when and how to update systematic reviews and the feasibility and efficiency of the identified approaches is uncertain.
These shortcomings should be addressed in future research.", "title": "" }, { "docid": "neg:1840243_9", "text": "Social media have become an established feature of the dynamic information space that emerges during crisis events. Both emergency responders and the public use these platforms to search for, disseminate, challenge, and make sense of information during crises. In these situations rumors also proliferate, but just how fast such information can spread is an open question. We address this gap, modeling the speed of information transmission to compare retransmission times across content and context features. We specifically contrast rumor-affirming messages with rumor-correcting messages on Twitter during a notable hostage crisis to reveal differences in transmission speed. Our work has important implications for the growing field of crisis informatics.", "title": "" }, { "docid": "neg:1840243_10", "text": "The security of computer systems fundamentally relies on memory isolation, e.g., kernel address ranges are marked as non-accessible and are protected from user access. In this paper, we present Meltdown. Meltdown exploits side effects of out-of-order execution on modern processors to read arbitrary kernel-memory locations including personal data and passwords. Out-of-order execution is an indispensable performance feature and present in a wide range of modern processors. The attack is independent of the operating system, and it does not rely on any software vulnerabilities. Meltdown breaks all security guarantees provided by address space isolation as well as paravirtualized environments and, thus, every security mechanism building upon this foundation. On affected systems, Meltdown enables an adversary to read memory of other processes or virtual machines in the cloud without any permissions or privileges, affecting millions of customers and virtually every user of a personal computer. We show that the KAISER defense mechanism for KASLR has the important (but inadvertent) side effect of impeding Meltdown. We stress that KAISER must be deployed immediately to prevent largescale exploitation of this severe information leakage.", "title": "" }, { "docid": "neg:1840243_11", "text": "Social media such as those residing in the popular photo sharing websites is attracting increasing attention in recent years. As a type of user-generated data, wisdom of the crowd is embedded inside such social media. In particular, millions of users upload to Flickr their photos, many associated with temporal and geographical information. In this paper, we investigate how to rank the trajectory patterns mined from the uploaded photos with geotags and timestamps. The main objective is to reveal the collective wisdom recorded in the seemingly isolated photos and the individual travel sequences reflected by the geo-tagged photos. Instead of focusing on mining frequent trajectory patterns from geo-tagged social media, we put more effort into ranking the mined trajectory patterns and diversifying the ranking results. Through leveraging the relationships among users, locations and trajectories, we rank the trajectory patterns. We then use an exemplar-based algorithm to diversify the results in order to discover the representative trajectory patterns. 
We have evaluated the proposed framework on 12 different cities using a Flickr dataset and demonstrated its effectiveness.", "title": "" }, { "docid": "neg:1840243_12", "text": "The effect of the steel slag aggregate aging on mechanical properties of the high performance concrete is analysed in the paper. The effect of different aging periods of steel slag aggregate on mechanical properties of high performance concrete is studied. It was observed that properties of this concrete are affected by the steel slag aggregate aging process. The compressive strength increases with an increase in the aging period of steel slag aggregate. The flexural strength, Young’s modulus, and impact strength of concrete, increase at the rate similar to that of the compressive strength. The workability and the abrasion loss of concrete decrease with an increase of the steel slag aggregate aging period.", "title": "" }, { "docid": "neg:1840243_13", "text": "Due to poor efficiencies of Incandescent Lamps (ILs), Fluorescent Lamps (FLs) and Compact Fluorescent Lamps (CFLs) are increasingly used in residential and commercial applications. This proliferation of FLs and CFLs increases the harmonics level in distribution systems that could affect power systems and end users. In order to quantify the harmonics produced by FLs and CFLs precisely, accurate modelling of these loads are required. Matlab Simulink is used to model and simulate the full models of FLs and CFLs to give close results to the experimental measurements. Moreover, a Constant Load Power (CLP) model is also modelled and its results are compared with the full models of FLs and CFLs. This CLP model is much faster to simulate and easier to model than the full model. Such models help engineers and researchers to evaluate the harmonics exist within households and commercial buildings.", "title": "" }, { "docid": "neg:1840243_14", "text": "Knowledge about entities is essential for natural language understanding. This knowledge includes several facts about entities such as their names, properties, relations and types. This data is usually stored in large scale structures called knowledge bases (KB) and therefore building and maintaining KBs is very important. Examples of such KBs are Wikipedia, Freebase and Google knowledge graph. Incompleteness is unfortunately a reality for every KB, because the world is changing – new entities are emerging, and existing entities are getting new properties. Therefore, we always need to update KBs. To do so, we propose an information extraction method that processes large raw corpora in order to gather knowledge about entities. We focus on extraction of entity types and address the task of fine-grained entity typing: given a KB and a large corpus of text with mentions of entities in the KB, find all fine-grained types of the entities. For example given a large corpus and the entity “Barack Obama” we need to find all his types including PERSON, POLITICIAN, and AUTHOR. Artificial neural networks (NNs) have shown promising results in different machine learning problems. Distributed representation (embedding) is an effective way of representing data for NNs. In this work, we introduce two models for fine-grained entity typing using NNs with distributed representations of language units: (i) A global model that predicts types of an entity based on its global representation learned from the entity’s name and contexts. (ii) A context model that predicts types of an entity based on its context-level predictions. 
Each of the two proposed models has some specific properties. For the global model, learning high quality entity representations is crucial because it is the only source used for the predictions. Therefore, we introduce representations using name and contexts of entities on three levels of entity, word, and character. We show each has complementary information and a multi-level representation is the best. For the context model, we need to use distant supervision since the contextlevel labels are not available for entities. Distant supervised labels are noisy and this harms the performance of models. Therefore, we introduce and apply new algorithms for noise mitigation using multi-instance learning.", "title": "" }, { "docid": "neg:1840243_15", "text": "Creating mechanical automata that can walk in stable and pleasing manners is a challenging task that requires both skill and expertise. We propose to use computational design to offset the technical difficulties of this process. A simple drag-and-drop interface allows casual users to create personalized walking toys from a library of pre-defined template mechanisms. Provided with this input, our method leverages physical simulation and evolutionary optimization to refine the mechanical designs such that the resulting toys are able to walk. The optimization process is guided by an intuitive set of objectives that measure the quality of the walking motions. We demonstrate our approach on a set of simulated mechanical toys with different numbers of legs and various distinct gaits. Two fabricated prototypes showcase the feasibility of our designs.", "title": "" }, { "docid": "neg:1840243_16", "text": "A 70 mm-open-ended coaxial line probe was developed to perform measurements of the dielectric properties of large concrete samples. The complex permittivity was measured in the frequency range 50 MHz – 1.5 GHz during the hardening process of the concrete. As expected, strong dependence of water content was observed.", "title": "" }, { "docid": "neg:1840243_17", "text": "The authors present a new approach to culture and cognition, which focuses on the dynamics through which specific pieces of cultural knowledge (implicit theories) become operative in guiding the construction of meaning from a stimulus. Whether a construct comes to the fore in a perceiver's mind depends on the extent to which the construct is highly accessible (because of recent exposure). In a series of cognitive priming experiments, the authors simulated the experience of bicultural individuals (people who have internalized two cultures) of switching between different cultural frames in response to culturally laden symbols. The authors discuss how this dynamic, constructivist approach illuminates (a) when cultural constructs are potent drivers of behavior and (b) how bicultural individuals may control the cognitive effects of culture.", "title": "" }, { "docid": "neg:1840243_18", "text": "One of the technologies that has been showing possibilities of application in educational environments is the Augmented Reality (AR), in addition to its application to other fields such as tourism, advertising, video games, among others. The present article shows the results of an experiment carried out at the National University of Colombia, with the design and construction of augmented learning objects for the seventh and eighth grades of secondary education, which were tested and evaluated by students of a school in the department of Caldas. 
The study confirms the potential of this technology to support educational processes represented in the creation of digital resources for mobile devices. The development of learning objects in AR for mobile devices can support teachers in the integration of information and communication technologies (ICT) in the teaching-learning processes.", "title": "" } ]
1840244
Security Improvement for Management Frames in IEEE 802.11 Wireless Networks
[ { "docid": "pos:1840244_0", "text": "The convenience of 802.11-based wireless access networks has led to widespread deployment in the consumer, industrial and military sectors. However, this use is predicated on an implicit assumption of confidentiality and availability. While the security flaws in 802.11’s basic confidentiality mechanisms have been widely publicized, the threats to network availability are far less widely appreciated. In fact, it has been suggested that 802.11 is highly susceptible to malicious denial-of-service (DoS) attacks targeting its management and media access protocols. This paper provides an experimental analysis of such 802.11-specific attacks – their practicality, their efficacy and potential low-overhead implementation changes to mitigate the underlying vulnerabilities.", "title": "" } ]
[ { "docid": "neg:1840244_0", "text": "We consider the problem of automatically estimating the 3D pose of humans from images, taken from multiple calibrated views. We show that it is possible and tractable to extend the pictorial structures framework, popular for 2D pose estimation, to 3D. We discuss how to use this framework to impose view, skeleton, joint angle and intersection constraints in 3D. The 3D pictorial structures are evaluated on multiple view data from a professional football game. The evaluation is focused on computational tractability, but we also demonstrate how a simple 2D part detector can be plugged into the framework.", "title": "" }, { "docid": "neg:1840244_1", "text": "Cheap and versatile cameras make it possible to easily and quickly capture a wide variety of documents. However, low resolution cameras present a challenge to OCR because it is virtually impossible to do character segmentation independently from recognition. In this paper we solve these problems simultaneously by applying methods borrowed from cursive handwriting recognition. To achieve maximum robustness, we use a machine learning approach based on a convolutional neural network. When our system is combined with a language model using dynamic programming, the overall performance is in the vicinity of 80-95% word accuracy on pages captured with a 1024/spl times/768 webcam and 10-point text.", "title": "" }, { "docid": "neg:1840244_2", "text": "This paper presents a non-photorealistic rendering technique that automatically generates a line drawing from a photograph. We aim at extracting a set of coherent, smooth, and stylistic lines that effectively capture and convey important shapes in the image. We first develop a novel method for constructing a smooth direction field that preserves the flow of the salient image features. We then introduce the notion of flow-guided anisotropic filtering for detecting highly coherent lines while suppressing noise. Our method is simple and easy to implement. A variety of experimental results are presented to show the effectiveness of our method in producing self-contained, high-quality line illustrations.", "title": "" }, { "docid": "neg:1840244_3", "text": "We consider the use of multi-agent systems to control network routing. Conventional approaches to this task are based on Ideal Shortest Path routing Algorithm (ISPA), under which at each moment each agent in the network sends all of its traffic down the path that will incur the lowest cost to that traffic. We demonstrate in computer experiments that due to the side-effects of one agent’s actions on another agent’s traffic, use of ISPA’s can result in large global cost. In particular, in a simulation of Braess’ paradox we see that adding new capacity to a network with ISPA agents can decrease overall throughput. The theory of COllective INtelligence (COIN) design concerns precisely the issue of avoiding such side-effects. We use that theory to derive an idealized routing algorithm and show that a practical machine-learning-based version of this algorithm, in which costs are only imprecisely estimated substantially outperforms the ISPA, despite having access to less information than does the ISPA. In particular, this practical COIN algorithm avoids Braess’ paradox.", "title": "" }, { "docid": "neg:1840244_4", "text": "BACKGROUND\nAn accurate risk stratification tool is critical in identifying patients who are at high risk of frequent hospital readmissions. 
While 30-day hospital readmissions have been widely studied, there is increasing interest in identifying potential high-cost users or frequent hospital admitters. In this study, we aimed to derive and validate a risk stratification tool to predict frequent hospital admitters.\n\n\nMETHODS\nWe conducted a retrospective cohort study using the readily available clinical and administrative data from the electronic health records of a tertiary hospital in Singapore. The primary outcome was chosen as three or more inpatient readmissions within 12 months of index discharge. We used univariable and multivariable logistic regression models to build a frequent hospital admission risk score (FAM-FACE-SG) by incorporating demographics, indicators of socioeconomic status, prior healthcare utilization, markers of acute illness burden and markers of chronic illness burden. We further validated the risk score on a separate dataset and compared its performance with the LACE index using the receiver operating characteristic analysis.\n\n\nRESULTS\nOur study included 25,244 patients, with 70% randomly selected patients for risk score derivation and the remaining 30% for validation. Overall, 4,322 patients (17.1%) met the outcome. The final FAM-FACE-SG score consisted of nine components: Furosemide (Intravenous 40 mg and above during index admission); Admissions in past one year; Medifund (Required financial assistance); Frequent emergency department (ED) use (≥3 ED visits in 6 months before index admission); Anti-depressants in past one year; Charlson comorbidity index; End Stage Renal Failure on Dialysis; Subsidized ward stay; and Geriatric patient or not. In the experiments, the FAM-FACE-SG score had good discriminative ability with an area under the curve (AUC) of 0.839 (95% confidence interval [CI]: 0.825-0.853) for risk prediction of frequent hospital admission. In comparison, the LACE index only achieved an AUC of 0.761 (0.745-0.777).\n\n\nCONCLUSIONS\nThe FAM-FACE-SG score shows strong potential for implementation to provide near real-time prediction of frequent admissions. It may serve as the first step to identify high risk patients to receive resource intensive interventions.", "title": "" }, { "docid": "neg:1840244_5", "text": "Arithmetic coding is a data compression technique that encodes data (the data string) by creating a code string which represents a fractional value on the number line between 0 and 1. The coding algorithm is symbolwise recursive; i.e., it operates upon and encodes (decodes) one data symbol per iteration or recursion. On each recursion, the algorithm successively partitions an interval of the number line between 0 and 1, and retains one of the partitions as the new interval. Thus, the algorithm successively deals with smaller intervals, and the code string, viewed as a magnitude, lies in each of the nested intervals. The data string is recovered by using magnitude comparisons on the code string to recreate how the encoder must have successively partitioned and retained each nested subinterval. Arithmetic coding differs considerably from the more familiar compression coding techniques, such as prefix (Huffman) codes. Also, it should not be confused with error control coding, whose object is to detect and correct errors in computer operations. 
This paper presents the key notions of arithmetic compression coding by means of simple examples.", "title": "" }, { "docid": "neg:1840244_6", "text": "Combining information extraction systems yields significantly higher quality resources than each system in isolation. In this paper, we generalize such a mixing of sources and features in a framework called Ensemble Semantics. We show very large gains in entity extraction by combining state-of-the-art distributional and patternbased systems with a large set of features from a webcrawl, query logs, and Wikipedia. Experimental results on a webscale extraction of actors, athletes and musicians show significantly higher mean average precision scores (29% gain) compared with the current state of the art.", "title": "" }, { "docid": "neg:1840244_7", "text": "Synesthesia is a conscious experience of systematically induced sensory attributes that are not experienced by most people under comparable conditions. Recent findings from cognitive psychology, functional brain imaging and electrophysiology have shed considerable light on the nature of synesthesia and its neurocognitive underpinnings. These cognitive and physiological findings are discussed with respect to a neuroanatomical framework comprising hierarchically organized cortical sensory pathways. We advance a neurobiological theory of synesthesia that fits within this neuroanatomical framework.", "title": "" }, { "docid": "neg:1840244_8", "text": "Our main objective in this paper is to measure the value of customers acquired from Google search advertising accounting for two factors that have been overlooked in the conventional method widely adopted in the industry: the spillover effect of search advertising on customer acquisition and sales in offline channels and the lifetime value of acquired customers. By merging web traffic and sales data from a small-sized U.S. firm, we create an individual customer level panel which tracks all repeated purchases, both online and offline, and whether or not these purchases were referred from Google search advertising. To estimate the customer lifetime value, we apply the methodology in the CRM literature by developing an integrated model of customer lifetime, transaction rate, and gross profit margin, allowing for individual heterogeneity and a full correlation of the three processes. Results show that customers acquired through Google search advertising in our data have a higher transaction rate than customers acquired from other channels. After accounting for future purchases and spillover to offline channels, the calculated value of new customers is much higher than that when we use the conventional method. The approach used in our study provides a practical framework for firms to evaluate the long term profit impact of their search advertising investment in a multichannel setting.", "title": "" }, { "docid": "neg:1840244_9", "text": "The rapid pace of technological advances in recentyears has enabled a significant evolution and deployment ofWireless Sensor Networks (WSN). These networks are a keyplayer in the so-called Internet of Things, generating exponentiallyincreasing amounts of data. Nonetheless, there are veryfew documented works that tackle the challenges related with thecollection, manipulation and exploitation of the data generated bythese networks. 
This paper presents a proposal for integrating BigData tools (in rest and in motion) for the gathering, storage and analysis of data generated by a WSN that monitors air pollution levels in a city. We provide a proof of concept that combines Hadoop and Storm for data processing, storage and analysis, and Arduino-based kits for constructing our sensor prototypes.", "title": "" }, { "docid": "neg:1840244_10", "text": "A self-rating scale was developed to measure the severity of fatigue. Two-hundred and seventy-four new registrations on a general practice list completed a 14-item fatigue scale. In addition, 100 consecutive attenders to a general practice completed the fatigue scale and the fatigue item of the revised Clinical Interview Schedule (CIS-R). These were compared by the application of Relative Operating Characteristic (ROC) analysis. Tests of internal consistency and principal components analyses were performed on both sets of data. The scale was found to be both reliable and valid. There was a high degree of internal consistency, and the principal components analysis supported the notion of a two-factor solution (physical and mental fatigue). The validation coefficients for the fatigue scale, using an arbitrary cut off score of 3/4 and the item on the CIS-R were: sensitivity 75.5 and specificity 74.5.", "title": "" }, { "docid": "neg:1840244_11", "text": "In machine learning, data augmentation is the process of creating synthetic examples in order to augment a dataset used to learn a model. One motivation for data augmentation is to reduce the variance of a classifier, thereby reducing error. In this paper, we propose new data augmentation techniques specifically designed for time series classification, where the space in which they are embedded is induced by Dynamic Time Warping (DTW). The main idea of our approach is to average a set of time series and use the average time series as a new synthetic example. The proposed methods rely on an extension of DTW Barycentric Averaging (DBA), the averaging technique that is specifically developed for DTW. In this paper, we extend DBA to be able to calculate a weighted average of time series under DTW. In this case, instead of each time series contributing equally to the final average, some can contribute more than others. This extension allows us to generate an infinite number of new examples from any set of given time series. To this end, we propose three methods that choose the weights associated to the time series of the dataset. We carry out experiments on the 85 datasets of the UCR archive and demonstrate that our method is particularly useful when the number of available examples is limited (e.g. 2 to 6 examples per class) using a 1-NN DTW classifier. Furthermore, we show that augmenting full datasets is beneficial in most cases, as we observed an increase of accuracy on 56 datasets, no effect on 7 and a slight decrease on only 22.", "title": "" }, { "docid": "neg:1840244_12", "text": "Mind-body interventions are beneficial in stress-related mental and physical disorders. Current research is finding associations between emotional disorders and vagal tone as indicated by heart rate variability. A neurophysiologic model of yogic breathing proposes to integrate research on yoga with polyvagal theory, vagal stimulation, hyperventilation, and clinical observations. Yogic breathing is a unique method for balancing the autonomic nervous system and influencing psychologic and stress-related disorders. 
Many studies demonstrate effects of yogic breathing on brain function and physiologic parameters, but the mechanisms have not been clarified. Sudarshan Kriya yoga (SKY), a sequence of specific breathing techniques (ujjayi, bhastrika, and Sudarshan Kriya) can alleviate anxiety, depression, everyday stress, post-traumatic stress, and stress-related medical illnesses. Mechanisms contributing to a state of calm alertness include increased parasympathetic drive, calming of stress response systems, neuroendocrine release of hormones, and thalamic generators. This model has heuristic value, research implications, and clinical applications.", "title": "" }, { "docid": "neg:1840244_13", "text": "Gamification is a growing phenomenon of interest to both practitioners and researchers. There remains, however, uncertainty about the contours of the field. Defining gamification as “the process of making activities more game-like” focuses on the crucial space between the components that make up games and the holistic experience of gamefulness. It better fits real-world examples and connects gamification with the literature on persuasive design.", "title": "" }, { "docid": "neg:1840244_14", "text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.", "title": "" }, { "docid": "neg:1840244_15", "text": "Objective: Mastitis is one of the most costly diseases in dairy cows, which greatly decreases milk production. Use of antibiotics in cattle leads to antibiotic-resistance of mastitis-causing bacteria. The present study aimed to investigate synergistic effect of silver nanoparticles (AgNPs) with neomycin or gentamicin antibiotic on mastitis-causing Staphylococcus aureus. Materials and Methods: In this study, 46 samples of milk were taken from the cows with clinical and subclinical mastitis during the august-October 2015 sampling period. In addition to biochemical tests, nuc gene amplification by PCR was used to identify strains of Staphylococcus aureus. 
Disk diffusion test and microdilution were performed to determine minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC). Fractional Inhibitory Concentration (FIC) index was calculated to determine the interaction between a combination of AgNPs and each one of the antibiotics. Results: Twenty strains of Staphylococcus aureus were isolated from 46 milk samples and were confirmed by PCR. Based on disk diffusion test, 35%, 10% and 55% of the strains were respectively susceptible, moderately susceptible and resistant to gentamicin. In addition, 35%, 15% and 50% of the strains were respectively susceptible, moderately susceptible and resistant to neomycin. According to FIC index, gentamicin antibiotic and AgNPs had synergistic effects in 50% of the strains. Furthermore, neomycin antibiotic and AgNPs had synergistic effects in 45% of the strains. Conclusion: It could be concluded that a combination of AgNPs with either gentamicin or neomycin showed synergistic antibacterial properties in Staphylococcus aureus isolates from mastitis. In addition, some hypotheses were proposed to explain antimicrobial mechanism of the combination.", "title": "" }, { "docid": "neg:1840244_16", "text": "OBJECTIVE\nMaiming and death due to dog bites are uncommon but preventable tragedies. We postulated that patients admitted to a level I trauma center with dog bites would have severe injuries and that the gravest injuries would be those caused by pit bulls.\n\n\nDESIGN\nWe reviewed the medical records of patients admitted to our level I trauma center with dog bites during a 15-year period. We determined the demographic characteristics of the patients, their outcomes, and the breed and characteristics of the dogs that caused the injuries.\n\n\nRESULTS\nOur Trauma and Emergency Surgery Services treated 228 patients with dog bite injuries; for 82 of those patients, the breed of dog involved was recorded (29 were injured by pit bulls). Compared with attacks by other breeds of dogs, attacks by pit bulls were associated with a higher median Injury Severity Scale score (4 vs. 1; P = 0.002), a higher risk of an admission Glasgow Coma Scale score of 8 or lower (17.2% vs. 0%; P = 0.006), higher median hospital charges ($10,500 vs. $7200; P = 0.003), and a higher risk of death (10.3% vs. 0%; P = 0.041).\n\n\nCONCLUSIONS\nAttacks by pit bulls are associated with higher morbidity rates, higher hospital charges, and a higher risk of death than are attacks by other breeds of dogs. Strict regulation of pit bulls may substantially reduce the US mortality rates related to dog bites.", "title": "" }, { "docid": "neg:1840244_17", "text": "Attempts to train a comprehensive artificial intelligence capable of solving multiple tasks have been impeded by a chronic problem called catastrophic forgetting. Although simply replaying all previous data alleviates the problem, it requires large memory and even worse, often infeasible in real world applications where the access to past data is limited. Inspired by the generative nature of the hippocampus as a short-term memory system in primate brain, we propose the Deep Generative Replay, a novel framework with a cooperative dual model architecture consisting of a deep generative model (“generator”) and a task solving model (“solver”). With only these two models, training data for previous tasks can easily be sampled and interleaved with those for a new task. 
We test our methods in several sequential learning settings involving image classification tasks.", "title": "" }, { "docid": "neg:1840244_18", "text": "We examine the relationship between scholarly practice and participatory technologies and explore how such technologies invite and reflect the emergence of a new form of scholarship that we call Networked Participatory Scholarship: scholars’ participation in online social networks to share, reflect upon, critique, improve, validate, and otherwise develop their scholarship. We discuss emergent techno-cultural pressures that may influence higher education scholars to reconsider some of the foundational principles upon which scholarship has been established due to the limitations of a pre-digital world, and delineate how scholarship itself is changing with the emergence of certain tools, social behaviors, and cultural expectations associated with participatory technologies. 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "neg:1840244_19", "text": "According to the website AcronymFinder.com which is one of the world's largest and most comprehensive dictionaries of acronyms, an average of 37 new human-edited acronym definitions are added every day. There are 379,918 acronyms with 4,766,899 definitions on that site up to now, and each acronym has 12.5 definitions on average. It is a very important research topic to identify what exactly an acronym means in a given context for document comprehension as well as for document retrieval. In this paper, we propose two word embedding based models for acronym disambiguation. Word embedding is to represent words in a continuous and multidimensional vector space, so that it is easy to calculate the semantic similarity between words by calculating the vector distance. We evaluate the models on MSH Dataset and ScienceWISE Dataset, and both models outperform the state-of-art methods on accuracy. The experimental results show that word embedding helps to improve acronym disambiguation.", "title": "" } ]
1840245
Understanding compliance with internet use policy from the perspective of rational choice theory
[ { "docid": "pos:1840245_0", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" } ]
[ { "docid": "neg:1840245_0", "text": "Deep learning recently shows strong competitiveness to improve polar code decoding. However, suffering from prohibitive training and computation complexity, the conventional deep neural network (DNN) is only possible for very short code length. In this paper, the main problems of deep learning in decoding are well solved. We first present the multiple scaled belief propagation (BP) algorithm, aiming at obtaining faster convergence and better performance. Based on this, deep neural network decoder (NND) with low complexity and latency, is proposed for any code length. The training only requires a small set of zero codewords. Besides, its computation complexity is close to the original BP. Experiment results show that the proposed (64,32) NND with 5 iterations achieves even lower bit error rate (BER) than the 30-iteration conventional BP and (512, 256) NND also outperforms conventional BP decoder with same iterations. The hardware architecture of basic computation block is given and folding technique is also considered, saving about 50% hardware cost.", "title": "" }, { "docid": "neg:1840245_1", "text": "This paper presents a high SNR self-capacitance sensing 3D hover sensor that does not use panel offset cancelation blocks. Not only reducing noise components, but increasing the signal components together, this paper achieved a high SNR performance while consuming very low power and die-area. Thanks to the proposed separated structure between driving and sensing circuits of the self-capacitance sensing scheme (SCSS), the signal components are increased without using high-voltage MOS sensing amplifiers which consume big die-area and power and badly degrade SNR. In addition, since a huge panel offset problem in SCSS is solved exploiting the panel's natural characteristics, other costly resources are not required. Furthermore, display noise and parasitic capacitance mismatch errors are compressed. We demonstrate a 39dB SNR at a 1cm hover point under 240Hz scan rate condition with noise experiments, while consuming 183uW/electrode and 0.73mm2/sensor, which are the power per electrode and the die-area per sensor, respectively.", "title": "" }, { "docid": "neg:1840245_2", "text": "This paper proposes a model of selective attention for visual search tasks, based on a framework for sequential decision-making. The model is implemented using a fixed pan-tilt-zoom camera in a visually cluttered lab environment, which samples the environment at discrete time steps. The agent has to decide where to fixate next based purely on visual information, in order to reach the region where a target object is most likely to be found. The model consists of two interacting modules. A reinforcement learning module learns a policy on a set of regions in the room for reaching the target object, using as objective function the expected value of the sum of discounted rewards. By selecting an appropriate gaze direction at each step, this module provides top-down control in the selection of the next fixation point. The second module performs “within fixation” processing, based exclusively on visual information. Its purpose is twofold: to provide the agent with a set of locations of interest in the current image, and to perform the detection and identification of the target object. Detailed experimental results show that the number of saccades to a target object significantly decreases with the number of training epochs. 
The results also show the learned policy to find the target object is invariant to small physical displacements as well as object inversion.", "title": "" }, { "docid": "neg:1840245_3", "text": "Facial expression recognition is a useful feature in modern human computer interaction (HCI). In order to build efficient and reliable recognition systems, face detection, feature extraction and classification have to be robustly realised. Addressing the latter two issues, this work proposes a new method based on geometric and transient optical flow features and illustrates their comparison and integration for facial expression recognition. In the authors’ method, photogrammetric techniques are used to extract three-dimensional (3-D) features from every image frame, which is regarded as a geometric feature vector. Additionally, optical flow-based motion detection is carried out between consecutive images, which leads to the transient features. Artificial neural network and support vector machine classification results demonstrate the high performance of the proposed method. In particular, through the use of 3-D normalisation and colour information, the proposed method achieves an advanced feature representation for the accurate and robust classification of facial expressions.", "title": "" }, { "docid": "neg:1840245_4", "text": "The so-called “phishing” attacks are one of the important threats to individuals and corporations in today’s Internet. Combatting phishing is thus a top-priority, and has been the focus of much work, both on the academic and on the industry sides. In this paper, we look at this problem from a new angle. We have monitored a total of 19,066 phishing attacks over a period of ten months and found that over 90% of these attacks were actually replicas or variations of other attacks in the database. This provides several opportunities and insights for the fight against phishing: first, quickly and efficiently detecting replicas is a very effective prevention tool. We detail one such tool in this paper. Second, the widely held belief that phishing attacks are dealt with promptly is but an illusion. We have recorded numerous attacks that stay active throughout our observation period. This shows that the current prevention techniques are ineffective and need to be overhauled. We provide some suggestions in this direction. Third, our observations give a new perspective into the modus operandi of attackers. In particular, some of our observations suggest that a small group of attackers could be behind a large part of the current attacks. Taking down that group could potentially have a large impact on the phishing attacks observed today.", "title": "" }, { "docid": "neg:1840245_5", "text": "This paper considers a number of selection schemes commonly used in modern genetic algorithms. Specifically, proportionate reproduction, ranking selection, tournament selection, and Genitor (or \"steady state\") selection are compared on the basis of solutions to deterministic difference or differential equations, which are verified through computer simulations. The analysis provides convenient approximate or exact solutions as well as useful convergence time and growth ratio estimates. 
The paper recommends practical application of the analyses and suggests a number of paths for more detailed analytical investigation of selection techniques.", "title": "" }, { "docid": "neg:1840245_6", "text": "Facial landmark detection plays a very important role in many facial analysis applications such as identity recognition, facial expression analysis, facial animation, 3D face reconstruction as well as facial beautification. With the recent advance of deep learning, the performance of facial landmark detection, including on unconstrained inthe-wild dataset, has seen considerable improvement. This paper presents a survey of deep facial landmark detection for 2D images and video. A comparative analysis of different face alignment approaches is provided as well as some future research directions.", "title": "" }, { "docid": "neg:1840245_7", "text": "Design and results of a 77 GHz FM/CW radar sensor based on a simple waveguide circuitry and a novel type of printed, low-profile, and low-loss antenna are presented. A Gunn VCO and a finline mixer act as transmitter and receiver, connected by two E-plane couplers. The folded reflector type antenna consists of a printed slot array and another planar substrate which, at the same time, provide twisting of the polarization and focussing of the incident wave. In this way, a folded low-profile, low-loss antenna can be realized. The performance of the radar is described, together with first results on a scanning of the antenna beam.", "title": "" }, { "docid": "neg:1840245_8", "text": "Memristors have extended their influence beyond memory to logic and in-memory computing. Memristive logic design, the methodology of designing logic circuits using memristors, is an emerging concept whose growth is fueled by the quest for energy efficient computing systems. As a result, many memristive logic families have evolved with different attributes, and a mature comparison among them is needed to judge their merit. This paper presents a framework for comparing logic families by classifying them on the basis of fundamental properties such as statefulness, proximity (from the memory array), and flexibility of computation. We propose metrics to compare memristive logic families using analytic expressions for performance (latency), energy efficiency, and area. Then, we provide guidelines for a holistic comparison of logic families and set the stage for the evolution of new logic families.", "title": "" }, { "docid": "neg:1840245_9", "text": "Knowledge bases (KB), both automatically and manually constructed, are often incomplete — many valid facts can be inferred from the KB by synthesizing existing information. A popular approach to KB completion is to infer new relations by combinatory reasoning over the information found along other paths connecting a pair of entities. Given the enormous size of KBs and the exponential number of paths, previous path-based models have considered only the problem of predicting a missing relation given two entities, or evaluating the truth of a proposed triple. Additionally, these methods have traditionally used random paths between fixed entity pairs or more recently learned to pick paths between them. We propose a new algorithm, MINERVA1, which addresses the much more difficult and practical task of answering questions where the relation is known, but only one entity. 
Since random walks are impractical in a setting with combinatorially many destinations from a start node, we present a neural reinforcement learning approach which learns how to navigate the graph conditioned on the input query to find predictive paths. Empirically, this approach obtains state-of-the-art results on several datasets, significantly outperforming prior methods.", "title": "" }, { "docid": "neg:1840245_10", "text": "Background: Multiple-Valued Logic (MVL) is the non-binary-valued system, in which more than two levels of information content are available, i.e., L>2. In modern technologies, the dual level binary logic circuits have normally been used. However, these suffer from several significant issues such as the interconnection considerations including the parasitics, area and power dissipation. The MVL circuits have been proved to be consisting of reduced circuitry and increased efficiency in terms of higher utilization of the circuit resources through multiple levels of voltage. Innumerable algorithms have been developed for designing such MVL circuits. Extended form is one of the algebraic techniques used in designing these MVL circuits. Voltage mode design has also been employed for constructing various types of MVL circuits. Novelty: This paper proposes a novel MVLTRANS inverter, designed using conventional CMOS and pass transistor logic based MVLPTL inverter. Binary to MVL Converter/Encoder and MVL to binary Decoder/Converter are also presented in the paper. In addition to the proposed decoder circuit, a 4-bit novel MVL Binary decoder circuit is also proposed. Tools Used: All these circuits are designed, implemented and verified using Cadence® Virtuoso tools using 180 nm technology library.", "title": "" }, { "docid": "neg:1840245_11", "text": "This paper describes a new prototype system for detecting the demeanor of patients in emergency situations using the Intel RealSense camera system [1]. It describes how machine learning, a support vector machine (SVM) and the RealSense facial detection system can be used to track patient demeanour for pain monitoring. In a lab setting, the application has been trained to detect four different intensities of pain and provide demeanour information about the patient's eyes, mouth, and agitation state. Its utility as a basis for evaluating the condition of patients in situations using video, machine learning and 5G technology is discussed.", "title": "" }, { "docid": "neg:1840245_12", "text": "Topic models for text corpora comprise a popular family of methods that have inspired many extensions to encode properties such as sparsity, interactions with covariates, and the gradual evolution of topics. In this paper, we combine certain motivating ideas behind variations on topic models with modern techniques for variational inference to produce a flexible framework for topic modeling that allows for rapid exploration of different models. We first discuss how our framework relates to existing models, and then demonstrate that it achieves strong performance, with the introduction of sparsity controlling the trade off between perplexity and topic coherence. We have released our code and preprocessing scripts to support easy future comparisons and exploration.", "title": "" }, { "docid": "neg:1840245_13", "text": "Anthropometry is a simple reliable method for quantifying body size and proportions by measuring body length, width, circumference (C), and skinfold thickness (SF). 
More than 19 sites for SF, 17 for C, 11 for width, and 9 for length have been included in equations to predict body fat percent with a standard error of estimate (SEE) range of +/- 3% to +/- 11% of the mean of the criterion measurement. Recent studies indicate that not only total body fat, but also regional fat and skeletal muscle, can be predicted from anthropometrics. Our Rosetta database supports the thesis that sex, age, ethnicity, and site influence anthropometric predictions; the prediction reliabilities are consistently higher for Whites than for other ethnic groups, and also by axial than by peripheral sites (biceps and calf). The reliability of anthropometrics depends on standardizing the caliper and site of measurement, and upon the measuring skill of the anthropometrist. A reproducibility of +/- 2% for C and +/- 10% for SF measurements usually is required to certify the anthropometrist.", "title": "" }, { "docid": "neg:1840245_14", "text": "— Recently, many researchers have focused on building dual handed static gesture recognition systems. Single handed static gestures, however, pose more recognition complexity due to the high degree of shape ambiguities. This paper presents a gesture recognition setup capable of recognizing and emphasizing the most ambiguous static single handed gestures. Performance of the proposed scheme is tested on the alphabets of American Sign Language (ASL). Segmentation of hand contours from image background is carried out using two different strategies; skin color as detection cue with RGB and YCbCr color spaces, and thresholding of gray level intensities. A novel, rotation and size invariant, contour tracing descriptor is used to describe gesture contours generated by each segmentation technique. Performances of k-Nearest Neighbor (k-NN) and multiclass Support Vector Machine (SVM) classification techniques are evaluated to classify a particular gesture. Gray level segmented contour traces classified by multiclass SVM achieve accuracy up to 80.8% on the most ambiguous gestures of ASL alphabets with overall accuracy of 90.1%.", "title": "" }, { "docid": "neg:1840245_15", "text": "Microcavites contribute to enhancing the optical efficiency and color saturation of an organic light emitting diode (OLED) display. A major tradeoff of the strong cavity effect is its apparent color shift, especially for RGB-based OLED displays, due to their mismatched angular intensity distributions. To mitigate the color shift, in this work we first analyze the emission spectrum shifts and angular distributions for the OLEDs with strong and weak cavities, both theoretically and experimentally. Excellent agreement between simulation and measurement is obtained. Next, we propose a systematic approach for RGB-OLED displays based on multi-objective optimization algorithms. Three objectives, namely external quantum efficiency (EQE), color gamut coverage, and angular color shift of primary and mixed colors, can be optimized simultaneously. Our optimization algorithm is proven to be effective for suppressing color shift while keeping a relatively high optical efficiency and wide color gamut. © 2017 Optical Society of America under the terms of the OSA Open Access Publishing Agreement OCIS codes: (140.3948) Microcavity devices; (160.4890) Organic materials; (230.3670) Light-emitting diodes;", "title": "" }, { "docid": "neg:1840245_16", "text": "The field of Web development is entering the HTML5 and CSS3 era and JavaScript is becoming increasingly influential. 
A large number of JavaScript frameworks have been recently promoted. Practitioners applying the latest technologies need to choose a suitable JavaScript framework (JSF) in order to abstract the frustrating and complicated coding steps and to provide a cross-browser compatibility. Apart from benchmark suites and recommendation from experts, there is little research helping practitioners to select the most suitable JSF to a given situation. The few proposals employ software metrics on the JSF, but practitioners are driven by different concerns when choosing a JSF. As an answer to the critical needs, this paper is a call for action. It proposes a research design towards a comparative analysis framework of JSF, which merges researcher needs and practitioner needs.", "title": "" }, { "docid": "neg:1840245_17", "text": "In this paper we present syntactic characterization of temporal formulas that express various properties of interest in the verification of concurrent programs. Such a characterization helps us in choosing the right techniques for proving correctness with respect to these properties. The properties that we consider include safety properties, liveness properties and fairness properties. We also present algorithms for checking if a given temporal formula expresses any of these properties.", "title": "" }, { "docid": "neg:1840245_18", "text": "In this paper, we present a new approach for text localization in natural images, by discriminating text and non-text regions at three levels: pixel, component and text line levels. Firstly, a powerful low-level filter called the Stroke Feature Transform (SFT) is proposed, which extends the widely-used Stroke Width Transform (SWT) by incorporating color cues of text pixels, leading to significantly enhanced performance on inter-component separation and intra-component connection. Secondly, based on the output of SFT, we apply two classifiers, a text component classifier and a text-line classifier, sequentially to extract text regions, eliminating the heuristic procedures that are commonly used in previous approaches. The two classifiers are built upon two novel Text Covariance Descriptors (TCDs) that encode both the heuristic properties and the statistical characteristics of text stokes. Finally, text regions are located by simply thresholding the text-line confident map. Our method was evaluated on two benchmark datasets: ICDAR 2005 and ICDAR 2011, and the corresponding F-measure values are 0.72 and 0.73, respectively, surpassing previous methods in accuracy by a large margin.", "title": "" } ]
1840246
Lossy Image Compression with Compressive Autoencoders
[ { "docid": "pos:1840246_0", "text": "Probabilistic generative models can be used for compression, denoising, inpainting, texture synthesis, semi-supervised learning, unsupervised feature learning, and other tasks. Given this wide range of applications, it is not surprising that a lot of heterogeneity exists in the way these models are formulated, trained, and evaluated. As a consequence, direct comparison between models is often difficult. This article reviews mostly known but often underappreciated properties relating to the evaluation and interpretation of generative models with a focus on image models. In particular, we show that three of the currently most commonly used criteria—average log-likelihood, Parzen window estimates, and visual fidelity of samples—are largely independent of each other when the data is high-dimensional. Good performance with respect to one criterion therefore need not imply good performance with respect to the other criteria. Our results show that extrapolation from one criterion to another is not warranted and generative models need to be evaluated directly with respect to the application(s) they were intended for. In addition, we provide examples demonstrating that Parzen window estimates should generally be avoided.", "title": "" }, { "docid": "pos:1840246_1", "text": "Modeling the distribution of natural images is challenging, partly because of strong statistical dependencies which can extend over hundreds of pixels. Recurrent neural networks have been successful in capturing long-range dependencies in a number of problems but only recently have found their way into generative image models. We here introduce a recurrent image model based on multidimensional long short-term memory units which are particularly suited for image modeling due to their spatial structure. Our model scales to images of arbitrary size and its likelihood is computationally tractable. We find that it outperforms the state of the art in quantitative comparisons on several image datasets and produces promising results when used for texture synthesis and inpainting.", "title": "" } ]
[ { "docid": "neg:1840246_0", "text": "Marine debris is listed among the major perceived threats to biodiversity, and is cause for particular concern due to its abundance, durability and persistence in the marine environment. An extensive literature search reviewed the current state of knowledge on the effects of marine debris on marine organisms. 340 original publications reported encounters between organisms and marine debris and 693 species. Plastic debris accounted for 92% of encounters between debris and individuals. Numerous direct and indirect consequences were recorded, with the potential for sublethal effects of ingestion an area of considerable uncertainty and concern. Comparison to the IUCN Red List highlighted that at least 17% of species affected by entanglement and ingestion were listed as threatened or near threatened. Hence where marine debris combines with other anthropogenic stressors it may affect populations, trophic interactions and assemblages.", "title": "" }, { "docid": "neg:1840246_1", "text": "The paper presents an analysis of the main mechanisms of decryption of SSL/TLS traffic. Methods and technologies for detecting malicious activity in encrypted traffic that are used by leading companies are also considered. Also, the approach for intercepting and decrypting traffic transmitted over SSL/TLS is developed, tested and proposed. The developed approach has been automated and can be used for remote listening of the network, which will allow to decrypt transmitted data in a mode close to real time.", "title": "" }, { "docid": "neg:1840246_2", "text": "A 34–40 GHz VCO fabricated in 65 nm digital CMOS technology is demonstrated in this paper. The VCO uses a combination of switched capacitors and varactors for tuning and has a maximum Kvco of 240 MHz/V. It exhibits a phase noise of better than −98 dBc/Hz @ 1-MHz offset across the band while consuming 12 mA from a 1.2-V supply, an FOMT of −182.1 dBc/Hz. A cascode buffer following the VCO consumes 11 mA to deliver 0 dBm LO signal to a 50Ω load.", "title": "" }, { "docid": "neg:1840246_3", "text": "Reinforcement learning has shown promise in learning policies that can solve complex problems. However, manually specifying a good reward function can be difficult, especially for intricate tasks. Inverse reinforcement learning offers a useful paradigm to learn the underlying reward function directly from expert demonstrations. Yet in reality, the corpus of demonstrations may contain trajectories arising from a diverse set of underlying reward functions rather than a single one. Thus, in inverse reinforcement learning, it is useful to consider such a decomposition. The options framework in reinforcement learning is specifically designed to decompose policies in a similar light. We therefore extend the options framework and propose a method to simultaneously recover reward options in addition to policy options. We leverage adversarial methods to learn joint reward-policy options using only observed expert states. We show that this approach works well in both simple and complex continuous control tasks and shows significant performance increases in one-shot transfer learning.", "title": "" }, { "docid": "neg:1840246_4", "text": "The cognitive modulation of pain is influenced by a number of factors ranging from attention, beliefs, conditioning, expectations, mood, and the regulation of emotional responses to noxious sensory events. 
Recently, mindfulness meditation has been found to attenuate pain through some of these mechanisms including enhanced cognitive and emotional control, as well as altering the contextual evaluation of sensory events. This review discusses the brain mechanisms involved in mindfulness meditation-related pain relief across different meditative techniques, expertise and training levels, experimental procedures, and neuroimaging methodologies. Converging lines of neuroimaging evidence reveal that mindfulness meditation-related pain relief is associated with unique appraisal cognitive processes depending on expertise level and meditation tradition. Moreover, it is postulated that mindfulness meditation-related pain relief may share a common final pathway with other cognitive techniques in the modulation of pain.", "title": "" }, { "docid": "neg:1840246_5", "text": "The Iterated Prisoner’s Dilemma has guided research on social dilemmas for decades. However, it distinguishes between only two atomic actions: cooperate and defect. In real world prisoner’s dilemmas, these choices are temporally extended and different strategies may correspond to sequences of actions, reflecting grades of cooperation. We introduce a Sequential Prisoner’s Dilemma (SPD) game to better capture the aforementioned characteristics. In this work, we propose a deep multiagent reinforcement learning approach that investigates the evolution of mutual cooperation in SPD games. Our approach consists of two phases. The first phase is offline: it synthesizes policies with different cooperation degrees and then trains a cooperation degree detection network. The second phase is online: an agent adaptively selects its policy based on the detected degree of opponent cooperation. The effectiveness of our approach is demonstrated in two representative SPD 2D games: the Apple-Pear game and the Fruit Gathering game. Experimental results show that our strategy can avoid being exploited by exploitative opponents and achieve cooperation with cooperative opponents.", "title": "" }, { "docid": "neg:1840246_6", "text": "The adoption of the General Data Protection Regulation (GDPR) is a major concern for data controllers of the public and private sector, as they are obliged to conform to the new principles and requirements managing personal data. In this paper, we propose that the data controllers adopt the concept of the Privacy Level Agreement. We present a metamodel for PLAs to support privacy management, based on analysis of privacy threats, vulnerabilities and trust relationships in their Information Systems, whilst complying with laws and regulations, and we illustrate the relevance of the metamodel with the GDPR.", "title": "" }, { "docid": "neg:1840246_7", "text": "Dictionary methods for cross-language information retrieval give performance below that for mono-lingual retrieval. Failure to translate multi-term phrases has been shown to be one of the factors responsible for the errors associated with dictionary methods. First, we study the importance of phrasal translation for this approach. 
Second, we explore the role of phrases in query expansion via local context analysis and local feedback and show how they can be used to significantly reduce the error associated with automatic dictionary translation.", "title": "" }, { "docid": "neg:1840246_8", "text": "Waste management is one of the primary problems that the world faces irrespective of the case of developed or developing country. The key issue in the waste management is that the garbage bin at public places gets overflowed well in advance before the commencement of the next cleaning process. It in turn leads to various hazards such as bad odor & ugliness to that place which may be the root cause for spread of various diseases. To avoid all such hazardous scenario and maintain public cleanliness and health this work is mounted on a smart garbage system. The main theme of the work is to develop a smart intelligent garbage alert system for a proper garbage management. This paper proposes a smart alert system for garbage clearance by giving an alert signal to the municipal web server for instant cleaning of dustbin with proper verification based on level of garbage filling. This process is aided by the ultrasonic sensor which is interfaced with Arduino UNO to check the level of garbage filled in the dustbin and sends the alert to the municipal web server once the garbage is filled. After cleaning the dustbin, the driver confirms the task of emptying the garbage with the aid of RFID Tag. RFID is a computing technology that is used for verification process and in addition, it also enhances the smart garbage alert system by providing automatic identification of garbage filled in the dustbin and sends the status of clean-up to the server affirming that the work is done. The whole process is upheld by an embedded module integrated with RF ID and IOT Facilitation. The real time status of how waste collection is being done could be monitored and followed up by the municipality authority with the aid of this system. In addition to this the necessary remedial / alternate measures could be adapted. An Android application is developed and linked to a web server to intimate the alerts from the microcontroller to the urban office and to perform the remote monitoring of the cleaning process, done by the workers, thereby reducing the manual process of monitoring and verification. The notifications are sent to the Android application using Wi-Fi module.", "title": "" }, { "docid": "neg:1840246_9", "text": "Vein images generally appear darker with low contrast, which require contrast enhancement during preprocessing to design satisfactory hand vein recognition system. However, the modification introduced by contrast enhancement (CE) is reported to bring side effects through pixel intensity distribution adjustments. Furthermore, the inevitable results of fake vein generation or information loss occur and make nearly all vein recognition systems unconvinced. In this paper, a “CE-free” quality-specific vein recognition system is proposed, and three improvements are involved. First, a high-quality lab-vein capturing device is designed to solve the problem of low contrast from the view of hardware improvement. Then, a high quality lab-made database is established. 
Second, CFISH score, a fast and effective measurement for vein image quality evaluation, is proposed to obtain quality index of lab-made vein images. Then, unsupervised K-means with optimized initialization and convergence condition is designed with the quality index to obtain the grouping results of the database, namely, low quality (LQ) and high quality (HQ). Finally, discriminative local binary pattern (DLBP) is adopted as the basis for feature extraction. For the HQ image, DLBP is adopted directly for feature extraction, and for the LQ one, CE_DLBP could be utilized for discriminative feature extraction for LQ images. Based on the lab-made database, rigorous experiments are conducted to demonstrate the effectiveness and feasibility of the proposed system. What is more, an additional experiment with PolyU database illustrates its generalization ability and robustness.", "title": "" }, { "docid": "neg:1840246_10", "text": "A method for the on-line calibration of a circuit board trace resistance at the output of a buck converter is described. The input current is measured with a precision resistor and processed to obtain a dc reference for the output current. The voltage drop across a trace resistance at the output is amplified with a gain that is adaptively adjusted to match the dc reference. This method is applied to obtain an accurate and high-bandwidth measurement of the load current in the modern microprocessor voltage regulator application (VRM), thus enabling an accurate dc load-line regulation as well as a fast transient response. Experimental results show an accuracy well within the tolerance band of this application, and exceeding all other popular methods.", "title": "" }, { "docid": "neg:1840246_11", "text": "Many icon taxonomy systems have been developed by researchers that organise icons based on their graphic elements. Most of these taxonomies classify icons according to how abstract or concrete they are. Categories however overlap and different researchers use different terminology, sometimes to describe what in essence is the same thing. This paper describes nine taxonomies and compares the terminologies they use. Aware of the lack of icon taxonomy systems in the field of icon design, the authors provide an overview of icon taxonomy and develop an icon taxonomy system that could bring practical benefits to the performance of computer related tasks.", "title": "" }, { "docid": "neg:1840246_12", "text": "In recent years methods of data analysis for point processes have received some attention, for example, by Cox & Lewis (1966) and Lewis (1964). In particular Bartlett (1963a, b) has introduced methods of analysis based on the point spectrum. Theoretical models are relatively sparse. In this paper the theoretical properties of a class of processes with particular reference to the point spectrum or corresponding covariance density functions are discussed. A particular result is a self-exciting process with the same second-order properties as a certain doubly stochastic process. These are not distinguishable by methods of data analysis based on these properties.", "title": "" }, { "docid": "neg:1840246_13", "text": "Interconnected embedded devices are increasingly used in various scenarios, including industrial control, building automation, or emergency communication. As these systems commonly process sensitive information or perform safety critical tasks, they become appealing targets for cyber attacks. 
A promising technique to remotely verify the safe and secure operation of networked embedded devices is remote attestation. However, existing attestation protocols only protect against software attacks or show very limited scalability. In this paper, we present the first scalable attestation protocol for interconnected embedded devices that is resilient to physical attacks. Based on the assumption that physical attacks require an adversary to capture and disable devices for some time, our protocol identifies devices with compromised hardware and software. Compared to existing solutions, our protocol reduces communication complexity and runtimes by orders of magnitude, precisely identifies compromised devices, supports highly dynamic and partitioned network topologies, and is robust against failures. We show the security of our protocol and evaluate it in static as well as dynamic network topologies. Our results demonstrate that our protocol is highly efficient in well-connected networks and robust to network disruptions.", "title": "" }, { "docid": "neg:1840246_14", "text": "BACKGROUND\nThe aim of this paper was to summarise the anatomical knowledge on the subject of the maxillary nerve and its branches, and to show the clinical usefulness of such information in producing anaesthesia in the region of the maxilla.\n\n\nMATERIALS AND METHODS\nA literature search was performed in Pubmed, Scopus, Web of Science and Google Scholar databases, including studies published up to June 2014, with no lower data limit.\n\n\nRESULTS\nThe maxillary nerve (V2) is the middle sized branch of the trigeminal nerve - the largest of the cranial nerves. The V2 is a purely sensory nerve supplying the maxillary teeth and gingiva, the adjoining part of the cheek, hard and soft palate mucosa, pharynx, nose, dura mater, skin of temple, face, lower eyelid and conjunctiva, upper lip, labial glands, oral mucosa, mucosa of the maxillary sinus, as well as the mobile part of the nasal septum. The branches of the maxillary nerve can be divided into four groups depending on the place of origin i.e. in the cranium, in the sphenopalatine fossa, in the infraorbital canal, and on the face.\n\n\nCONCLUSIONS\nThis review summarises the data on the anatomy and variations of the maxillary nerve and its branches. A thorough understanding of the anatomy will allow for careful planning and execution of anaesthesiological and surgical procedures involving the maxillary nerve and its branches.", "title": "" }, { "docid": "neg:1840246_15", "text": "Government agencies are investing a considerable amount of resources into improving security systems as result of recent terrorist events that dangerously exposed flaws and weaknesses in today’s safety mechanisms. Badge or password-based authentication procedures are too easy to hack. Biometrics represents a valid alternative but they suffer of drawbacks as well. Iris scanning, for example, is very reliable but too intrusive; fingerprints are socially accepted, but not applicable to non-consentient people. On the other hand, face recognition represents a good compromise between what’s socially acceptable and what’s reliable, even when operating under controlled conditions. In last decade, many algorithms based on linear/nonlinear methods, neural networks, wavelets, etc. have been proposed. Nevertheless, Face Recognition Vendor Test 2002 shown that most of these approaches encountered problems in outdoor conditions. This lowered their reliability compared to state of the art biometrics. 
This paper provides an ‘‘ex cursus’’ of recent face recognition research trends in 2D imagery and 3D model based algorithms. To simplify comparisons across different approaches, tables containing different collection of parameters (such as input size, recognition rate, number of addressed problems) are provided. This paper concludes by proposing possible future directions. 2007 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "neg:1840246_16", "text": "A member of the Liliaceae family, garlic ( Allium sativum) is highly regarded throughout the world for both its medicinal and culinary value. Early men of medicine such as Hippocrates, Pliny and Aristotle encouraged a number of therapeutic uses for this botanical. Today, it is commonly used in many cultures as a seasoning or spice. Garlic also stands as the second most utilized supplement. With its sulfur containing compounds, high trace mineral content, and enzymes, garlic has shown anti-viral, anti-bacterial, anti-fungal and antioxidant abilities. Diseases that may be helped or prevented by garlic’s medicinal actions include Alzheimer’s Disease, cancer, cardiovascular disease (including atherosclerosis, strokes, hypertension, thrombosis and hyperlipidemias) children’s conditions, dermatologic applications, stress, and infections. Some research points to possible benefits in diabetes, drug toxicity, and osteoporosis.", "title": "" }, { "docid": "neg:1840246_17", "text": "In this paper, we propose a system, that automatically transfers human body motion captured from an ordinary video camera to an unknown 3D character mesh. In our system, no manual intervention is required for specifying the internal skeletal structure or defining how the mesh surfaces deform. A sparse graph is generated from the input polygons based on their connectivity and geometric distributions. To estimate articulated body parts in the video, a progressive particle filter is used for identifying correspondences. We anticipate our proposed system can bring animation to a new audience with a more intuitive user interface.", "title": "" }, { "docid": "neg:1840246_18", "text": "The knowledge-based theory of the firm suggests that knowledge is the organizational asset that enables sustainable competitive advantage in hypercompetitive environments. The emphasis on knowledge in today’s organizations is based on the assumption that barriers to the transfer and replication of knowledge endow it with strategic importance. Many organizations are developing information systems designed specifically to facilitate the sharing and integration of knowledge. Such systems are referred to as Knowledge Management System (KMS). Because KMS are just beginning to appear in organizations, little research and field data exists to guide the development and implementation of such systems or to guide expectations of the potential benefits of such systems. This study provides an analysis of current practices and outcomes of KMS and the nature of KMS as they are evolving in fifty organizations. The findings suggest that interest in KMS across a variety of industries is very high, the technological foundations are varied, and the major", "title": "" }, { "docid": "neg:1840246_19", "text": "Using data from a national probability sample of heterosexual U.S. adults (N02,281), the present study describes the distribution and correlates of men’s and women’s attitudes toward transgender people. 
Feeling thermometer ratings of transgender people were strongly correlated with attitudes toward gay men, lesbians, and bisexuals, but were significantly less favorable. Attitudes toward transgender people were more negative among heterosexual men than women. Negative attitudes were associated with endorsement of a binary conception of gender; higher levels of psychological authoritarianism, political conservatism, and anti-egalitarianism, and (for women) religiosity; and lack of personal contact with sexual minorities. In regression analysis, sexual prejudice accounted for much of the variance in transgender attitudes, but respondent gender, educational level, authoritarianism, anti-egalitarianism, and (for women) religiosity remained significant predictors with sexual prejudice statistically controlled. Implications and directions for future research on attitudes toward transgender people are discussed.", "title": "" } ]
1840247
RoadGraph - Graph based environmental modelling and function independent situation analysis for driver assistance systems
[ { "docid": "pos:1840247_0", "text": "The 2007 DARPA Urban Challenge afforded the golden opportunity for the Technische Universität Braunschweig to demonstrate its abilities to develop an autonomously driving vehicle to compete with the world’s best competitors. After several stages of qualification, our team CarOLO qualified early for the DARPA Urban Challenge Final Event and was among only eleven teams from initially 89 competitors to compete in the final. We had the ability to work together in a large group of experts, each contributing his expertise in his discipline, and significant organisational, financial and technical support by local sponsors who helped us to become the best non-US team. In this report, we describe the 2007 DARPA Urban Challenge, our contribution ”Caroline”, the technology and algorithms along with her performance in the DARPA Urban Challenge Final Event on November 3, 2007. M. Buehler et al. (Eds.): The DARPA Urban Challenge, STAR 56, pp. 441–508. springerlink.com c © Springer-Verlag Berlin Heidelberg 2009 442 F.W. Rauskolb et al. 1 Motivation and Introduction Focused research is often centered around interesting challenges and awards. The airplane industry started off with awards for the first flight over the British Channel as well as the Atlantic Ocean. The Human Genome Project, the RoboCups and the series of DARPA Grand Challenges for autonomous vehicles serve this very same purpose to foster research and development in a particular direction. The 2007 DARPA Urban Challenge is taking place to boost development of unmanned vehicles for urban areas. Although there is an obvious direct benefit for DARPA and the U.S. government, there will also be a large number of spin-offs in technologies, tools and engineering techniques, both for autonomous vehicles, but also for intelligent driver assistance. An intelligent driver assistance function needs to be able to understand the surroundings of the car, evaluate potential risks and help the driver to behave correctly, safely and, in case it is desired, also efficiently. These topics do not only affect ordinary cars, but also buses, trucks, convoys, taxis, special-purpose vehicles in factories, airports and more. It will take a number of years before we will have a mass market for cars that actively and safely protect the passenger and the surroundings, like pedestrians, from accidents in any situation. Intelligent functions in vehicles are obviously complex systems. Large issues in this project where primarily the methods, techniques and tools for the development of such a highly critical, reliable and complex system. Adapting and combining methods from different engineering disciplines were an important prerequisite for our success. For a stringent deadline-oriented development of such a system it is necessary to rely on a clear structure of the project, a dedicated development process and an efficient engineering that fits the project’s needs. Thus, we did not only concentrate on the single software modules of our autonomously driving vehicle named Caroline, but also on the process itself. We furthermore needed an appropriate tool suite that allowed us to run the development and in particular the testing process as efficient as possible. This includes a simulator allowing us to simulate traffic situations and therefore achieve a sufficient coverage of test situations that would have been hardly to conduct in reality. 
Only a good collaboration between the participating disciplines allowed us to develop Caroline in time to achieve such a good result in the 2007 DARPA Urban Challenge. In the long term, our goal was not only to participate in a competition but also to establish a sound basis for further research on how to enhance vehicle safety by implementing new technologies to provide vehicle users with reliable and robust driver assistance systems, e.g. by giving special attention on technology for sensor data fusion and robust and reliable system architectures including new methods for simulation and testing. Therefore, the 2007 DARPA Urban Challenge provided a golden opportunity to combine several expertise from several fields of science and engineering. For this purpose, the interdisciplinary team CarOLO had been founded, which drew its members Caroline: An Autonomously Driving Vehicle for Urban Environments 443 from five different institutes. In addition, the team received support from a consortium of national and international companies. In this paper, we firstly introduce the 2007 DARPA Urban Challenge and derive the basic requirements for the car from its rules in section 2. Section 3 describes the overall architecture of the system, which is detailed in section 4 describing sensor fusion, vision, artificial intelligence, vehicle control and along with safety concepts. Section 5 describes the overall development process, discusses quality assurance and the simulator used to achieve sufficient testing coverage in detail. Section 6 finally describes the evaluation of Caroline, namely the performance during the National Qualification Event and the DARPA Urban Challenge Final Event in Victorville, California, the results we found and the conclusions to draw from our performance. 2 2007 DARPA Urban Challenge The 2007 DARPA Urban Challenge is the continuation of the well-known Grand Challenge events of 2004 and 2005, which were entitled ”Barstow to Primm” and ”Desert Classic”. To continue the tradition of having names reflect the actual task, DARPA named the 2007 event ”Urban Challenge”, announcing with it the nature of the mission to be accomplished. The 2004 course, as shown in Fig. 1, led from the Barstow, California (A) to Primm, Nevada (B) and had a total length of about 142 miles. Prior to the main event, DARPA held a qualification, inspection and demonstration for each robot. Nevertheless, none of the original fifteen vehicles managed to come even close to the goal of successfully completing the course. With 7.4 miles as the farthest distance travelled, the challenge ended very disappointingly and no one won the $1 million cash prize. Thereafter, the DARPA program managers heightened the barriers for entering the 2005 challenge significantly. They also modified the entire quality inspection process to one involving a step-by-step application process, including a video of the car in action and the holding of so-called Site Visits, which involved the visit of DARPA officials to team-chosen test sites. The rules for these Site Visits were very strict, e.g. determining exactly how the courses had to be equipped and what obstacles had to be available. From initially 195 teams, 118 were selected for site visits and 43 had finally made it into the National Qualification Event at the California Speedway in Ontario, California. 
The NQE consisted of several tasks to be completed and obstacles to overcome autonomously by the participating vehicles, including tank traps, a tunnel, speed bumps, stationary cars to pass and many more. On October 5, 2005, DARPA announced the 23 teams that would participate in the final event. The course started in Primm, Nevada, where the 2004 challenge should have ended. With a total distance of 131.6 miles and several natural obstacles, the course was by no means easier than the one from the year before. At the end, five teams completed it and the rest did significantly 444 F.W. Rauskolb et al. Fig. 1. 2004 DARPA Grand Challenge Area between Barstow, CA (A) and Primm, NV (B). better as the teams the year before. The Stanford Racing Team was awarded the $2 million first prize. In 2007, DARPA wanted to increase the difficulty of the requirements, in order to meet the goal set by Congress and the Department of Defense that by 2015 a third of the Army’s ground combat vehicles would operate unmanned. Having already proved the feasibility of crossing a desert and overcome natural obstacles without human intervention, now a tougher task had to be mastered. As the United States Armed Forces are currently facing serious challenges in urban environments, the choice of such seemed logical. DARPA used the good experience and knowledge gained from the first and second Grand Challenge event to define the tasks for the autonomous vehicles. The 2007 DARPA Urban Challenge took place in Vicorville, CA as depicted in Fig. 2. The Technische Universität Braunschweig started in June 2006 as a newcomer in the 2007 DARPA Urban Challenge. Significantly supported by industrial partners, five institutes from the faculties of computer science and mechanical and electrical engineering equipped a 2006 Volkswagen Passat station wagon named ”Caroline” to participate in the DARPA Urban Challenge as a ”Track B” competitor. Track B competitors did not receive any financial support from the DARPA compared to ”Track A” competitors. Track A teams had to submit technical proposals to get technology development funding awards up to $1,000,000 in fall 2006. Track B teams had to provide a 5 minutes video demonstrating the vehicles capabilities in April 2007. Using these videos, DARPA selected 53 teams of the initial 89 teams that advanced to the next stage in the Caroline: An Autonomously Driving Vehicle for Urban Environments 445 Fig. 2. 2007 DARPA Grand Challenge Area in Victorville, CA. qualification process, the ”Site Visit” as already conducted in the 2005 Grand Challenge. Team CarOLO got an invitation for a Site Visit that had to take place in the United States. Therefore, team CarOLO accepted gratefully an offer from the Southwest Research Insitute in San Antonio, Texas providing a location for the Site Visit. On June 20, Caroline proved that she was ready for the National Qualification Event in fall 2007. Against great odds, she showed her abilities to the DARPA officials when a huge thunderstorm hit San Antonio during the Site Visit. The tasks to complete included the correct handling of intersection precedence, passing of vehicles, lane keeping and general safe behaviour. Afte", "title": "" } ]
[ { "docid": "neg:1840247_0", "text": "We describe an algorithm for approximate inference in graphical models based on Hölder’s inequality that provides upper and lower bounds on common summation problems such as computing the partition function or probability of evidence in a graphical model. Our algorithm unifies and extends several existing approaches, including variable elimination techniques such as minibucket elimination and variational methods such as tree reweighted belief propagation and conditional entropy decomposition. We show that our method inherits benefits from each approach to provide significantly better bounds on sum-product tasks.", "title": "" }, { "docid": "neg:1840247_1", "text": "Computing in schools has gained momentum in the last two years resulting in GCSEs in Computing and teachers looking to up skill from Digital Literacy (ICT). For many students the subject of computer science concerns software code but writing code can be challenging, due to specific requirements on syntax and spelling with new ways of thinking required. Not only do many undergraduate students lack these ways of thinking, but there is a general misrepresentation of computing in education. Were computing taught as a more serious subject like science and mathematics, public understanding of the complexities of computer systems would increase, enabling those not directly involved with IT make better informed decisions and avoid incidents such as over budget and underperforming systems. We present our exploration into teaching a variety of computing skills, most significantly \"computational thinking\", to secondary-school age children through three very different engagements. First, we discuss Print craft, in which participants learn about computer-aided design and additive manufacturing by designing and building a miniature world from scratch using the popular open-world game Mine craft and 3D printers. Second, we look at how students can get a new perspective on familiar technology with a workshop using App Inventor, a graphical Android programming environment. Finally, we look at an ongoing after school robotics club where participants face a number of challenges of their own making as they design and create a variety of robots using a number of common tools such as Scratch and Arduino.", "title": "" }, { "docid": "neg:1840247_2", "text": "With a number of emerging biometric applications there is a dire need of less expensive authentication technique which can authenticate even if the input image is of low resolution and low quality. Foot biometric has both the physiological and behavioral characteristics still it is an abandoned field. The reason behind this is, it involves removal of shoes and socks while capturing the image and also dirty feet makes the image noisy. Cracked heels is also a reason behind noisy images. Physiological and behavioral biometric characteristics makes it a great alternative to computational intensive algorithms like fingerprint, palm print, retina or iris scan [1] and face. On one hand foot biometric has minutia features which is considered totally unique. The uniqueness of minutiae feature is already tested in fingerprint analysis [2]. On the other hand it has geometric features like hand geometry which also give satisfactory results in recognition. 
We can easily apply foot biometrics at those places where people inherently remove their shoes, like at holy places such as temples and mosque people remove their shoes before entering from the perspective of faith, and also remove shoes at famous monuments such as The Taj Mahal, India from the perspective of cleanliness and preservation. Usually these are the places with a strong foot fall and high risk security due to chaotic crowd. Most of the robbery, theft, terrorist attacks, are happening at these places. One very fine example is Akshardham attack in September 2002. Hence we can secure these places using low cost security algorithms based on footprint recognition.", "title": "" }, { "docid": "neg:1840247_3", "text": "Molecular analysis of the 16S rDNA of the intestinal microbiota of whiteleg shrimp Litopenaeus vannamei was examined to investigate the effect of a Bacillus mix (Bacillus endophyticus YC3-b, Bacillus endophyticus C2-2, Bacillus tequilensisYC5-2) and the commercial probiotic (Alibio(®)) on intestinal bacterial communities and resistance to Vibrio infection. PCR and single strain conformation polymorphism (SSCP) analyses were then performed on DNA extracted directly from guts. Injection of shrimp with V. parahaemolyticus at 2.5 × 10(5) CFU g(-1) per shrimp followed 168 h after inoculation with Bacillus mix or the Alibio probiotic or the positive control. Diversity analyses showed that the bacterial community resulting from the Bacillus mix had the highest diversity and evenness and the bacterial community of the control had the lowest diversity. The bacterial community treated with probiotics mainly consisted of α- and γ-proteobacteria, fusobacteria, sphingobacteria, and flavobacteria, while the control mainly consisted of α-proteobacteria and flavobacteria. Differences were grouped using principal component analyses of PCR-SSCP of the microbiota, according to the time of inoculation. In Vibrio parahaemolyticus-infected shrimp, the Bacillus mix (~33 %) induced a significant increase in survival compared to Alibio (~21 %) and the control (~9 %). We conclude that administration of the Bacillus mix induced modulation of the intestinal microbiota of L. vannamei and increased its resistance to V. parahaemolyticus.", "title": "" }, { "docid": "neg:1840247_4", "text": "Affine moment invariant (AMI) is a kind of hand-crafted image feature, which is invariant to affine transformations. This property is precisely what the standard convolution neural network (CNN) is difficult to achieve. In this letter, we present a kind of network architecture to introduce AMI into CNN, which is called AMI-Net. We achieved this by calculating AMI on the feature maps of the hidden layers. These AMIs will be concatenated with the standard CNN's FC layer to determine the network's final output. By calculating AMI on the feature maps, we can not only extend the dimension of AMIs, but also introduce affine transformation invariant into CNN. Two network architectures and training strategies of AMI-Net are illuminated, one is two-stage, and the other is end-to-end. To prove the effectiveness of the AMI-Net, several experiments have been conducted on common image datasets, MNIST, MNIST-rot, affNIST, SVHN, and CIFAR-10. 
By comparing with the corresponding standard CNN, respectively, we verify the validity of AMI-net.", "title": "" }, { "docid": "neg:1840247_5", "text": "This short paper reports the method and the evaluation results of Casio and Shinshu University joint team for the ISBI Challenge 2017 – Skin Lesion Analysis Towards Melanoma Detection – Part 3: Lesion Classification hosted by ISIC. Our online validation score was 0.958 with melanoma classifier AUC 0.924 and seborrheic keratosis classifier AUC 0.993.", "title": "" }, { "docid": "neg:1840247_6", "text": "This paper presents a new bidirectional wireless power transfer (WPT) topology using current fed half bridge converter. Generally, WPT topology with current fed converter uses parallel LC resonant tank network in the transmitter side to compensate the reactive power. However, in medium power application this topology suffers a major drawback that the voltage stress in the inverter switches are considerably high due to high reactive power consumed by the loosely coupled coil. In the proposed topology this is mitigated by adding a suitably designed capacitor in series with the transmitter coil. Both during grid to vehicle and vehicle to grid operations the power flow is controlled through variable switching frequency to achieve extended ZVS of the inverter switches. Detail analysis and converter design procedure is presented for both grid to vehicle and vehicle to grid operations. A 1.2kW lab-prototype is developed and experimental results are presented to verify the analysis.", "title": "" }, { "docid": "neg:1840247_7", "text": "A three-way power divider with ultra wideband behavior is presented. It has a compact size with an overall dimension of 20 mm times 30 mm. The proposed divider utilizes broadside coupling via multilayer microstrip/slot transitions of elliptical shape. The simulated and measured results show that the proposed device has 4.77 plusmn 1 dB insertion loss, better than 17 dB return loss, and better than 15 dB isolation across the frequency band 3.1 to 10.6 GHz.", "title": "" }, { "docid": "neg:1840247_8", "text": "BACKGROUND\nCitation screening is time consuming and inefficient. We sought to evaluate the performance of Abstrackr, a semi-automated online tool for predictive title and abstract screening.\n\n\nMETHODS\nFour systematic reviews (aHUS, dietary fibre, ECHO, rituximab) were used to evaluate Abstrackr. Citations from electronic searches of biomedical databases were imported into Abstrackr, and titles and abstracts were screened and included or excluded according to the entry criteria. This process was continued until Abstrackr predicted and classified the remaining unscreened citations as relevant or irrelevant. These classification predictions were checked for accuracy against the original review decisions. Sensitivity analyses were performed to assess the effects of including case reports in the aHUS dataset whilst screening and the effects of using larger imbalanced datasets with the ECHO dataset. The performance of Abstrackr was calculated according to the number of relevant studies missed, the workload saving, the false negative rate, and the precision of the algorithm to correctly predict relevant studies for inclusion, i.e. further full text inspection.\n\n\nRESULTS\nOf the unscreened citations, Abstrackr's prediction algorithm correctly identified all relevant citations for the rituximab and dietary fibre reviews. 
However, one relevant citation in both the aHUS and ECHO reviews was incorrectly predicted as not relevant. The workload saving achieved with Abstrackr varied depending on the complexity and size of the reviews (9 % rituximab, 40 % dietary fibre, 67 % aHUS, and 57 % ECHO). The proportion of citations predicted as relevant, and therefore, warranting further full text inspection (i.e. the precision of the prediction) ranged from 16 % (aHUS) to 45 % (rituximab) and was affected by the complexity of the reviews. The false negative rate ranged from 2.4 to 21.7 %. Sensitivity analysis performed on the aHUS dataset increased the precision from 16 to 25 % and increased the workload saving by 10 % but increased the number of relevant studies missed. Sensitivity analysis performed with the larger ECHO dataset increased the workload saving (80 %) but reduced the precision (6.8 %) and increased the number of missed citations.\n\n\nCONCLUSIONS\nSemi-automated title and abstract screening with Abstrackr has the potential to save time and reduce research waste.", "title": "" }, { "docid": "neg:1840247_9", "text": "The world of e-commerce is reshaping marketing strategies based on the analysis of e-commerce data. Huge amounts of data are being collecting and can be analyzed for some discoveries that may be used as guidance for people sharing same interests but lacking experience. Indeed, recommendation systems are becoming an essential business strategy tool from just a novelty. Many large e-commerce web sites are already encapsulating recommendation systems to provide a customer friendly environment by helping customers in their decision-making process. A recommendation system learns from a customer behavior patterns and recommend the most valuable from available alternative choices. In this paper, we developed a two-stage algorithm using self-organizing map (SOM) and fuzzy k-means with an improved distance function to classify users into clusters. This will lead to have in the same cluster users who mostly share common interests. Results from the combination of SOM and fuzzy K-means revealed better accuracy in identifying user related classes or clusters. We validated our results using various datasets to check the accuracy of the employed clustering approach. The generated groups of users form the domain for transactional datasets to find most valuable products for customers.", "title": "" }, { "docid": "neg:1840247_10", "text": "This paper aims to determine which is the best human action recognition method based on features extracted from RGB-D devices, such as the Microsoft Kinect. A review of all the papers that make reference to MSR Action3D, the most used dataset that includes depth information acquired from a RGB-D device, has been performed. We found that the validation method used by each work differs from the others. So, a direct comparison among works cannot be made. However, almost all the works present their results comparing them without taking into account this issue. Therefore, we present different rankings according to the methodology used for the validation in orden to clarify the existing confusion.", "title": "" }, { "docid": "neg:1840247_11", "text": "Currently, two frameworks of causal reasoning compete: Whereas dependency theories focus on dependencies between causes and effects, dispositional theories model causation as an interaction between agents and patients endowed with intrinsic dispositions. 
One important finding providing a bridge between these two frameworks is that failures of causes to generate their effects tend to be differentially attributed to agents and patients regardless of their location on either the cause or the effect side. To model different types of error attribution, we augmented a causal Bayes net model with separate error sources for causes and effects. In several experiments, we tested this new model using the size of Markov violations as the empirical indicator of differential assumptions about the sources of error. As predicted by the model, the size of Markov violations was influenced by the location of the agents and was moderated by the causal structure and the type of causal variables.", "title": "" }, { "docid": "neg:1840247_12", "text": "We present a novel approach to video segmentation using multiple object proposals. The problem is formulated as a minimization of a novel energy function defined over a fully connected graph of object proposals. Our model combines appearance with long-range point tracks, which is key to ensure robustness with respect to fast motion and occlusions over longer video sequences. As opposed to previous approaches based on object proposals, we do not seek the best per-frame object hypotheses to perform the segmentation. Instead, we combine multiple, potentially imperfect proposals to improve overall segmentation accuracy and ensure robustness to outliers. Overall, the basic algorithm consists of three steps. First, we generate a very large number of object proposals for each video frame using existing techniques. Next, we perform an SVM-based pruning step to retain only high quality proposals with sufficiently discriminative power. Finally, we determine the fore-and background classification by solving for the maximum a posteriori of a fully connected conditional random field, defined using our novel energy function. Experimental results on a well established dataset demonstrate that our method compares favorably to several recent state-of-the-art approaches.", "title": "" }, { "docid": "neg:1840247_13", "text": "Stochastic binary hidden units in a multi-layer perceptron (MLP) network give at least three potential benefits when compared to deterministic MLP networks. (1) They allow to learn one-to-many type of mappings. (2) They can be used in structured prediction problems, where modeling the internal structure of the output is important. (3) Stochasticity has been shown to be an excellent regularizer, which makes generalization performance potentially better in general. However, training stochastic networks is considerably more difficult. We study training using M samples of hidden activations per input. We show that the case M = 1 leads to a fundamentally different behavior where the network tries to avoid stochasticity. We propose two new estimators for the training gradient and propose benchmark tests for comparing training algorithms. Our experiments confirm that training stochastic networks is difficult and show that the proposed two estimators perform favorably among all the five known estimators.", "title": "" }, { "docid": "neg:1840247_14", "text": "This study develops a crowdfunding sponsor typology based on sponsors’ motivations for participating in a project. 
Using a two by two crowdfunding motivation framework, we analyzed six relevant funding motivations—interest, playfulness, philanthropy, reward, relationship, and recognition—and identified four types of crowdfunding sponsors: angelic backer, reward hunter, avid fan, and tasteful hermit. They are profiled in terms of the antecedents and consequences of funding motivations. Angelic backers are similar in some ways to traditional charitable donors while reward hunters are analogous to market investors; thus they differ in their approach to crowdfunding. Avid fans comprise the most passionate sponsor group, and they are similar to members of a brand community. Tasteful hermits support their projects as actively as avid fans, but they have lower extrinsic and others-oriented motivations. The results show that these sponsor types reflect the nature of crowdfunding as a new form of co-creation in the E-commerce context. 2016 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "neg:1840247_15", "text": "This paper describes a blueprint for the proper design of a Fair Internet Regulation System (IRS), i.e. a system that will be implemented in national level and will encourage the participation of Internet users in enriching and correcting its “behavior”. Such a system will be easier to be accepted by Western democracies willing to implement a fair Internet regulation policy.", "title": "" }, { "docid": "neg:1840247_16", "text": "Automatic on-line signature verification is an intriguing intellectual challenge with many practical applications. I review the context of this problem and then describe my own approach to it, which breaks with tradition by relying primarily on the detailed shape of a signature for its automatic verification, rather than relying primarily on the pen dynamics during the production of the signature. I propose a robust, reliable, and elastic localshape-based model for handwritten on-line curves; this model is generated by first parameterizing each on-line curve over its normalized arc-length and then representing along the length of the curve, in a moving coordinate frame, measures of the curve within a sliding window that are analogous to the position of the center of mass, the torque exerted by a force, and the moments of inertia of a mass distribution about its center of mass. Further, I suggest the weighted and biased harmonic mean as a graceful mechanism of combining errors from multiple models of which at least one model is applicable but not necessarily more than one model is applicable, recommending that each signature be represented by multiple models, these models, perhaps, local and global, shape based and dynamics based. Finally, I outline a signature-verification algorithm that I have implemented and tested successfully both on databases and in live experiments.", "title": "" }, { "docid": "neg:1840247_17", "text": "While offering unique performance and energy-saving advantages, the use of Field-Programmable Gate Arrays (FPGAs) for database acceleration has demanded major concessions from system designers. Either the programmable chips have been used for very basic application tasks (such as implementing a rigid class of selection predicates) or their circuit definition had to be completely recompiled at runtime—a very CPU-intensive and time-consuming effort.\n This work eliminates the need for such concessions. 
As part of our XLynx implementation—an FPGA-based XML filter—we present skeleton automata, which is a design principle for data-intensive hardware circuits that offers high expressiveness and quick reconfiguration at the same time. Skeleton automata provide a generic implementation for a class of finite-state automata. They can be parameterized to any particular automaton instance in a matter of microseconds or less (as opposed to minutes or hours for complete recompilation).\n We showcase skeleton automata based on XML projection [Marian and Siméon 2003], a filtering technique that illustrates the feasibility of our strategy for a real-world and challenging task. By performing XML projection in hardware and filtering data in the network, we report on performance improvements of several factors while remaining nonintrusive to the back-end XML processor (we evaluate XLynx using the Saxon engine).", "title": "" }, { "docid": "neg:1840247_18", "text": "Recent deep learning approaches to Natural Language Generation mostly rely on sequence-to-sequence models. In these approaches, the input is treated as a sequence whereas in most cases, input to generation usually is either a tree or a graph. In this paper, we describe an experiment showing how enriching a sequential input with structural information improves results and help support the generation of paraphrases.", "title": "" }, { "docid": "neg:1840247_19", "text": "Three experiments examined the impact of excessive violence in sport video games on aggression-related variables. Participants played either a nonviolent simulation-based sports video game (baseball or football) or a matched excessively violent sports video game. Participants then completed measures assessing aggressive cognitions (Experiment 1), aggressive affect and attitudes towards violence in sports (Experiment 2), or aggressive behavior (Experiment 3). Playing an excessively violent sports video game increased aggressive affect, aggressive cognition, aggressive behavior, and attitudes towards violence in sports. Because all games were competitive, these findings indicate that violent content uniquely leads to increases in several aggression-related variables, as predicted by the General Aggression Model and related social–cognitive models. 2009 Elsevier Inc. All rights reserved. In 2002, ESPN aired an investigative piece examining the impact of excessively violent sports video games on youth’s attitudes towards sports (ESPN, 2002). At the time, Midway Games produced several sports games (e.g., NFL Blitz, MLB Slugfest, and NHL Hitz) containing excessive and unrealistic violence, presumably to appeal to non-sport fan video game players. These games were officially licensed by the National Football League, Major League Baseball, and the National Hockey League, which permitted Midway to implement team logos, players’ names, and players’ likenesses into the games. Within these games, players control real-life athletes and can perform excessively violent behaviors on the electronic field. The ESPN program questioned why the athletic leagues would allow their license to be used in this manner and what effect these violent sports games had on young players. Then in December 2004, the NFL granted exclusive license rights to EA Sports (ESPN.com, 2005). In response, Midway Games began publishing a more violent, grittier football game based on a fictitious league. 
The new football video game, which is rated appropriate only for people seventeen and older, features fictitious players engaging in excessive violent behaviors on and off the field, drug use, sex, and gambling (IGN.com, 2005). Violence in video games has been a major social issue, not limited to violence in sports video games. Over 85% of the games on ll rights reserved. ychology, Iowa State Univernited States. Fax: +1 515 294 the market contain some violence (Children Now, 2001). Approximately half of video games include serious violent actions toward other game characters (Children Now, 2001; Dietz, 1998; Dill, Gentile, Richter, & Dill, 2005). Indeed, Congressman Joe Baca of California recently introduced Federal legislation to require that violent video games contain a warning label about their link to aggression (Baca, 2009). Since 1999, the amount of daily video game usage by youth has nearly doubled (Roberts, Foehr, & Rideout, 2005). Almost 60% of American youth from ages 8 to 18 report playing video games on ‘‘any given day” and 30% report playing for more than an average of an hour a day (Roberts et al., 2005). Video game usage is high in youth regardless of sex, race, parental education, or household income (Roberts et al., 2005). Competition-only versus violent-content hypotheses Recent meta-analyses (e.g., Anderson et al., 2004, submitted for publication) have shown that violent video game exposure increases physiological arousal, aggressive affect, aggressive cognition, and aggressive behavior. Other studies link violent video game play to physiological desensitization to violence (e.g., Bartholow, Bushman, & Sestir, 2006; Carnagey, Anderson, & Bushman, 2007). Particularly interesting is the recent finding that violent video game play can increase aggression in both short and long term contexts. Besides the empirical evidence, there are strong theoretical reasons from the cognitive, social, and personality domains to expect 732 C.A. Anderson, N.L. Carnagey / Journal of Experimental Social Psychology 45 (2009) 731–739 violent video game effects on aggression-related variables. However, currently there are two competing hypotheses as to how violent video games increases aggression: the violent-content hypothesis and the competition-only hypothesis. General Aggression Model and the violent-content hypothesis The General Aggression Model (GAM) is an integration of several prior models of aggression (e.g., social learning theory, cognitive neoassociation) and has been detailed in several publications (Anderson & Bushman, 2002; Anderson & Carnagey, 2004; Anderson, Gentile, & Buckley, 2007; Anderson & Huesmann, 2003). GAM describes a cyclical pattern of interaction between the person and the environment. Input variables, such as provocation and aggressive personality, can affect decision processes and behavior by influencing one’s present internal state in at least one of three primary ways: by influencing current cognitions, affective state, and physiological arousal. That is, a specific input variable may directly influence only one, or two, or all three aspects of a person’s internal state. For example, uncomfortably hot temperature appears to increase aggression primarily by its direct impact on affective state (Anderson, Anderson, Dorr, DeNeve, & Flanagan, 2000). Of course, because affect, arousal, and cognition tend to influence each other, even input variables that primarily influence one aspect of internal state also tend to indirectly influence the other aspects. 
Although GAM is a general model and not specifically a model of media violence effects, it can easily be applied to media effects. Theoretically, violent media exposure might affect all three components of present internal state. Research has shown that playing violent video games can temporarily increase aggressive thoughts (e.g., Kirsh, 1998), affect (e.g., Ballard & Weist, 1996), and arousal (e.g., Calvert & Tan, 1994). Of course, nonviolent games also can increase arousal, and for this reason much prior work has focused on testing whether violent content can increase aggressive behavior even when physiological arousal is controlled. This usually is accomplished by selecting nonviolent games that are equally arousing (e.g., Anderson et al., 2004). Despite’s GAM’s primary focus on the current social episode, it is not restricted to short-term effects. With repeated exposure to certain types of stimuli (e.g., media violence, certain parenting practices), particular knowledge structures (e.g., aggressive scripts, attitudes towards violence) become chronically accessible. Over time, the individual employs these knowledge structures and occasionally receives environmental reinforcement for their usage. With time and repeated use, these knowledge structures gain strength and connections to other stimuli and knowledge structures, and therefore are more likely to be used in later situations. This accounts for the finding that repeatedly exposing children to media violence increases later aggression, even into adulthood (Anderson, Sakamoto, Gentile, Ihori, & Shibuya, 2008; Huesmann & Miller, 1994; Huesmann, Moise-Titus, Podolski, & Eron, 2003; Möller & Krahé, 2009; Wallenius & Punamaki, 2008). Such longterm effects result from the development, automatization, and reinforcement of aggression-related knowledge structures. In essence, the creation and automatization of these aggression-related knowledge structures and concomitant emotional desensitization changes the individual’s personality. For example, long-term consumers of violent media can become more aggressive in outlook, perceptual biases, attitudes, beliefs, and behavior than they were before the repeated exposure, or would have become without such exposure (e.g., Funk, Baldacci, Pasold, & Baumgardner, 2004; Gentile, Lynch, Linder, & Walsh, 2004; Krahé & Möller, 2004; Uhlmann & Swanson, 2004). In sum, GAM predicts that one way violent video games increase aggression is by the violent content increasing at least one of the aggression-related aspects of a person’s current internal state (short-term context), and over time increasing the chronic accessibility of aggression-related knowledge structures. This is the violent-content hypothesis. The competition hypothesis The competition hypothesis maintains that competitive situations stimulate aggressiveness. According to this hypothesis, many previous short-term (experimental) video game studies have found links between violent games and aggression not because of the violent content, but because violent video games typically involve competition, whereas nonviolent video games frequently are noncompetitive. The competitive aspect of video games might increase aggression by increasing arousal or by increasing aggressive thoughts or affect. Previous research has demonstrated that increases in physiological arousal can cause increases in aggression under some circumstances (Berkowitz, 1993). 
Competitive aspects of violent video games could also increase aggressive cognitions via links between aggressive and competition concepts (Anderson & Morrow, 1995; Deutsch, 1949, 1993). Thus, at a general level such competition effects are entirely consistent with GAM and with the violentcontent hypothesis. However, a strong version of the competition hypothesis states that violent content has no impact beyond its effects on competition and its sequela. This strong version, which we call the competition-only hypothesis, has not been adequately tested. Testing the competition-only hypothesis There has been little research conducted to examine the violent-content hypothesis versus the competition-only hypothesis (see Carnagey & Anderson, 2005 for one such example). To test these hypotheses against each other, one must randomly assign participants to play either violent or nonviolent video games, all of which are competitive. The use of sports video games meets this requirement and has other benefits. E", "title": "" } ]
1840248
Sex differences in response to visual sexual stimuli: a review.
[ { "docid": "pos:1840248_0", "text": "Men report more permissive sexual attitudes and behavior than do women. This experiment tested whether these differences might result from false accommodation to gender norms (distorted reporting consistent with gender stereotypes). Participants completed questionnaires under three conditions. Sex differences in self-reported sexual behavior were negligible in a bogus pipeline condition in which participants believed lying could be detected, moderate in an anonymous condition, and greatest in an exposure threat condition in which the experimenter could potentially view participants responses. This pattern was clearest for behaviors considered less acceptable for women than men (e.g., masturbation, exposure to hardcore & softcore erotica). Results suggest that some sex differences in self-reported sexual behavior reflect responses influenced by normative expectations for men and women.", "title": "" }, { "docid": "pos:1840248_1", "text": "During human evolutionary history, there were \"trade-offs\" between expending time and energy on child-rearing and mating, so both men and women evolved conditional mating strategies guided by cues signaling the circumstances. Many short-term matings might be successful for some men; others might try to find and keep a single mate, investing their effort in rearing her offspring. Recent evidence suggests that men with features signaling genetic benefits to offspring should be preferred by women as short-term mates, but there are trade-offs between a mate's genetic fitness and his willingness to help in child-rearing. It is these circumstances and the cues that signal them that underlie the variation in short- and long-term mating strategies between and within the sexes.", "title": "" } ]
[ { "docid": "neg:1840248_0", "text": "The safety and antifungal efficacy of amphotericin B lipid complex (ABLC) were evaluated in 556 cases of invasive fungal infection treated through an open-label, single-patient, emergency-use study of patients who were refractory to or intolerant of conventional antifungal therapy. All 556 treatment episodes were evaluable for safety. During the course of ABLC therapy, serum creatinine levels significantly decreased from baseline (P < .02). Among 162 patients with serum creatinine values > or = 2.5 mg/dL at the start of ABLC therapy (baseline), the mean serum creatinine value decreased significantly from the first week through the sixth week (P < or = .0003). Among the 291 mycologically confirmed cases evaluable for therapeutic response, there was a complete or partial response to ABLC in 167 (57%), including 42% (55) of 130 cases of aspergillosis, 67% (28) of 42 cases of disseminated candidiasis, 71% (17) of 24 cases of zygomycosis, and 82% (9) of 11 cases of fusariosis. Response rates varied according to the pattern of invasive fungal infection, underlying condition, and reason for enrollment (intolerance versus progressive infection). These findings support the use of ABLC in the treatment of invasive fungal infections in patients who are intolerant of or refractory to conventional antifungal therapy.", "title": "" }, { "docid": "neg:1840248_1", "text": "Nowadays terahertz spectroscopy is a well-established technique and recent progresses in technology demonstrated that this new technique is useful for both fundamental research and industrial applications. Varieties of applications such as imaging, non destructive testing, quality control are about to be transferred to industry supported by permanent improvements from basic research. Since chemometrics is today routinely applied to IR spectroscopy, we discuss in this paper the advantages of using chemometrics in the framework of terahertz spectroscopy. Different analytical procedures are illustrates. We conclude that advanced data processing is the key point to validate routine terahertz spectroscopy as a new reliable analytical technique.", "title": "" }, { "docid": "neg:1840248_2", "text": "The beautification of human photos usually requires professional editing softwares, which are difficult for most users. In this technical demonstration, we propose a deep face beautification framework, which is able to automatically modify the geometrical structure of a face so as to boost the attractiveness. A learning based approach is adopted to capture the underlying relations between the facial shape and the attractiveness via training the Deep Beauty Predictor (DBP). Relying on the pre-trained DBP, we construct the BeAuty SHaper (BASH) to infer the \"flows\" of landmarks towards the maximal aesthetic level. BASH modifies the facial landmarks with the direct guidance of the beauty score estimated by DBP.", "title": "" }, { "docid": "neg:1840248_3", "text": "There is intense interest in graphene in fields such as physics, chemistry, and materials science, among others. Interest in graphene's exceptional physical properties, chemical tunability, and potential for applications has generated thousands of publications and an accelerating pace of research, making review of such research timely. 
Here is an overview of the synthesis, properties, and applications of graphene and related materials (primarily, graphite oxide and its colloidal suspensions and materials made from them), from a materials science perspective.", "title": "" }, { "docid": "neg:1840248_4", "text": "Enfin, je ne peux terminer ces quelques lignes sans remercier ma famille qui dans l'ombre m'apporte depuis si longtemps tout le soutien qui m'est nécessaire, et spécialement, mes pensées vont à Magaly. Un peu d'histoire… Dès les années 60, les données informatisées dans les organisations ont pris une importance qui n'a cessé de croître. Les systèmes informatiques gérant ces données sont utilisés essentiellement pour faciliter l'activité quotidienne des organisations et pour soutenir les prises de décision. La démocratisation de la micro-informatique dans les années 80 a permis un important développement de ces systèmes augmentant considérablement les quantités de données 1 2 informatisées disponibles. Face aux évolutions nombreuses et rapides qui s'imposent aux organisations, la prise de décision est devenue dès les années 90 une activité primordiale nécessitant la mise en place de systèmes dédiés efficaces [Inmon, 1994]. A partir de ces années, les éditeurs de logiciels ont proposé des outils facilitant l'analyse des données pour soutenir les prises de décision. Les tableurs sont probablement les premiers outils qui ont été utilisés pour analyser les données à des fins décisionnelles. Ils ont été complétés par des outils facilitant l'accès aux données pour les décideurs au travers d'interfaces graphiques dédiées au « requêtage » ; citons le logiciel Business Objects qui reste aujourd'hui encore l'un des plus connus. Le développement de systèmes dédiés à la prise de décision a vu naître des outils E.T.L. (« Extract-Transform-Load ») destinés à faciliter l'extraction et la transformation de données décisionnelles. Dès la fin des années 90, les acteurs importants tels que Microsoft, Oracle, IBM, SAP sont intervenus sur ce nouveau marché en faisant évoluer leurs outils et en acquérant de nombreux logiciels spécialisés ; par exemple, SAP vient d'acquérir Business Objects pour 4,8 milliards d'euros. Ils disposent aujourd'hui d'offres complètes intégrant l'ensemble de la chaîne décisionnelle : E.T.L., stockage (S.G.B.D.), restitution et analyse. Cette dernière décennie connaît encore une évolution marquante avec le développement d'une offre issue du monde du logiciel libre (« open source ») qui atteint aujourd'hui une certaine maturité (Talend 3 , JPalo 4 , Jasper 5). Dominée par les outils du marché, l'informatique décisionnelle est depuis le milieu des années 90 un domaine investi par le monde de la recherche au travers des concepts d'entrepôt de données (« data warehouse ») [Widom, 1995] [Chaudhury, et al., 1997] et d'OLAP (« On-Line Analytical Processing ») [Codd, et al., 1993]. D'abord diffusées dans …", "title": "" }, { "docid": "neg:1840248_5", "text": "Monolingual dictionaries are widespread and semantically rich resources. This paper presents a simple model that learns to compute word embeddings by processing dictionary definitions and trying to reconstruct them. It exploits the inherent recursivity of dictionaries by encouraging consistency between the representations it uses as inputs and the representations it produces as outputs. The resulting embeddings are shown to capture semantic similarity better than regular distributional methods and other dictionary-based methods. 
In addition, the method shows strong performance when trained exclusively on dictionary data and generalizes in one shot.", "title": "" }, { "docid": "neg:1840248_6", "text": "We propose Quadruplet Convolutional Neural Networks (Quad-CNN) for multi-object tracking, which learn to associate object detections across frames using quadruplet losses. The proposed networks consider target appearances together with their temporal adjacencies for data association. Unlike conventional ranking losses, the quadruplet loss enforces an additional constraint that makes temporally adjacent detections more closely located than the ones with large temporal gaps. We also employ a multi-task loss to jointly learn object association and bounding box regression for better localization. The whole network is trained end-to-end. For tracking, the target association is performed by minimax label propagation using the metric learned from the proposed network. We evaluate performance of our multi-object tracking algorithm on public MOT Challenge datasets, and achieve outstanding results.", "title": "" }, { "docid": "neg:1840248_7", "text": "Convolutional Neural Network (CNN) was firstly introduced in Computer Vision for image recognition by LeCun et al. in 1989. Since then, it has been widely used in image recognition and classification tasks. The recent impressive success of Krizhevsky et al. in ILSVRC 2012 competition demonstrates the significant advance of modern deep CNN on image classification task. Inspired by his work, many recent research works have been concentrating on understanding CNN and extending its application to more conventional computer vision tasks. Their successes and lessons have promoted the development of both CNN and vision science. This article makes a survey of recent progress in CNN since 2012. We will introduce the general architecture of a modern CNN and make insights into several typical CNN incarnations which have been studied extensively. We will also review the efforts to understand CNNs and review important applications of CNNs in computer vision tasks.", "title": "" }, { "docid": "neg:1840248_8", "text": "Smart contracts are full-fledged programs that run on blockchains (e.g., Ethereum, one of the most popular blockchains). In Ethereum, gas (in Ether, a cryptographic currency like Bitcoin) is the execution fee compensating the computing resources of miners for running smart contracts. However, we find that under-optimized smart contracts cost more gas than necessary, and therefore the creators or users will be overcharged. In this work, we conduct the first investigation on Solidity, the recommended compiler, and reveal that it fails to optimize gas-costly programming patterns. In particular, we identify 7 gas-costly patterns and group them to 2 categories. Then, we propose and develop GASPER, a new tool for automatically locating gas-costly patterns by analyzing smart contracts' bytecodes. The preliminary results on discovering 3 representative patterns from 4,240 real smart contracts show that 93.5%, 90.1% and 80% contracts suffer from these 3 patterns, respectively.", "title": "" }, { "docid": "neg:1840248_9", "text": "String similarity join is an important operation in data integration and cleansing that finds similar string pairs from two collections of strings. More than ten algorithms have been proposed to address this problem in the recent two decades. However, existing algorithms have not been thoroughly compared under the same experimental framework. 
For example, some algorithms are tested only on specific datasets. This makes it rather difficult for practitioners to decide which algorithms should be used for various scenarios. To address this problem, in this paper we provide a comprehensive survey on a wide spectrum of existing string similarity join algorithms, classify them into different categories based on their main techniques, and compare them through extensive experiments on a variety of real-world datasets with different characteristics. We also report comprehensive findings obtained from the experiments and provide new insights about the strengths and weaknesses of existing similarity join algorithms which can guide practitioners to select appropriate algorithms for various scenarios.", "title": "" }, { "docid": "neg:1840248_10", "text": "The volume of data is growing at an increasing rate. This growth is both in size and in connectivity, where connectivity refers to the increasing presence of relationships between data. Social networks such as Facebook and Twitter store and process petabytes of data each day. Graph databases have gained renewed interest in the last years, due to their applications in areas such as the Semantic Web and Social Network Analysis. Graph databases provide an effective and efficient solution to data storage and querying data in these scenarios, where data is rich in relationships. In this paper, it is analyzed the fundamental points of graph databases, showing their main characteristics and advantages. We study Neo4j, the top graph database software in the market and evaluate its performance using the Social Network Benchmark (SNB).", "title": "" }, { "docid": "neg:1840248_11", "text": "Existing automatic music generation approaches that feature deep learning can be broadly classified into two types: raw audio models and symbolic models. Symbolic models, which train and generate at the note level, are currently the more prevalent approach; these models can capture long-range dependencies of melodic structure, but fail to grasp the nuances and richness of raw audio generations. Raw audio models, such as DeepMind’s WaveNet, train directly on sampled audio waveforms, allowing them to produce realistic-sounding, albeit unstructured music. In this paper, we propose an automatic music generation methodology combining both of these approaches to create structured, realistic-sounding compositions. We consider a Long Short Term Memory network to learn the melodic structure of different styles of music, and then use the unique symbolic generations from this model as a conditioning input to a WaveNet-based raw audio generator, creating a model for automatic, novel music. We then evaluate this approach by showcasing results of this work.", "title": "" }, { "docid": "neg:1840248_12", "text": "Various aspects of the theory of random walks on graphs are surveyed. In particular, estimates on the important parameters of access time, commute time, cover time and mixing time are discussed. Connections with the eigenvalues of graphs and with electrical networks, and the use of these connections in the study of random walks is described. We also sketch recent algorithmic applications of random walks, in particular to the problem of sampling.", "title": "" }, { "docid": "neg:1840248_13", "text": "Text mining is the use of automated methods for exploiting the enormous amount of knowledge available in the biomedical literature. 
There are at least as many motivations for doing text mining work as there are types of bioscientists. Model organism database curators have been heavy participants in the development of the field due to their need to process large numbers of publications in order to populate the many data fields for every gene in their species of interest. Bench scientists have built biomedical text mining applications to aid in the development of tools for interpreting the output of high-throughput assays and to improve searches of sequence databases (see [1] for a review). Bioscientists of every stripe have built applications to deal with the dual issues of the doubleexponential growth in the scientific literature over the past few years and of the unique issues in searching PubMed/ MEDLINE for genomics-related publications. A surprising phenomenon can be noted in the recent history of biomedical text mining: although several systems have been built and deployed in the past few years—Chilibot, Textpresso, and PreBIND (see Text S1 for these and most other citations), for example—the ones that are seeing high usage rates and are making productive contributions to the working lives of bioscientists have been built not by text mining specialists, but by bioscientists. We speculate on why this might be so below. Three basic types of approaches to text mining have been prevalent in the biomedical domain. Co-occurrence– based methods do no more than look for concepts that occur in the same unit of text—typically a sentence, but sometimes as large as an abstract—and posit a relationship between them. (See [2] for an early co-occurrence–based system.) For example, if such a system saw that BRCA1 and breast cancer occurred in the same sentence, it might assume a relationship between breast cancer and the BRCA1 gene. Some early biomedical text mining systems were co-occurrence–based, but such systems are highly error prone, and are not commonly built today. In fact, many text mining practitioners would not consider them to be text mining systems at all. Co-occurrence of concepts in a text is sometimes used as a simple baseline when evaluating more sophisticated systems; as such, they are nontrivial, since even a co-occurrence– based system must deal with variability in the ways that concepts are expressed in human-produced texts. For example, BRCA1 could be referred to by any of its alternate symbols—IRIS, PSCP, BRCAI, BRCC1, or RNF53 (or by any of their many spelling variants, which include BRCA1, BRCA-1, and BRCA 1)— or by any of the variants of its full name, viz. breast cancer 1, early onset (its official name per Entrez Gene and the Human Gene Nomenclature Committee), as breast cancer susceptibility gene 1, or as the latter’s variant breast cancer susceptibility gene-1. Similarly, breast cancer could be referred to as breast cancer, carcinoma of the breast, or mammary neoplasm. These variability issues challenge more sophisticated systems, as well; we discuss ways of coping with them in Text S1. Two more common (and more sophisticated) approaches to text mining exist: rule-based or knowledgebased approaches, and statistical or machine-learning-based approaches. The variety of types of rule-based systems is quite wide. In general, rulebased systems make use of some sort of knowledge. 
This might take the form of general knowledge about how language is structured, specific knowledge about how biologically relevant facts are stated in the biomedical literature, knowledge about the sets of things that bioscientists talk about and the kinds of relationships that they can have with one another, and the variant forms by which they might be mentioned in the literature, or any subset or combination of these. (See [3] for an early rule-based system, and [4] for a discussion of rule-based approaches to various biomedical text mining tasks.) At one end of the spectrum, a simple rule-based system might use hardcoded patterns—for example, ,gene. plays a role in ,disease. or ,disease. is associated with ,gene.—to find explicit statements about the classes of things in which the researcher is interested. At the other end of the spectrum, a rulebased system might use sophisticated linguistic and semantic analyses to recognize a wide range of possible ways of making assertions about those classes of things. It is worth noting that useful systems have been built using technologies at both ends of the spectrum, and at many points in between. In contrast, statistical or machine-learning–based systems operate by building classifiers that may operate on any level, from labelling part of speech to choosing syntactic parse trees to classifying full sentences or documents. (See [5] for an early learning-based system, and [4] for a discussion of learning-based approaches to various biomedical text mining tasks.) Rule-based and statistical systems each have their advantages and", "title": "" }, { "docid": "neg:1840248_14", "text": "After a knee joint surgery, due to severe pain and immobility of the patient, the tissue around the knee become harder and knee stiffness will occur, which may causes many problems such as scar tissue swelling, bleeding, and fibrosis. A CPM (Continuous Passive Motion) machine is an apparatus that is being used to patient recovery, retrieving moving abilities of the knee, and reducing tissue swelling, after the knee joint surgery. This device prevents frozen joint syndrome (adhesive capsulitis), joint stiffness, and articular cartilage destruction by stimulating joint tissues, and flowing synovial fluid and blood around the knee joint. In this study, a new, light, and portable CPM machine with an appropriate interface, is designed and manufactured. The knee joint can be rotated from the range of -15° to 120° with a pace of 0.1 degree/sec to 1 degree/sec by this machine. One of the most important advantages of this new machine is its own user-friendly interface. This apparatus is controlled via an Android-based application; therefore, the users can use this machine easily via their own smartphones without the necessity to an extra controlling device. Besides, because of its apt size, this machine is a portable device. Smooth movement without any vibration and adjusting capability for different anatomies are other merits of this new CPM machine.", "title": "" }, { "docid": "neg:1840248_15", "text": "Accurate knowledge on the absolute or true speed of a vehicle, if and when available, can be used to enhance advanced vehicle dynamics control systems such as anti-lock brake systems (ABS) and auto-traction systems (ATS) control schemes. Current conventional method uses wheel speed measurements to estimate the speed of the vehicle. As a result, indication of the vehicle speed becomes erroneous and, thus, unreliable when large slips occur between the wheels and terrain. 
This paper describes a fuzzy rule-based Kalman filtering technique which employs an additional accelerometer to complement the wheel-based speed sensor, and produce an accurate estimation of the true speed of a vehicle. We use the Kalman filters to deal with the noise and uncertainties in the speed and acceleration models, and fuzzy logic to tune the covariances and reset the initialization of the filter according to slip conditions detected and measurement-estimation condition. Experiments were conducted using an actual vehicle to verify the proposed strategy. Application of the fuzzy logic rule-based Kalman filter shows that accurate estimates of the absolute speed can be achieved even under significant braking skid and traction slip conditions.", "title": "" }, { "docid": "neg:1840248_16", "text": "Vulnerabilities in web applications allow malicious users to obtain unrestricted access to private and confidential information. SQL injection attacks rank at the top of the list of threats directed at any database-driven application written for the Web. An attacker can take advantages of web application programming security flaws and pass unexpected malicious SQL statements through a web application for execution by the back-end database. This paper proposes a novel specification-based methodology for the detection of exploitations of SQL injection vulnerabilities. The new approach on the one hand utilizes specifications that define the intended syntactic structure of SQL queries that are produced and executed by the web application and on the other hand monitors the application for executing queries that are in violation of the specification.\n The three most important advantages of the new approach against existing analogous mechanisms are that, first, it prevents all forms of SQL injection attacks; second, its effectiveness is independent of any particular target system, application environment, or DBMS; and, third, there is no need to modify the source code of existing web applications to apply the new protection scheme to them.\n We developed a prototype SQL injection detection system (SQL-IDS) that implements the proposed algorithm. The system monitors Java-based applications and detects SQL injection attacks in real time. We report some preliminary experimental results over several SQL injection attacks that show that the proposed query-specific detection allows the system to perform focused analysis at negligible computational overhead without producing false positives or false negatives. Therefore, the new approach is very efficient in practice.", "title": "" }, { "docid": "neg:1840248_17", "text": "Human Activity recognition (HAR) is an important area of research in ubiquitous computing and Human Computer Interaction. To recognize activities using mobile or wearable sensor, data are collected using appropriate sensors, segmented, needed features extracted and activities categories using discriminative models (SVM, HMM, MLP etc.). Feature extraction is an important stage as it helps to reduce computation time and ensure enhanced recognition accuracy. Earlier researches have used statistical features which require domain expert and handcrafted features. However, the advent of deep learning that extracts salient features from raw sensor data and has provided high performance in computer vision, speech and image recognition.
Based on the recent advances recorded in deep learning for human activity recognition, we briefly reviewed the different deep learning methods for human activities implemented recently and then propose a conceptual deep learning frameworks that can be used to extract global features that model the temporal dependencies using Gated Recurrent Units. The framework when implemented would comprise of seven convolutional layer, two Gated recurrent unit and Support Vector Machine (SVM) layer for classification of activity details. The proposed technique is still under development and will be evaluated with benchmarked datasets and compared with other baseline deep learning algorithms.", "title": "" }, { "docid": "neg:1840248_18", "text": "In computer vision, image datasets used for classification are naturally associated with multiple labels and comprised of multiple views, because each image may contain several objects (e.g., pedestrian, bicycle, and tree) and is properly characterized by multiple visual features (e.g., color, texture, and shape). Currently, available tools ignore either the label relationship or the view complementarily. Motivated by the success of the vector-valued function that constructs matrix-valued kernels to explore the multilabel structure in the output space, we introduce multiview vector-valued manifold regularization (MV3MR) to integrate multiple features. MV3MR exploits the complementary property of different features and discovers the intrinsic local geometry of the compact support shared by different features under the theme of manifold regularization. We conduct extensive experiments on two challenging, but popular, datasets, PASCAL VOC' 07 and MIR Flickr, and validate the effectiveness of the proposed MV3MR for image classification.", "title": "" } ]
1840249
Real-Time Color Coded Object Detection Using a Modular Computer Vision Library
[ { "docid": "pos:1840249_0", "text": "The core problem addressed in this article is the 3D position detection of a spherical object of knownradius in a single image frame, obtained by a dioptric vision system consisting of only one fisheye lens camera that follows equidistant projection model. The central contribution is a bijection principle between a known-radius spherical object’s 3D world position and its 2D projected image curve, that we prove, thus establishing that for every possible 3D world position of the spherical object, there exists a unique curve on the image plane if the object is projected through a fisheye lens that follows equidistant projection model. Additionally, we present a setup for the experimental verification of the principle’s correctness. In previously published works we have applied this principle to detect and subsequently track a known-radius spherical object. 2014 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "pos:1840249_1", "text": "This paper describes a complete and efficient vision system d eveloped for the robotic soccer team of the University of Aveiro, CAMB ADA (Cooperative Autonomous Mobile roBots with Advanced Distributed Ar chitecture). The system consists on a firewire camera mounted vertically on th e top of the robots. A hyperbolic mirror placed above the camera reflects the 360 d egrees of the field around the robot. The omnidirectional system is used to find t he ball, the goals, detect the presence of obstacles and the white lines, used by our localization algorithm. In this paper we present a set of algorithms to extract efficiently the color information of the acquired images and, in a second phase, ex tract the information of all objects of interest. Our vision system architect ure uses a distributed paradigm where the main tasks, namely image acquisition, co lor extraction, object detection and image visualization, are separated in se veral processes that can run at the same time. We developed an efficient color extracti on algorithm based on lookup tables and a radial model for object detection. Our participation in the last national robotic contest, ROBOTICA 2007, where we have obtained the first place in the Medium Size League of robotic soccer, shows the e ffectiveness of our algorithms. Moreover, our experiments show that the sys tem is fast and accurate having a maximum processing time independently of the r obot position and the number of objects found in the field.", "title": "" } ]
[ { "docid": "neg:1840249_0", "text": "This conceptual paper in sustainable business research introduces a business sustainability maturity model as an innovative solution to support companies move towards sustainable development. Such model offers the possibility for each firm to individually assess its position regarding five sustainability maturity levels and, as a consequence, build a tailored as well as a common strategy along its network of relationships and influence to progress towards higher levels of sustainable development. The maturity model suggested is based on the belief that business sustainability is a continuous process of evolution in which a company will be continuously seeking to achieve its vision of sustainable development in uninterrupted cycles of improvement, where at each new cycle the firm starts the process at a higher level of business sustainability performance. The referred model is therefore dynamic to incorporate changes along the way and enable its own evolution following the firm’s and its network partners’ progress towards the sustainability vision. The research on which this paper is based combines expertise in science and technology policy, R&D and innovation management, team performance and organisational learning, strategy alignment and integrated business performance, knowledge management and technology foresighting.", "title": "" }, { "docid": "neg:1840249_1", "text": "We propose drl-RPN, a deep reinforcement learning-based visual recognition model consisting of a sequential region proposal network (RPN) and an object detector. In contrast to typical RPNs, where candidate object regions (RoIs) are selected greedily via class-agnostic NMS, drl-RPN optimizes an objective closer to the final detection task. This is achieved by replacing the greedy RoI selection process with a sequential attention mechanism which is trained via deep reinforcement learning (RL). Our model is capable of accumulating class-specific evidence over time, potentially affecting subsequent proposals and classification scores, and we show that such context integration significantly boosts detection accuracy. Moreover, drl-RPN automatically decides when to stop the search process and has the benefit of being able to jointly learn the parameters of the policy and the detector, both represented as deep networks. Our model can further learn to search over a wide range of exploration-accuracy trade-offs making it possible to specify or adapt the exploration extent at test time. The resulting search trajectories are image- and category-dependent, yet rely only on a single policy over all object categories. Results on the MS COCO and PASCAL VOC challenges show that our approach outperforms established, typical state-of-the-art object detection pipelines.", "title": "" }, { "docid": "neg:1840249_2", "text": "In this paper, a comparison between existing state of the art multilevel inverter topologies is performed. The topologies examined are the neutral point clamp multilevel inverter (NPCMLI) or diode-clamped multilevel inverter (DCMLI), the flying capacitor multilevel inverter (FCMLI) and the cascaded cell multilevel inverter (CCMLI). The comparison of these inverters is based on the criteria of output voltage quality (Peak value of the fundamental and dominant harmonic components and THD), power circuitry complexity, and implementation cost. 
The comparison results are based on theoretical results verified by detailed simulation results.", "title": "" }, { "docid": "neg:1840249_3", "text": "This paper presents a new method of extracting LF model based parameters using a spectral model matching approach. Strategies are described for overcoming some of the known difficulties of this type of approach, in particular high frequency noise. The new method performed well compared to a typical time based method particularly in terms of robustness against distortions introduced by the recording system and in terms of the ability of parameters extracted in this manner to differentiate three discrete voice qualities. Results from this study are very promising for the new method and offer a way of extracting a set of non-redundant spectral parameters that may be very useful in both recognition and synthesis systems.", "title": "" }, { "docid": "neg:1840249_4", "text": "This paper describes a student project examining mechanisms with which to attack Bluetooth-enabled devices. The paper briefly describes the protocol architecture of Bluetooth and the Java interface that programmers can use to connect to Bluetooth communication services. Several types of attacks are described, along with a detailed example of two attack tools, Bloover II and BT Info.", "title": "" }, { "docid": "neg:1840249_5", "text": "In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients - manually annotated by up to four raters - and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.", "title": "" }, { "docid": "neg:1840249_6", "text": "An Intrusion Detection System (IDS) is a software that monitors a single or a network of computers for malicious activities (attacks) that are aimed at stealing or censoring information or corrupting network protocols. Most techniques used in today’s IDS are not able to deal with the dynamic and complex nature of cyber attacks on computer networks. Hence, efficient adaptive methods like various techniques of machine learning can result in higher detection rates, lower false alarm rates and reasonable computation and communication costs. In this paper, we study several such schemes and compare their performance. We divide the schemes into methods based on classical artificial intelligence (AI) and methods based on computational intelligence (CI). 
We explain how various characteristics of CI techniques can be used to build efficient IDS.", "title": "" }, { "docid": "neg:1840249_7", "text": "Multi-head attention is appealing for the ability to jointly attend to information from different representation subspaces at different positions. In this work, we introduce a disagreement regularization to explicitly encourage the diversity among multiple attention heads. Specifically, we propose three types of disagreement regularization, which respectively encourage the subspace, the attended positions, and the output representation associated with each attention head to be different from other heads. Experimental results on widely-used WMT14 English⇒German and WMT17 Chinese⇒English translation tasks demonstrate the effectiveness and universality of the proposed approach.", "title": "" }, { "docid": "neg:1840249_8", "text": "A fall monitor system is necessary to reduce the rate of fall fatalities in elderly people. As an accelerometer has been smaller and inexpensive, it has been becoming widely used in motion detection fields. This paper proposes the falling detection algorithm based on back propagation neural network to detect the fall of elderly people. In the experiment, a tri-axial accelerometer was attached to waists of five healthy and young people. In order to evaluate the performance of the fall detection, five young people were asked to simulate four daily-life activities and four falls; walking, jumping, flopping on bed, rising from bed, front fall, back fall, left fall and right fall. The experimental results show that the proposed algorithm can potentially distinguish the falling activities from the other daily-life activities.", "title": "" }, { "docid": "neg:1840249_9", "text": "A standard set of variables is extracted from a set of studies with different perspectives and different findings involving learning aids in the classroom. The variables are then used to analyze the studies in order to draw conclusions about learning aids in general and manipulatives in particular.", "title": "" }, { "docid": "neg:1840249_10", "text": "While most steps in the modern object detection methods are learnable, the region feature extraction step remains largely handcrafted, featured by RoI pooling methods. This work proposes a general viewpoint that unifies existing region feature extraction methods and a novel method that is end-to-end learnable. The proposed method removes most heuristic choices and outperforms its RoI pooling counterparts. It moves further towards fully learnable object detection.", "title": "" }, { "docid": "neg:1840249_11", "text": "This paper seeks a simple, cost effective and compact gate drive circuit for bi-directional switch of matrix converter. Principals of IGBT commutation and bi-directional switch commutation in matrix converters are reviewed. Three simple IGBT gate drive circuits are presented and simulated in PSpice and simulation results are approved by experiments in the end of this paper. Paper concludes with comparative numbers of gate drive costs.", "title": "" }, { "docid": "neg:1840249_12", "text": "Attempting to locate and quantify material on the Web that is hidden from typical search techniques.", "title": "" }, { "docid": "neg:1840249_13", "text": "The tasks in fine-grained opinion mining can be regarded as either a token-level sequence labeling problem or as a semantic compositional task. 
We propose a general class of discriminative models based on recurrent neural networks (RNNs) and word embeddings that can be successfully applied to such tasks without any taskspecific feature engineering effort. Our experimental results on the task of opinion target identification show that RNNs, without using any hand-crafted features, outperform feature-rich CRF-based models. Our framework is flexible, allows us to incorporate other linguistic features, and achieves results that rival the top performing systems in SemEval-2014.", "title": "" }, { "docid": "neg:1840249_14", "text": "As a prospective candidate material for surface coating and repair applications, nickel-based superalloy Inconel 718 (IN718) was deposited on American Iron and Steel Institute (AISI) 4140 alloy steel substrate by laser engineered net shaping (LENS) to investigate the compatibility between two dissimilar materials with a focus on interface bonding and fracture behavior of the hybrid specimens. The results show that the interface between the two dissimilar materials exhibits good metallurgical bonding. Through the tensile test, all the fractures occurred in the as-deposited IN718 section rather than the interface or the substrate, implying that the as-deposited interlayer bond strength is weaker than the interfacial bond strength. From the fractography using scanning electron microscopy (SEM) and energy disperse X-ray spectrometry (EDS), three major factors affecting the tensile fracture failure of the as-deposited part are (i) metallurgical defects such as incompletely melted powder particles, lack-of-fusion porosity, and micropores; (ii) elemental segregation and Laves phase, and (iii) oxide formation. The fracture failure mechanism is a combination of all these factors which are detrimental to the mechanical properties and structural integrity by causing premature fracture failure of the as-deposited IN718.", "title": "" }, { "docid": "neg:1840249_15", "text": "In this paper we introduce a real-time system for action detection. The system uses a small set of robust features extracted from 3D skeleton data. Features are effectively described based on the probability distribution of skeleton data. The descriptor computes a pyramid of sample covariance matrices and mean vectors to encode the relationship between the features. For handling the intra-class variations of actions, such as action temporal scale variations, the descriptor is computed using different window scales for each action. Discriminative elements of the descriptor are mined using feature selection. The system achieves accurate detection results on difficult unsegmented sequences. Experiments on MSRC-12 and G3D datasets show that the proposed system outperforms the state-of-the-art in detection accuracy with very low latency. To the best of our knowledge, we are the first to propose using multi-scale description in action detection from 3D skeleton data.", "title": "" }, { "docid": "neg:1840249_16", "text": "A content-based image retrieval (CBIR) framework for diverse collection of medical images of different imaging modalities, anatomic regions with different orientations and biological systems is proposed. Organization of images in such a database (DB) is well defined with predefined semantic categories; hence, it can be useful for category-specific searching. The proposed framework consists of machine learning methods for image prefiltering, similarity matching using statistical distance measures, and a relevance feedback (RF) scheme. 
To narrow down the semantic gap and increase the retrieval efficiency, we investigate both supervised and unsupervised learning techniques to associate low-level global image features (e.g., color, texture, and edge) in the projected PCA-based eigenspace with their high-level semantic and visual categories. Specially, we explore the use of a probabilistic multiclass support vector machine (SVM) and fuzzy c-mean (FCM) clustering for categorization and prefiltering of images to reduce the search space. A category-specific statistical similarity matching is proposed in a finer level on the prefiltered images. To incorporate a better perception subjectivity, an RF mechanism is also added to update the query parameters dynamically and adjust the proposed matching functions. Experiments are based on a ground-truth DB consisting of 5000 diverse medical images of 20 predefined categories. Analysis of results based on cross-validation (CV) accuracy and precision-recall for image categorization and retrieval is reported. It demonstrates the improvement, effectiveness, and efficiency achieved by the proposed framework", "title": "" }, { "docid": "neg:1840249_17", "text": "Self-modifying code (SMC) is widely used in obfuscated program for enhancing the difficulty in reverse engineering. The typical mode of self-modifying code is restore-execute-hide, it drives program to conceal real behaviors at most of the time, and only under actual running will the real code be restored and executed. In order to locate the SMC and further recover the original logic of code for guiding program analysis, dynamic self-modifying code detecting method based on backward analysis is proposed. Our method first extracts execution trace such as instructions and status through dynamic analysis. Then we maintain a memory set to store the memory address of execution instructions, the memory set will update dynamically while backward searching the trace, and simultaneously will we check the memory write address to match with current memory set in order to identify the mode \"modify then execute\". By means of validating self-modifying code which is identified via above procedures, we can easily deobfuscate the program which use self-modifying code and achieve its original logic. A prototype that can be applied in self-modifying code detection is designed and implemented. The evaluation results show our method can trace the execution of program effectively, and can reduce the consumption in time and space.", "title": "" }, { "docid": "neg:1840249_18", "text": "Hadoop MapReduce is the most popular open-source parallel programming model extensively used in Big Data analytics. Although fault tolerance and platform independence make Hadoop MapReduce the most popular choice for many users, it still has huge performance improvement potentials. Recently, RDMA-based design of Hadoop MapReduce has alleviated major performance bottlenecks with the implementation of many novel design features such as in-memory merge, prefetching and caching of map outputs, and overlapping of merge and reduce phases. Although these features reduce the overall execution time for MapReduce jobs compared to the default framework, further improvement is possible if shuffle and merge phases can also be overlapped with the map phase during job execution. 
In this paper, we propose HOMR (a Hybrid approach to exploit maximum Overlapping in MapReduce), that incorporates not only the features implemented in RDMA-based design, but also exploits maximum possible overlapping among all different phases compared to current best approaches. Our solution introduces two key concepts: Greedy Shuffle Algorithm and On-demand Shuffle Adjustment, both of which are essential to achieve significant performance benefits over the default MapReduce framework. Architecture of HOMR is generalized enough to provide performance efficiency both over different Sockets interface as well as previous RDMA-based designs over InfiniBand. Performance evaluations show that HOMR with RDMA over InfiniBand can achieve performance benefits of 54% and 56% compared to default Hadoop over IPoIB (IP over InfiniBand) and 10GigE, respectively. Compared to the previous best RDMA-based designs, this benefit is 29%. HOMR over Sockets also achieves a maximum of 38-40% benefit compared to default Hadoop over Sockets interface. We also evaluate our design with real-world workloads like SWIM and PUMA, and observe benefits of up to 16% and 18%, respectively, over the previous best-case RDMA-based design. To the best of our knowledge, this is the first approach to achieve maximum possible overlapping for MapReduce framework.", "title": "" } ]
1840250
On brewing fresh espresso: LinkedIn's distributed data serving platform
[ { "docid": "pos:1840250_0", "text": "Spanner is Google's scalable, multi-version, globally-distributed, and synchronously-replicated database. It provides strong transactional semantics, consistent replication, and high performance reads and writes for a variety of Google's applications. I'll discuss the design and implementation of Spanner, as well as some of the lessons we have learned along the way. I'll also discuss some open challenges that we still see in building scalable distributed storage systems.", "title": "" } ]
[ { "docid": "neg:1840250_0", "text": "Low-rank decomposition plays a central role in accelerating convolutional neural network (CNN), and the rank of decomposed kernel-tensor is a key parameter that determines the complexity and accuracy of a neural network. In this paper, we define rank selection as a combinatorial optimization problem and propose a methodology to minimize network complexity while maintaining the desired accuracy. Combinatorial optimization is not feasible due to search space limitations. To restrict the search space and obtain the optimal rank, we define the space constraint parameters with a boundary condition. We also propose a linearly-approximated accuracy function to predict the fine-tuned accuracy of the optimized CNN model during the cost reduction. Experimental results on AlexNet and VGG-16 show that the proposed rank selection algorithm satisfies the accuracy constraint. Our method combined with truncated-SVD outperforms state-of-the-art methods in terms of inference and training time at almost the same accuracy.", "title": "" }, { "docid": "neg:1840250_1", "text": "This paper describes a model to address the task of named-entity recognition on Indonesian microblog messages due to its usefulness for higher-level tasks or text mining applications on Indonesian microblogs. We view our task as a sequence labeling problem using machine learning approach. We also propose various word-level and orthographic features, including the ones that are specific to the Indonesian language. Finally, in our experiment, we compared our model with a baseline model previously proposed for Indonesian formal documents, instead of microblog messages. Our contribution is two-fold: (1) we developed NER tool for Indonesian microblog messages, which was never addressed before, (2) we developed NER corpus containing around 600 Indonesian microblog messages available for future development.", "title": "" }, { "docid": "neg:1840250_2", "text": "In this letter, a broadband planar substrate truncated tapered microstrip line-to-dielectric image line transition on a single substrate is proposed. The design uses substrate truncated microstrip line which helps to minimize the losses due to the surface wave generation on thick microstrip line. Generalized empirical equations are proposed for the transition design and validated for different dielectric constants in millimeter-wave frequency band. Full-wave simulations are carried out using high frequency structural simulator. A back-to-back transition prototype of Ku-band is fabricated and measured. The measured return loss for 80-mm-long structure is better than 10 dB and the insertion loss is better than 2.5 dB in entire Ku-band (40% impedance bandwidth).", "title": "" }, { "docid": "neg:1840250_3", "text": "Cortical surface mapping has been widely used to compensate for individual variability of cortical shape and topology in anatomical and functional studies. While many surface mapping methods were proposed based on landmarks, curves, spherical or native cortical coordinates, few studies have extensively and quantitatively evaluated surface mapping methods across different methodologies. In this study we compared five cortical surface mapping algorithms, including large deformation diffeomorphic metric mapping (LDDMM) for curves (LDDMM-curve), for surfaces (LDDMM-surface), multi-manifold LDDMM (MM-LDDMM), FreeSurfer, and CARET, using 40 MRI scans and 10 simulated datasets. 
We computed curve variation errors and surface alignment consistency for assessing the mapping accuracy of local cortical features (e.g., gyral/sulcal curves and sulcal regions) and the curvature correlation for measuring the mapping accuracy in terms of overall cortical shape. In addition, the simulated datasets facilitated the investigation of mapping error distribution over the cortical surface when the MM-LDDMM, FreeSurfer, and CARET mapping algorithms were applied. Our results revealed that the LDDMM-curve, MM-LDDMM, and CARET approaches best aligned the local curve features with their own curves. The MM-LDDMM approach was also found to be the best in aligning the local regions and cortical folding patterns (e.g., curvature) as compared to the other mapping approaches. The simulation experiment showed that the MM-LDDMM mapping yielded less local and global deformation errors than the CARET and FreeSurfer mappings.", "title": "" }, { "docid": "neg:1840250_4", "text": "Personal robots will contribute mobile manipulation capabilities to our future smart homes. In this paper, we propose a low-cost object localization system that uses static devices with Bluetooth capabilities, which are distributed in an environment, to detect and localize active Bluetooth beacons and mobile devices. This system can be used by a robot to coarsely localize objects in retrieval tasks. We attach small Bluetooth low energy tags to objects and require at least four static Bluetooth receivers. While commodity Bluetooth devices could be used, we have built low-cost receivers from Raspberry Pi computers. The location of a tag is estimated by lateration of its received signal strengths. In experiments, we evaluate accuracy and timing of our approach, and report on the successful demonstration at the RoboCup German Open 2014 competition in Magdeburg.", "title": "" }, { "docid": "neg:1840250_5", "text": "Introduction Dissolution testing is routinely carried out in the pharmaceutical industry to determine the rate of dissolution of solid dosage forms. In addition to being a regulatory requirement, in-vitro dissolution testing is used to assist with formulation design, process development, and the demonstration of batch-to-batch reproducibility in production. The most common of such dissolution test apparatuses is the USP Dissolution Test Apparatus II, consisting of an unbaffled vessel stirred by a paddle, whose dimensions, characteristics, and operating conditions are detailed by the USP (Cohen et al., 1990; The United States Pharmacopeia & The National Formulary, 2004).", "title": "" }, { "docid": "neg:1840250_6", "text": "It is difficult to train a personalized task-oriented dialogue system because the data collected from each individual is often insufficient. Personalized dialogue systems trained on a small dataset can overfit and make it difficult to adapt to different user needs. One way to solve this problem is to consider a collection of multiple users’ data as a source domain and an individual user’s data as a target domain, and to perform a transfer learning from the source to the target domain. By following this idea, we propose the “PETAL” (PErsonalized Task-oriented diALogue), a transfer learning framework based on POMDP to learn a personalized dialogue system. The system first learns common dialogue knowledge from the source domain and then adapts this knowledge to the target user. This framework can avoid the negative transfer problem by considering differences between source and target users. 
The policy in the personalized POMDP can learn to choose different actions appropriately for different users. Experimental results on a real-world coffee-shopping data and simulation data show that our personalized dialogue system can choose different optimal actions for different users, and thus effectively improve the dialogue quality under the personalized setting.", "title": "" }, { "docid": "neg:1840250_7", "text": "The purpose of this study is to analyze factors affecting on online shopping behavior of consumers that might be one of the most important issues of e-commerce and marketing field. However, there is very limited knowledge about online consumer behavior because it is a complicated socio-technical phenomenon and involves too many factors. One of the objectives of this study is covering the shortcomings of previous studies that didn't examine main factors that influence on online shopping behavior. This goal has been followed by using a model examining the impact of perceived risks, infrastructural variables and return policy on attitude toward online shopping behavior and subjective norms, perceived behavioral control, domain specific innovativeness and attitude on online shopping behavior as the hypotheses of study. To investigate these hypotheses 200 questionnaires dispersed among online stores of Iran. Respondents to the questionnaire were consumers of online stores in Iran which randomly selected. Finally regression analysis was used on data in order to test hypothesizes of study. This study can be considered as an applied research from purpose perspective and descriptive-survey with regard to the nature and method (type of correlation). The study identified that financial risks and non-delivery risk negatively affected attitude toward online shopping. Results also indicated that domain specific innovativeness and subjective norms positively affect online shopping behavior. Furthermore, attitude toward online shopping positively affected online shopping behavior of consumers.", "title": "" }, { "docid": "neg:1840250_8", "text": "Sleep problems have become epidemic and traditional research has discovered many causes of poor sleep. The purpose of this study was to complement existing research by using a salutogenic or health origins framework to investigate the correlates of good sleep. The analysis for this study used the National College Health Assessment data that included 54,111 participants at 71 institutions. Participants were randomly selected or were in randomly selected classrooms. Results of these analyses indicated that males and females who reported \"good sleep\" were more likely to have engaged regularly in physical activity, felt less exhausted, were more likely to have a healthy Body Mass Index (BMI), and also performed better academically. In addition, good male sleepers experienced less anxiety and had less back pain. Good female sleepers also had fewer abusive relationships and fewer broken bones, were more likely to have been nonsmokers and were not binge drinkers. Despite the limitations of this exploratory study, these results are compelling, however they suggest the need for future research to clarify the identified relationships.", "title": "" }, { "docid": "neg:1840250_9", "text": "Managing models requires extracting information from them and modifying them, and this is performed through queries. Queries can be executed at the model or at the persistence-level.
Both are complementary but while model-level queries are closer to modelling engineers, persistence-level queries are specific to the persistence technology and leverage its capabilities. This paper presents MQT, an approach that translates EOL (model-level queries) to SQL (persistence-level queries) at runtime. Runtime translation provides several benefits: (i) queries are executed only when the information is required; (ii) context and metamodel information is used to get more performant translated queries; and (iii) supports translating query programs using variables and dependant queries. Translation process used by MQT is described through two examples and we also evaluate performance of the approach.", "title": "" }, { "docid": "neg:1840250_10", "text": "We present a statistical mechanics model of deep feed forward neural networks (FFN). Our energy-based approach naturally explains several known results and heuristics, providing a solid theoretical framework and new instruments for a systematic development of FFN. We infer that FFN can be understood as performing three basic steps: encoding, representation validation and propagation. We obtain a set of natural activations – such as sigmoid, tanh and ReLu – together with a state-of-the-art one, recently obtained by Ramachandran et al. [1] using an extensive search algorithm. We term this activation ESP (Expected Signal Propagation), explain its probabilistic meaning, and study the eigenvalue spectrum of the associated Hessian on classification tasks. We find that ESP allows for faster training and more consistent performances over a wide range of network architectures.", "title": "" }, { "docid": "neg:1840250_11", "text": "PURPOSE OF REVIEW\nOdontogenic causes of sinusitis are frequently missed; clinicians often overlook odontogenic disease whenever examining individuals with symptomatic rhinosinusitis. Conventional treatments for chronic rhinosinusitis (CRS) will often fail in odontogenic sinusitis. There have been several recent developments in the understanding of mechanisms, diagnosis, and treatment of odontogenic sinusitis, and clinicians should be aware of these advances to best treat this patient population.\n\n\nRECENT FINDINGS\nThe majority of odontogenic disease is caused by periodontitis and iatrogenesis. Notably, dental pain or dental hypersensitivity is very commonly absent in odontogenic sinusitis, and symptoms are very similar to those seen in CRS overall. Unilaterality of nasal obstruction and foul nasal drainage are most suggestive of odontogenic sinusitis, but computed tomography is the gold standard for diagnosis. Conventional panoramic radiographs are very poorly suited to rule out odontogenic sinusitis, and cannot be relied on to identify disease. There does not appear to be an optimal sequence of treatment for odontogenic sinusitis; the dental source should be addressed and ESS is frequently also necessary to alleviate symptoms.\n\n\nSUMMARY\nOdontogenic sinusitis has distinct pathophysiology, diagnostic considerations, microbiology, and treatment strategies whenever compared with chronic rhinosinusitis. 
Clinicians who can accurately identify odontogenic sources can increase efficacy of medical and surgical treatments and improve patient outcomes.", "title": "" }, { "docid": "neg:1840250_12", "text": "BACKGROUND\nMicrosurgical resection of arteriovenous malformations (AVMs) located in the language and motor cortex is associated with the risk of neurological deterioration, yet electrocortical stimulation mapping has not been widely used.\n\n\nOBJECTIVE\nTo demonstrate the usefulness of intraoperative mapping with language/motor AVMs.\n\n\nMETHODS\nDuring an 11-year period, mapping was used in 12 of 431 patients (2.8%) undergoing AVM resection (5 patients with language and 7 patients with motor AVMs). Language mapping was performed under awake anesthesia and motor mapping under general anesthesia.\n\n\nRESULTS\nIdentification of a functional cortex enabled its preservation in 11 patients (92%), guided dissection through overlying sulci down to the nidus in 3 patients (25%), and influenced the extent of resection in 4 patients (33%). Eight patients (67%) had complete resections. Four patients (33%) had incomplete resections, with circumferentially dissected and subtotally disconnected AVMs left in situ, attached to areas of eloquence and with preserved venous drainage. All were subsequently treated with radiosurgery. At follow-up, 6 patients recovered completely, 3 patients were neurologically improved, and 3 patients had new neurological deficits.\n\n\nCONCLUSION\nIndications for intraoperative mapping include preoperative functional imaging that identifies the language/motor cortex adjacent to the AVM; larger AVMs with higher Spetzler-Martin grades; and patients presenting with unruptured AVMs without deficits. Mapping identified the functional cortex, promoted careful tissue handling, and preserved function. Mapping may guide dissection to AVMs beneath the cortical surface, and it may impact the decision to resect the AVM completely. More conservative, subtotal circumdissections followed by radiosurgery may be an alternative to observation or radiosurgery alone in patients with larger language/motor cortex AVMs.", "title": "" }, { "docid": "neg:1840250_13", "text": "Correct disassembly of the HIV-1 capsid shell, called uncoating, is increasingly recognised as central for multiple steps during retroviral replication. However, the timing, localisation and mechanism of uncoating are poorly understood and progress in this area is hampered by difficulties in measuring the process. Previous work suggested that uncoating occurs soon after entry of the viral core into the cell, but recent studies report later uncoating, at or in the nucleus. Furthermore, inhibiting reverse transcription delays uncoating, linking these processes. Here, we have used a combined approach of experimental interrogation of viral mutants and mathematical modelling to investigate the timing of uncoating with respect to reverse transcription. By developing a minimal, testable, model and employing multiple uncoating assays to overcome the disadvantages of each single assay, we find that uncoating is not concomitant with the initiation of reverse transcription. Instead, uncoating appears to be triggered once reverse transcription reaches a certain stage, namely shortly after first strand transfer. Using multiple approaches, we have identified a point during reverse transcription that induces uncoating of the HIV-1 CA shell. 
We propose that uncoating initiates after the first strand transfer of reverse transcription.", "title": "" }, { "docid": "neg:1840250_14", "text": "An accurate rainfall forecasting is very important for agriculture dependent countries like India. For analyzing the crop productivity, use of water resources and pre-planning of water resources, rainfall prediction is important. Statistical techniques for rainfall forecasting cannot perform well for long-term rainfall forecasting due to the dynamic nature of climate phenomena. Artificial Neural Networks (ANNs) have become very popular, and prediction using ANN is one of the most widely used techniques for rainfall forecasting. This paper provides a detailed survey and comparison of different neural network architectures used by researchers for rainfall forecasting. The paper also discusses the issues while applying different neural networks for yearly/monthly/daily rainfall forecasting. Moreover, the paper also presents different accuracy measures used by researchers for evaluating performance of ANN.", "title": "" }, { "docid": "neg:1840250_15", "text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.", "title": "" }, { "docid": "neg:1840250_16", "text": "The proposed automatic bone age estimation system was based on the phalanx geometric characteristics and carpals fuzzy information. The system could do automatic calibration by analyzing the geometric properties of hand images. Physiological and morphological features are extracted from medius image in segmentation stage. Back-propagation, radial basis function, and support vector machine neural networks were applied to classify the phalanx bone age. In addition, the proposed fuzzy bone age (BA) assessment was based on normalized bone area ratio of carpals. The result reveals that the carpal features can effectively reduce classification errors when age is less than 9 years old. Meanwhile, carpal features will become less influential to assess BA when children grow up to 10 years old. On the other hand, phalanx features become the significant parameters to depict the bone maturity from 10 years old to adult stage. Owing to these properties, the proposed novel BA assessment system combined the phalanxes and carpals assessment. Furthermore, the system adopted not only neural network classifiers but fuzzy bone age confinement and got a result nearly to be practical clinically.", "title": "" }, { "docid": "neg:1840250_17", "text": "This paper is concerned with the derivation of a progression of shadow-free image representations. 
First, we show that adopting certain assumptions about lights and cameras leads to a 1D, gray-scale image representation which is illuminant invariant at each image pixel. We show that as a consequence, images represented in this form are shadow-free. We then extend this 1D representation to an equivalent 2D, chromaticity representation. We show that in this 2D representation, it is possible to relight all the image pixels in the same way, effectively deriving a 2D image representation which is additionally shadow-free. Finally, we show how to recover a 3D, full color shadow-free image representation by first (with the help of the 2D representation) identifying shadow edges. We then remove shadow edges from the edge-map of the original image by edge in-painting and we propose a method to reintegrate this thresholded edge map, thus deriving the sought-after 3D shadow-free image.", "title": "" }, { "docid": "neg:1840250_18", "text": "This paper presents a soft-start circuit that adopts a pulse-skipping control to prevent inrush current and output voltage overshoot during the start-up period of dc-dc converters. The purpose of the pulse-skipping control is to significantly restrain the increasing rate of the reference voltage of the error amplifier. Thanks to the pulse-skipping mechanism and the duty cycle minimization, the soft-start-up time can be extended and the restriction of the charging current and the capacitance can be relaxed. The proposed soft-start circuit is fully integrated on chip without external components, leading to a reduction in PCB area and cost. A current-mode buck converter is implemented with TSMC 0.35-μm 2P4M CMOS process. Simulation results show the output voltage of the buck converter increases smoothly and inrush current is less than 300 mA.", "title": "" }, { "docid": "neg:1840250_19", "text": "The amount of videos available on the Web is growing explosively. While some videos are very interesting and receive high rating from viewers, many of them are less interesting or even boring. This paper conducts a pilot study on the understanding of human perception of video interestingness, and demonstrates a simple computational method to identify more interesting videos. To this end we first construct two datasets of Flickr and YouTube videos respectively. Human judgements of interestingness are collected and used as the groundtruth for training computational models. We evaluate several off-the-shelf visual and audio features that are potentially useful for predicting interestingness on both datasets. Results indicate that audio and visual features are equally important and the combination of both modalities shows very promising results.", "title": "" } ]
1840251
Dialog state tracking, a machine reading approach using Memory Network
[ { "docid": "pos:1840251_0", "text": "We present MCTest, a freely available set of stories and associated questions intended for research on the machine comprehension of text. Previous work on machine comprehension (e.g., semantic modeling) has made great strides, but primarily focuses either on limited-domain datasets, or on solving a more restricted goal (e.g., open-domain relation extraction). In contrast, MCTest requires machines to answer multiple-choice reading comprehension questions about fictional stories, directly tackling the high-level goal of open-domain machine comprehension. Reading comprehension can test advanced abilities such as causal reasoning and understanding the world, yet, by being multiple-choice, still provide a clear metric. By being fictional, the answer typically can be found only in the story itself. The stories and questions are also carefully limited to those a young child would understand, reducing the world knowledge that is required for the task. We present the scalable crowd-sourcing methods that allow us to cheaply construct a dataset of 500 stories and 2000 questions. By screening workers (with grammar tests) and stories (with grading), we have ensured that the data is the same quality as another set that we manually edited, but at one tenth the editing cost. By being open-domain, yet carefully restricted, we hope MCTest will serve to encourage research and provide a clear metric for advancement on the machine comprehension of text. 1 Reading Comprehension A major goal for NLP is for machines to be able to understand text as well as people. Several research disciplines are focused on this problem: for example, information extraction, relation extraction, semantic role labeling, and recognizing textual entailment. Yet these techniques are necessarily evaluated individually, rather than by how much they advance us towards the end goal. On the other hand, the goal of semantic parsing is the machine comprehension of text (MCT), yet its evaluation requires adherence to a specific knowledge representation, and it is currently unclear what the best representation is, for open-domain text. We believe that it is useful to directly tackle the top-level task of MCT. For this, we need a way to measure progress. One common method for evaluating someone’s understanding of text is by giving them a multiple-choice reading comprehension test. This has the advantage that it is objectively gradable (vs. essays) yet may test a range of abilities such as causal or counterfactual reasoning, inference among relations, or just basic understanding of the world in which the passage is set. Therefore, we propose a multiple-choice reading comprehension task as a way to evaluate progress on MCT. We have built a reading comprehension dataset containing 500 fictional stories, with 4 multiple choice questions per story. It was built using methods which can easily scale to at least 5000 stories, since the stories were created, and the curation was done, using crowd sourcing almost entirely, at a total of $4.00 per story. We plan to periodically update the dataset to ensure that methods are not overfitting to the existing data. The dataset is open-domain, yet restricted to concepts and words that a 7 year old is expected to understand. 
This task is still beyond the capability of today’s computers and algorithms.", "title": "" }, { "docid": "pos:1840251_1", "text": "Tracking the user's intention throughout the course of a dialog, called dialog state tracking, is an important component of any dialog system. Most existing spoken dialog systems are designed to work in a static, well-defined domain, and are not well suited to tasks in which the domain may change or be extended over time. This paper shows how recurrent neural networks can be effectively applied to tracking in an extended domain with new slots and values not present in training data. The method is evaluated in the third Dialog State Tracking Challenge, where it significantly outperforms other approaches in the task of tracking the user's goal. A method for online unsupervised adaptation to new domains is also presented. Unsupervised adaptation is shown to be helpful in improving word-based recurrent neural networks, which work directly from the speech recognition results. Word-based dialog state tracking is attractive as it does not require engineering a spoken language understanding system for use in the new domain and it avoids the need for a general purpose intermediate semantic representation.", "title": "" } ]
[ { "docid": "neg:1840251_0", "text": "MITJA D. BACK*, LARS PENKE, STEFAN C. SCHMUKLE, KAROLINE SACHSE, PETER BORKENAU and JENS B. ASENDORPF Department of Psychology, Johannes Gutenberg-University Mainz, Germany Department of Psychology and Centre for Cognitive Ageing and Cognitive Epidemiology, University of Edinburgh, UK Department of Psychology, Westfälische Wilhelms-University Münster, Germany Department of Psychology, Martin-Luther University Halle-Wittenberg, Germany Department of Psychology, Martin-Luther University Halle-Wittenberg, Germany Department of Psychology, Humboldt University Berlin, Germany", "title": "" }, { "docid": "neg:1840251_1", "text": "In recent years, there has been a huge increase in the number of bots online, varying from Web crawlers for search engines, to chatbots for online customer service, spambots on social media, and content-editing bots in online collaboration communities. The online world has turned into an ecosystem of bots. However, our knowledge of how these automated agents are interacting with each other is rather poor. Bots are predictable automatons that do not have the capacity for emotions, meaning-making, creativity, and sociality and it is hence natural to expect interactions between bots to be relatively predictable and uneventful. In this article, we analyze the interactions between bots that edit articles on Wikipedia. We track the extent to which bots undid each other's edits over the period 2001-2010, model how pairs of bots interact over time, and identify different types of interaction trajectories. We find that, although Wikipedia bots are intended to support the encyclopedia, they often undo each other's edits and these sterile \"fights\" may sometimes continue for years. Unlike humans on Wikipedia, bots' interactions tend to occur over longer periods of time and to be more reciprocated. Yet, just like humans, bots in different cultural environments may behave differently. Our research suggests that even relatively \"dumb\" bots may give rise to complex interactions, and this carries important implications for Artificial Intelligence research. Understanding what affects bot-bot interactions is crucial for managing social media well, providing adequate cyber-security, and designing well functioning autonomous vehicles.", "title": "" }, { "docid": "neg:1840251_2", "text": "The morality of transformational leadership has been sharply questioned, particularly by libertarians, “grass roots” theorists, and organizational development consultants. This paper argues that to be truly transformational, leadership must be grounded in moral foundations. The four components of authentic transformational leadership (idealized influence, inspirational motivation, intellectual stimulation, and individualized consideration) are contrasted with their counterfeits in dissembling pseudo-transformational leadership on the basis of (1) the moral character of the leaders and their concerns for self and others; (2) the ethical values embedded in the leaders’ vision, articulation, and program, which followers can embrace or reject; and (3) the morality of the processes of social ethical choices and action in which the leaders and followers engage and collectively pursue. The literature on transformational leadership is linked to the long-standing literature on virtue and moral character, as exemplified by Socratic and Confucian typologies. 
It is related as well to the major themes of the modern Western ethical agenda: liberty, utility, and distributive justice Deception, sophistry, and pretense are examined alongside issues of transcendence, agency, trust, striving for congruence in values, cooperative action, power, persuasion, and corporate governance to establish the strategic and moral foundations of authentic transformational leadership.", "title": "" }, { "docid": "neg:1840251_3", "text": "A Smart Tailor Platform is proposed as a venue to integrate various players in garment industry, such as tailors, designers, customers, and other relevant stakeholders to automate its business processes. In, Malaysia, currently the processes are conducted manually which consume too much time in fulfilling its supply and demand for the industry. To facilitate this process, a study was conducted to understand the main components of the business operation. The components will be represented using a strategic management tool namely the Business Model Canvas (BMC). The inception phase of the Rational Unified Process (RUP) was employed to construct the BMC. The phase began by determining the basic idea and structure of the business process. The information gathered was classified into nine related dimensions and documented in accordance with the BMC. The generated BMC depicts the relationship of all the nine dimensions for the garment industry, and thus represents an integrated business model of smart tailor. This smart platform allows the players in the industry to promote, manage and fulfill supply and demands of their product electronically. In addition, the BMC can be used to assist developers in designing and developing the smart tailor platform.", "title": "" }, { "docid": "neg:1840251_4", "text": "The large number of peer-to-peer file-sharing applications can be subdivided in three basic categories: having a mediated, pure or hybrid architecture. This paper details each of these and indicates their respective strengths and weaknesses. In addition to this theoretical study, a number of practical experiments were conducted, with special attention for three popular applications, representative of each of the three architectures. Although a number of measurement studies have been done in the past ([1], [3], etc.) these all investigate only a fraction of the available applications and architectures, with very little focus on the bigger picture and to the recent evolutions in peer-to-peer architectures.", "title": "" }, { "docid": "neg:1840251_5", "text": "Internet advertising is one of the most popular online business models. JavaScript-based advertisements (ads) are often directly embedded in a web publisher's page to display ads relevant to users (e.g., by checking the user's browser environment and page content). However, as third-party code, the ads pose a significant threat to user privacy. Worse, malicious ads can exploit browser vulnerabilities to compromise users' machines and install malware. To protect users from these threats, we propose AdSentry, a comprehensive confinement solution for JavaScript-based advertisements. The crux of our approach is to use a shadow JavaScript engine to sandbox untrusted ads. In addition, AdSentry enables flexible regulation on ad script behaviors by completely mediating its access to the web page (including its DOM) without limiting the JavaScript functionality exposed to the ads. Our solution allows both web publishers and end users to specify access control policies to confine ads' behaviors. 
We have implemented a proof-of-concept prototype of AdSentry that transparently supports the Mozilla Firefox browser. Our experiments with a number of ads-related attacks successfully demonstrate its practicality and effectiveness. The performance measurement indicates that our system incurs a small performance overhead.", "title": "" }, { "docid": "neg:1840251_6", "text": "Mood classification of music is an emerging domain of music information retrieval. In the approach presented here features extracted from an audio file are used in combination with the affective value of song lyrics to map a song onto a psychologically based emotion space. The motivation behind this system is the lack of intuitive and contextually aware playlist generation tools available to music listeners. The need for such tools is made obvious by the fact that digital music libraries are constantly expanding, thus making it increasingly difficult to recall a particular song in the library or to create a playlist for a specific event. By combining audio content information with context-aware data, such as song lyrics, this system allows the listener to automatically generate a playlist to suit their current activity or mood. Thesis Supervisor: Barry Vercoe Title: Professor of Media Arts and Sciences, Program in Media Arts and Sciences", "title": "" }, { "docid": "neg:1840251_7", "text": "A relevant knowledge [24] (and consequently research area) is the study of software lifecycle process models (PM-SDLCs). Such process models have been defined in three abstraction levels: (i) full organizational software lifecycles process models (e.g. ISO 12207, ISO 15504, CMMI/SW); (ii) lifecycles frameworks models (e.g. waterfall, spiral, RAD, and others) and (iii) detailed software development life cycles process (e.g. unified process, TSP, MBASE, and others). This paper focuses on (ii) and (iii) levels and reports the results of a descriptive/comparative study of 13 PM-SDLCs that permits a plausible explanation of their evolution in terms of common, distinctive, and unique elements as well as of the specification rigor and agility attributes. For it, a conceptual research approach and a software process lifecycle meta-model are used. Findings from the conceptual analysis are reported. Paper ends with the description of research limitations and recommendations for further research.", "title": "" }, { "docid": "neg:1840251_8", "text": "Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully employ CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained from natural image dataset to medical image tasks. In this paper, we exploit three important, but previously understudied factors of employing deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. 
The studied models contain 5 thousand to 160 million parameters, and vary in numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve the state-of-the-art performance on the mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance CAD systems for other medical imaging tasks.", "title": "" }, { "docid": "neg:1840251_9", "text": "Shear wave elasticity imaging (SWEI) is a new approach to imaging and characterizing tissue structures based on the use of shear acoustic waves remotely induced by the radiation force of a focused ultrasonic beam. SWEI provides the physician with a virtual \"finger\" to probe the elasticity of the internal regions of the body. In SWEI, compared to other approaches in elasticity imaging, the induced strain in the tissue can be highly localized, because the remotely induced shear waves are attenuated fully within a very limited area of tissue in the vicinity of the focal point of a focused ultrasound beam. SWEI may add a new quality to conventional ultrasonic imaging or magnetic resonance imaging. Adding shear elasticity data (\"palpation information\") by superimposing color-coded elasticity data over ultrasonic or magnetic resonance images may enable better differentiation of tissues and further enhance diagnosis. This article presents a physical and mathematical basis of SWEI with some experimental results of pilot studies proving feasibility of this new ultrasonic technology. A theoretical model of shear oscillations in soft biological tissue remotely induced by the radiation force of focused ultrasound is described. Experimental studies based on optical and magnetic resonance imaging detection of these shear waves are presented. Recorded spatial and temporal profiles of propagating shear waves fully confirm the results of mathematical modeling. Finally, the safety of the SWEI method is discussed, and it is shown that typical ultrasonic exposure of SWEI is significantly below the threshold of damaging effects of focused ultrasound.", "title": "" }, { "docid": "neg:1840251_10", "text": "Despite surveillance systems becoming increasingly ubiquitous in our living environment, automated surveillance, currently based on video sensory modality and machine intelligence, lacks most of the time the robustness and reliability required in several real applications. To tackle this issue, audio sensory devices have been incorporated, both alone or in combination with video, giving birth in the past decade, to a considerable amount of research. In this article, audio-based automated surveillance methods are organized into a comprehensive survey: A general taxonomy, inspired by the more widespread video surveillance field, is proposed to systematically describe the methods covering background subtraction, event classification, object tracking, and situation analysis. For each of these tasks, all the significant works are reviewed, detailing their pros and cons and the context for which they have been proposed. 
Moreover, a specific section is devoted to audio features, discussing their expressiveness and their employment in the above-described tasks. Differing from other surveys on audio processing and analysis, the present one is specifically targeted to automated surveillance, highlighting the target applications of each described method and providing the reader with a systematic and schematic view useful for retrieving the most suited algorithms for each specific requirement.", "title": "" }, { "docid": "neg:1840251_11", "text": "An output-capacitorless low-dropout regulator (LDO) with a direct voltage-spike detection circuit is presented in this paper. The proposed voltage-spike detection is based on capacitive coupling. The detection circuit makes use of the rapid transient voltage at the LDO output to increase the bias current momentarily. Hence, the transient response of the LDO is significantly enhanced due to the improvement of the slew rate at the gate of the power transistor. The proposed voltage-spike detection circuit is applied to an output-capacitorless LDO implemented in a standard 0.35-¿m CMOS technology (where VTHN ¿ 0.5 V and VTHP ¿ -0.65 V). Experimental results show that the LDO consumes 19 ¿A only. It regulates the output at 0.8 V from a 1-V supply, with dropout voltage of 200 mV at the maximum output current of 66.7 mA. The voltage spike and the recovery time of the LDO with the proposed voltage-spike detection circuit are reduced to about 70 mV and 3 ¿s, respectively, whereas they are more than 420 mV and 30 ¿s for the LDO without the proposed detection circuit.", "title": "" }, { "docid": "neg:1840251_12", "text": "Web-based social systems enable new community-based opportunities for participants to engage, share, and interact. This community value and related services like search and advertising are threatened by spammers, content polluters, and malware disseminators. In an effort to preserve community value and ensure longterm success, we propose and evaluate a honeypot-based approach for uncovering social spammers in online social systems. Two of the key components of the proposed approach are: (1) The deployment of social honeypots for harvesting deceptive spam profiles from social networking communities; and (2) Statistical analysis of the properties of these spam profiles for creating spam classifiers to actively filter out existing and new spammers. We describe the conceptual framework and design considerations of the proposed approach, and we present concrete observations from the deployment of social honeypots in MySpace and Twitter. We find that the deployed social honeypots identify social spammers with low false positive rates and that the harvested spam data contains signals that are strongly correlated with observable profile features (e.g., content, friend information, posting patterns, etc.). Based on these profile features, we develop machine learning based classifiers for identifying previously unknown spammers with high precision and a low rate of false positives.", "title": "" }, { "docid": "neg:1840251_13", "text": "In this paper we propose a novel method to utilize the skeletal structure not only for supporting force but for releasing heat by latent heat.", "title": "" }, { "docid": "neg:1840251_14", "text": "Introduction to Derivative-Free Optimization Andrew R. Conn, Katya Scheinberg, and Luis N. Vicente The absence of derivatives, often combined with the presence of noise or lack of smoothness, is a major challenge for optimization. 
This book explains how sampling and model techniques are used in derivative-free methods and how these methods are designed to efficiently and rigorously solve optimization problems. Although readily accessible to readers with a modest background in computational mathematics, it is also intended to be of interest to researchers in the field. 2009 · xii + 277 pages · Softcover · ISBN 978-0-898716-68-9 List Price $73.00 · RUNDBRIEF Price $51.10 · Code MP08", "title": "" }, { "docid": "neg:1840251_15", "text": "Narcissism has been a perennial topic for psychoanalytic papers since Freud's 'On narcissism: An introduction' (1914). The understanding of this field has recently been greatly furthered by the analytical writings of Kernberg and Kohut despite, or perhaps because of, their glaring disagreements. Despite such theoretical advances, clinical theory has far outpaced clinical practice. This paper provides a clarification of the characteristics, diagnosis and development of the narcissistic personality disorder and draws out the differing treatment implications, at various levels of psychological intensity, of the two theories discussed.", "title": "" }, { "docid": "neg:1840251_16", "text": "A simple photovoltaic (PV) system capable of operating in grid-connected mode and using multilevel boost converter (MBC) and line commutated inverter (LCI) has been developed for extracting the maximum power and feeding it to a single phase utility grid with harmonic reduction. Theoretical analysis of the proposed system is done and the duty ratio of the MBC is estimated for extracting maximum power from PV array. For a fixed firing angle of LCI, the proposed system is able to track the maximum power with the determined duty ratio which remains the same for all irradiations. This is the major advantage of the proposed system which eliminates the use of a separate maximum power point tracking (MPPT) Experiments have been conducted for feeding a single phase voltage to the grid. So by proper and simplified technique we are reducing the harmonics in the grid for unbalanced loads.", "title": "" }, { "docid": "neg:1840251_17", "text": "A Ka-band highly linear power amplifier (PA) is implemented in 28-nm bulk CMOS technology. Using a deep class-AB PA topology with appropriate harmonic control circuit, highly linear and efficient PAs are designed at millimeter-wave band. This PA architecture provides a linear PA operation close to the saturated power. Also elaborated harmonic tuning and neutralization techniques are used to further improve the transistor gain and stability. A two-stack PA is designed for higher gain and output power than a common source (CS) PA. Additionally, average power tracking (APT) is applied to further reduce the power consumption at a low power operation and, hence, extend battery life. Both the PAs are tested with two different signals at 28.5 GHz; they are fully loaded long-term evolution (LTE) signal with 16-quadrature amplitude modulation (QAM), a 7.5-dB peakto-average power ratio (PAPR), and a 20-MHz bandwidth (BW), and a wireless LAN (WLAN) signal with 64-QAM, a 10.8-dB PAPR, and an 80-MHz BW. The CS/two-stack PAs achieve power-added efficiency (PAE) of 27%/25%, error vector magnitude (EVM) of 5.17%/3.19%, and adjacent channel leakage ratio (ACLRE-UTRA) of -33/-33 dBc, respectively, with an average output power of 11/14.6 dBm for the LTE signal. 
For the WLAN signal, the CS/2-stack PAs achieve the PAE of 16.5%/17.3%, and an EVM of 4.27%/4.21%, respectively, at an average output power of 6.8/11 dBm.", "title": "" } ]
1840252
A Systematic Classification of Knowledge, Reasoning, and Context within the ARC Dataset
[ { "docid": "pos:1840252_0", "text": "We propose a simple neural architecture for natural language inference. Our approach uses attention to decompose the problem into subproblems that can be solved separately, thus making it trivially parallelizable. On the Stanford Natural Language Inference (SNLI) dataset, we obtain state-of-the-art results with almost an order of magnitude fewer parameters than previous work and without relying on any word-order information. Adding intra-sentence attention that takes a minimum amount of order into account yields further improvements.", "title": "" }, { "docid": "pos:1840252_1", "text": "tion can machines think? by replacing it with another, namely can a machine pass the imitation game (the Turing test). In the years since, this test has been criticized as being a poor replacement for the original enquiry (for example, Hayes and Ford [1995]), which raises the question: what would a better replacement be? In this article, we argue that standardized tests are an effective and practical assessment of many aspects of machine intelligence, and should be part of any comprehensive measure of AI progress. While a crisp definition of machine intelligence remains elusive, we can enumerate some general properties we might expect of an intelligent machine. The list is potentially long (for example, Legg and Hutter [2007]), but should at least include the ability to (1) answer a wide variety of questions, (2) answer complex questions, (3) demonstrate commonsense and world knowledge, and (4) acquire new knowledge scalably. In addition, a suitable test should be clearly measurable, graduated (have a variety of levels of difficulty), not gameable, ambitious but realistic, and motivating. There are many other requirements we might add (for example, capabilities in robotics, vision, dialog), and thus any comprehensive measure of AI is likely to require a battery of different tests. However, standardized tests meet a surprising number of requirements, including the four listed, and thus should be a key component of a future battery of tests. As we will show, the tests require answering a wide variety of questions, including those requiring commonsense and world knowledge. In addition, they meet all the practical requirements, a huge advantage for any component of a future test of AI. Articles", "title": "" }, { "docid": "pos:1840252_2", "text": "Science Study Book Corpus Document Filter [...] enters a d orbital. The valence electrons (those added after the last noble gas configuration) in these elements include the ns and (n \\u2013 1) d electrons. The official IUPAC definition of transition elements specifies those with partially filled d orbitals. Thus, the elements with completely filled orbitals (Zn, Cd, Hg, as well as Cu, Ag, and Au in Figure 6.30) are not technically transition elements. However, the term is frequently used to refer to the entire d block (colored yellow in Figure 6.30), and we will adopt this usage in this textbook. Inner transition elements are metallic elements in which the last electron added occupies an f orbital.", "title": "" } ]
[ { "docid": "neg:1840252_0", "text": "BACKGROUND\nVarious cutaneous side-effects, including, exanthema, pruritus, urticaria and Lyell or Stevens-Johnson syndrome, have been reported with meropenem (carbapenem), a rarely-prescribed antibiotic. Levofloxacin (fluoroquinolone), a more frequently prescribed antibiotic, has similar cutaneous side-effects, as well as photosensitivity. We report a case of cutaneous hyperpigmentation induced by meropenem and levofloxacin.\n\n\nPATIENTS AND METHODS\nA 67-year-old male was treated with meropenem (1g×4 daily), levofloxacin (500mg twice daily) and amikacin (500mg daily) for 2 weeks, followed by meropenem, levofloxacin and rifampicin (600mg twice daily) for 4 weeks for osteitis of the fifth metatarsal. Three weeks after initiation of antibiotic therapy, dark hyperpigmentation appeared on the lower limbs, predominantly on the anterior aspects of the legs. Histology revealed dark, perivascular and interstitial deposits throughout the dermis, which stained with both Fontana-Masson and Perls stains. Infrared microspectroscopy revealed meropenem in the dermis of involved skin. After withdrawal of the antibiotics, the pigmentation subsided slowly.\n\n\nDISCUSSION\nSimilar cases of cutaneous hyperpigmentation have been reported after use of minocycline. In these cases, histological examination also showed iron and/or melanin deposits within the dermis, but the nature of the causative pigment remains unclear. In our case, infrared spectroscopy enabled us to identify meropenem in the dermis. Two cases of cutaneous hyperpigmentation have been reported following use of levofloxacin, and the results of histological examination were similar. This is the first case of cutaneous hyperpigmentation induced by meropenem.", "title": "" }, { "docid": "neg:1840252_1", "text": "This paper presents a second-order pulsewidth modulation (PWM) feedback loop to improve power supply rejection (PSR) of any open-loop PWM class-D amplifiers (CDAs). PSR of the audio amplifier has always been a key parameter in mobile phone applications. In contrast to class-AB amplifiers, the poor PSR performance has always been the major drawback for CDAs with a half-bridge connected power stage. The proposed PWM feedback loop is fabricated using GLOBALFOUNDRIES' 0.18-μm CMOS process technology. The measured PSR is more than 80 dB and the measured total harmonic distortion is less than 0.04% with a 1-kHz input sinusoidal test tone.", "title": "" }, { "docid": "neg:1840252_2", "text": "Due to the growing popularity of indoor location-based services, indoor data management has received significant research attention in the past few years. However, we observe that the existing indexing and query processing techniques for the indoor space do not fully exploit the properties of the indoor space. Consequently, they provide below par performance which makes them unsuitable for large indoor venues with high query workloads. In this paper, we propose two novel indexes called Indoor Partitioning Tree (IPTree) and Vivid IP-Tree (VIP-Tree) that are carefully designed by utilizing the properties of indoor venues. The proposed indexes are lightweight, have small pre-processing cost and provide nearoptimal performance for shortest distance and shortest path queries. We also present efficient algorithms for other spatial queries such as k nearest neighbors queries and range queries. 
Our extensive experimental study on real and synthetic data sets demonstrates that our proposed indexes outperform the existing algorithms by several orders of magnitude.", "title": "" }, { "docid": "neg:1840252_3", "text": "BACKGROUND\nMalnutrition is still highly prevalent in developing countries. Schoolchildren may also be at high nutritional risk, not only under-five children. However, their nutritional status is poorly documented, particularly in urban areas. The paucity of information hinders the development of relevant nutrition programs for schoolchildren. The aim of this study carried out in Ouagadougou was to assess the nutritional status of schoolchildren attending public and private schools.\n\n\nMETHODS\nThe study was carried out to provide baseline data for the implementation and evaluation of the Nutrition Friendly School Initiative of WHO. Six intervention schools and six matched control schools were selected and a sample of 649 schoolchildren (48% boys) aged 7-14 years old from 8 public and 4 private schools were studied. Anthropometric and haemoglobin measurements, along with thyroid palpation, were performed. Serum retinol was measured in a random sub-sample of children (N = 173). WHO criteria were used to assess nutritional status. Chi square and independent t-test were used for proportions and mean comparisons between groups.\n\n\nRESULTS\nMean age of the children (48% boys) was 11.5 ± 1.2 years. Micronutrient malnutrition was highly prevalent, with 38.7% low serum retinol and 40.4% anaemia. The prevalence of stunting was 8.8% and that of thinness, 13.7%. The prevalence of anaemia (p = 0.001) and vitamin A deficiency (p < 0.001) was significantly higher in public than private schools. Goitre was not detected. Overweight/obesity was low (2.3%) and affected significantly more children in private schools (p = 0.009) and younger children (7-9 y) (p < 0.05). Thinness and stunting were significantly higher in peri-urban compared to urban schools (p < 0.05 and p = 0.004 respectively). Almost 15% of the children presented at least two nutritional deficiencies.\n\n\nCONCLUSION\nThis study shows that malnutrition and micronutrient deficiencies are also widely prevalent in schoolchildren in cities, and it underlines the need for nutrition interventions to target them.", "title": "" }, { "docid": "neg:1840252_4", "text": "Every year the number of installed wind power plants in the world increases. The horizontal axis wind turbine is the most common type of turbine but there exist other types. Here, three different wind turbines are considered; the horizontal axis wind turbine and two different concepts of vertical axis wind turbines; the Darrieus turbine and the H-rotor. This paper aims at making a comparative study of these three different wind turbines from the most important aspects including structural dynamics, control systems, maintenance, manufacturing and electrical equipment. A case study is presented where three different turbines are compared to each other. Furthermore, a study of blade areas for different turbines is presented. The vertical axis wind turbine appears to be advantageous to the horizontal axis wind turbine in several aspects. r 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "neg:1840252_5", "text": "The sharing economy has quickly become a very prominent subject of research in the broader computing literature and the in human--computer interaction (HCI) literature more specifically. 
When other computing research areas have experienced similarly rapid growth (e.g. human computation, eco-feedback technology), early stage literature reviews have proved useful and influential by identifying trends and gaps in the literature of interest and by providing key directions for short- and long-term future work. In this paper, we seek to provide the same benefits with respect to computing research on the sharing economy. Specifically, following the suggested approach of prior computing literature reviews, we conducted a systematic review of sharing economy articles published in the Association for Computing Machinery Digital Library to investigate the state of sharing economy research in computing. We performed this review with two simultaneous foci: a broad focus toward the computing literature more generally and a narrow focus specifically on HCI literature. We collected a total of 112 sharing economy articles published between 2008 and 2017 and through our analysis of these papers, we make two core contributions: (1) an understanding of the computing community's contributions to our knowledge about the sharing economy, and specifically the role of the HCI community in these contributions (i.e.what has been done) and (2) a discussion of under-explored and unexplored aspects of the sharing economy that can serve as a partial research agenda moving forward (i.e.what is next to do).", "title": "" }, { "docid": "neg:1840252_6", "text": "Mining association rules associates events that took place together. In market basket analysis, these discovered rules associate items purchased together. Items that are not part of a transaction are not considered. In other words, typical association rules do not take into account items that are part of the domain but that are not together part of a transaction. Association rules are based on frequencies and count the transactions where items occur together. However, counting absences of items is prohibitive if the number of possible items is very large, which is typically the case. Nonetheless, knowing the relationship between the absence of an item and the presence of another can be very important in some applications. These rules are called negative association rules. We review current approaches for mining negative association rules and we discuss limitations and future research directions.", "title": "" }, { "docid": "neg:1840252_7", "text": "We describe the Lightweight Communications and Marshalling (LCM) library for message passing and data marshalling. The primary goal of LCM is to simplify the development of low-latency message passing systems, especially for real-time robotics research applications.", "title": "" }, { "docid": "neg:1840252_8", "text": "Document summarization and keyphrase extraction are two related tasks in the IR and NLP fields, and both of them aim at extracting condensed representations from a single text document. Existing methods for single document summarization and keyphrase extraction usually make use of only the information contained in the specified document. This article proposes using a small number of nearest neighbor documents to improve document summarization and keyphrase extraction for the specified document, under the assumption that the neighbor documents could provide additional knowledge and more clues. 
The specified document is expanded to a small document set by adding a few neighbor documents close to the document, and the graph-based ranking algorithm is then applied on the expanded document set to make use of both the local information in the specified document and the global information in the neighbor documents. Experimental results on the Document Understanding Conference (DUC) benchmark datasets demonstrate the effectiveness and robustness of our proposed approaches. The cross-document sentence relationships in the expanded document set are validated to be beneficial to single document summarization, and the word cooccurrence relationships in the neighbor documents are validated to be very helpful to single document keyphrase extraction.", "title": "" }, { "docid": "neg:1840252_9", "text": "Corrosion can cause section loss or cracks in the steel members which is one of the most important causes of deterioration of steel bridges. For some critical components of a steel bridge, it is fatal and could even cause the collapse of the whole bridge. Nowadays the most common approach to steel bridge inspection is visual inspection by inspectors with inspection trucks. This paper mainly presents a climbing robot with magnetic wheels which can move on the surface of steel bridge. Experiment results shows that the climbing robot can move on the steel bridge freely without disrupting traffic to reduce the risks to the inspectors.", "title": "" }, { "docid": "neg:1840252_10", "text": "We present the analytical capability of TecDEM, a MATLAB toolbox used in conjunction with Global DEMs for the extraction of tectonic geomorphologic information. TecDEM includes a suite of algorithms to analyze topography, extracted drainage networks and sub-basins. The aim of part 2 of this paper series is the generation of morphometric maps for surface dynamics and basin analysis. TecDEM therefore allows the extraction of parameters such as isobase, incision, drainage density and surface roughness maps. We also provide tools for basin asymmetry and hypsometric analysis. These are efficient graphical user interfaces (GUIs) for mapping drainage deviation from basin mid-line and basin hypsometry. A morphotectonic interpretation of the Kaghan Valley (Northern Pakistan) is performed with TecDEM and the findings indicate a high correlation between surface dynamics and basin analysis parameters with neotectonic features in the study area. & 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "neg:1840252_11", "text": "Graphene has exceptional optical, mechanical, and electrical properties, making it an emerging material for novel optoelectronics, photonics, and flexible transparent electrode applications. However, the relatively high sheet resistance of graphene is a major constraint for many of these applications. Here we propose a new approach to achieve low sheet resistance in large-scale CVD monolayer graphene using nonvolatile ferroelectric polymer gating. In this hybrid structure, large-scale graphene is heavily doped up to 3 × 10(13) cm(-2) by nonvolatile ferroelectric dipoles, yielding a low sheet resistance of 120 Ω/□ at ambient conditions. The graphene-ferroelectric transparent conductors (GFeTCs) exhibit more than 95% transmittance from the visible to the near-infrared range owing to the highly transparent nature of the ferroelectric polymer. 
Together with its excellent mechanical flexibility, chemical inertness, and the simple fabrication process of ferroelectric polymers, the proposed GFeTCs represent a new route toward large-scale graphene-based transparent electrodes and optoelectronics.", "title": "" }, { "docid": "neg:1840252_12", "text": "The ECG signal has been shown to contain relevant information for human identification. Even though results validate the potential of these signals, data acquisition methods and apparatus explored so far compromise user acceptability, requiring the acquisition of ECG at the chest. In this paper, we propose a finger-based ECG biometric system, that uses signals collected at the fingers, through a minimally intrusive 1-lead ECG setup recurring to Ag/AgCl electrodes without gel as interface with the skin. The collected signal is significantly more noisy than the ECG acquired at the chest, motivating the application of feature extraction and signal processing techniques to the problem. Time domain ECG signal processing is performed, which comprises the usual steps of filtering, peak detection, heartbeat waveform segmentation, and amplitude normalization, plus an additional step of time normalization. Through a simple minimum distance criterion between the test patterns and the enrollment database, results have revealed this to be a promising technique for biometric applications.", "title": "" }, { "docid": "neg:1840252_13", "text": "In this paper the transcription and evaluation of the corpus DIMEx100 for Mexican Spanish is presented. First we describe the corpus and explain the linguistic and computational motivation for its design and collection process; then, the phonetic antecedents and the alphabet adopted for the transcription task are presented; the corpus has been transcribed at three different granularity levels, which are also specified in detail. The corpus statistics for each transcription level are also presented. A set of phonetic rules describing phonetic context observed empirically in spontaneous conversation is also validated with the transcription. The corpus has been used for the construction of acoustic models and a phonetic dictionary for the construction of a speech recognition system. Initial performance results suggest that the data can be used to train good quality acoustic models.", "title": "" }, { "docid": "neg:1840252_14", "text": "This paper analyzes the effect of employee recognition, pay, and benefits on job satisfaction. In this cross-sectional study, survey responses from university students in the U.S. (n = 457), Malaysia (n = 347) and Vietnam (n = 391) were analyzed. Employee recognition, pay, and benefits were found to have a significant impact on job satisfaction, regardless of home country income level (high, middle or low income) and culture (collectivist or individualist). However, the effect of benefits on job satisfaction was significantly more important for U.S. respondents than for respondents from Malaysia and Vietnam. The authors conclude that both financial and nonfinancial rewards have a role in influencing job satisfaction, which ultimately impacts employee performance. Theoretical and practical implications for developing effective recruitment and retention policies for employees are also discussed.", "title": "" }, { "docid": "neg:1840252_15", "text": "In TEFL, it is often stated that communication presupposes comprehension. The main purpose of readability studies is thus to measure the comprehensibility of a piece of writing. 
In this regard, different readability measures were initially devised to help educators select passages suitable for both children and adults. However, readability formulas can certainly be extremely helpful in the realm of EFL reading. They were originally designed to assess the suitability of books for students at particular grade levels or ages. Nevertheless, they can be used as basic tools in determining certain crucial EFL text-characteristics instrumental in the skill of reading and its related issues. The aim of the present paper is to familiarize the readers with the most frequently used readability formulas as well as the pros and cons views toward the use of such formulas. Of course, this part mostly illustrates studies done on readability formulas with the results obtained. The main objective of this part is to help readers to become familiar with the background of the formulas, the theory on which they stand, what they are good for and what they are not with regard to a number of studies cited in this section.", "title": "" }, { "docid": "neg:1840252_16", "text": "Anemia resulting from iron deficiency is one of the most prevalent diseases in the world. As iron has important roles in several biological processes such as oxygen transport, DNA synthesis and cell growth, there is a high need for iron therapies that result in high iron bioavailability with minimal toxic effects to treat patients suffering from anemia. This study aims to develop a novel oral iron-complex formulation based on hemin-loaded polymeric micelles composed of the biodegradable and thermosensitive polymer methoxy-poly(ethylene glycol)-b-poly[N-(2-hydroxypropyl)methacrylamide-dilactate], abbreviated as mPEG-b-p(HPMAm-Lac2). Hemin-loaded micelles were prepared by addition of hemin dissolved in DMSO:DMF (1:9, one volume) to an aqueous polymer solution (nine volumes) of mPEG-b-p(HPMAm-Lac2) followed by rapidly heating the mixture at 50°C to form hemin-loaded micelles that remain intact at room and physiological temperature. The highest loading capacity for hemin in mPEG-b-p(HPMAm-Lac2) micelles was 3.9%. The average particle diameter of the hemin-micelles ranged from 75 to 140nm, depending on the concentration of hemin solution that was used to prepare the micelles. The hemin-loaded micelles were stable at pH 2 for at least 3 h which covers the residence time of the formulation in the stomach after oral administration and up to 17 h at pH 7.4 which is sufficient time for uptake of the micelles by the enterocytes. Importantly, incubation of Caco-2 cells with hemin-micelles for 24 h at 37°C resulted in ferritin levels of 2500ng/mg protein which is about 10-fold higher than levels observed in cells incubated with iron sulfate under the same conditions. The hemin formulation also demonstrated superior cell viability compared to iron sulfate with and without ascorbic acid. The study presented here demonstrates the development of a promising novel iron complex for oral delivery.", "title": "" }, { "docid": "neg:1840252_17", "text": "In this paper, we investigate the efficiency of FPGA implementations of AES and AES-like ciphers, specially in the context of authenticated encryption. We consider the encryption/decryption and the authentication/verification structures of OCB-like modes (like OTR or SCT modes). Their main advantage is that they are fully parallelisable. 
While this feature has already been used to increase the throughput/performance of hardware implementations, it is usually overlooked while comparing different ciphers. We show how to use it with zero area overhead, leading to a very significant efficiency gain. Additionally, we show that using FPGA technology mapping instead of logic optimization, the area of both the linear and non linear parts of the round function of several AES-like primitives can be reduced, without affecting the run-time performance. We provide the implementation results of two multi-stream implementations of both the LED and AES block ciphers. The AES implementation in this paper achieves an efficiency of 38 Mbps/slice, which is the most efficient implementation in literature, to the best of our knowledge. For LED, achieves 2.5 Mbps/slice on Spartan 3 FPGA, which is 2.57x better than the previous implementation. Besides, we use our new techniques to optimize the FPGA implementation of the CAESAR candidate Deoxys-I in both the encryption only and encryption/decryption settings. Finally, we show that the efficiency gains of the proposed techniques extend to other technologies, such as ASIC, as well.", "title": "" }, { "docid": "neg:1840252_18", "text": "An algorithm is presented to perform connected component labeling of images of arbitrary dimension that are represented by a linear bintree. The bintree is a generalization of the quadtree data structure that enables dealing with images of arbitrary dimension. The linear bintree is a pointerless representation. The algorithm uses an active border which is represented by linked lists instead of arrays. This results in a significant reduction in the space requirements, thereby making it feasible to process threeand higher dimensional images. Analysis of the execution time of the algorithm shows almost linear behavior with respect to the number of leaf nodes in the image, and empirical tests are in agreement. The algorithm can be modified easily to compute a ( d 1)-dimensional boundary measure (e.g., perimeter in two dimensions and surface area in three dimensions) with linear", "title": "" }, { "docid": "neg:1840252_19", "text": "The demand for accurate and reliable positioning in industrial applications, especially in robotics and high-precision machines, has led to the increased use of Harmonic Drives. The unique performance features of harmonic drives, such as high reduction ratio and high torque capacity in a compact geometry, justify their widespread application. However, nonlinear torsional compliance and friction are the most fundamental problems in these components and accurate modelling of the dynamic behaviour is expected to improve the performance of the system. This paper offers a model for torsional compliance of harmonic drives. A statistical measure of variation is defined, by which the reliability of the estimated parameters for different operating conditions, as well as the accuracy and integrity of the proposed model, are quantified. The model performance is assessed by simulation to verify the experimental results. Two test setups have been developed and built, which are employed to evaluate experimentally the behaviour of the system. Each setup comprises a different type of harmonic drive, namely the high load torque and the low load torque harmonic drive. 
The results show an accurate match between the simulation torque obtained from the identified model and the measured torque from the experiment, which indicates the reliability of the proposed model.", "title": "" } ]
1840253
Analysis of Credit Card Fraud Detection Techniques
[ { "docid": "pos:1840253_0", "text": "Credit evaluation is one of the most important and difficult tasks for credit card companies, mortgage companies, banks and other financial institutes. Incorrect credit judgement causes huge financial losses. This work describes the use of an evolutionary-fuzzy system capable of classifying suspicious and non-suspicious credit card transactions. The paper starts with the details of the system used in this work. A series of experiments are described, showing that the complete system is capable of attaining good accuracy and intelligibility levels for real data.", "title": "" }, { "docid": "pos:1840253_1", "text": "We apply Artificial Immune Systems(AIS) [4] for credit card fraud detection and we compare it to other methods such as Neural Nets(NN) [8] and Bayesian Nets(BN) [2], Naive Bayes(NB) and Decision Trees(DT) [13]. Exhaustive search and Genetic Algorithm(GA) [7] are used to select optimized parameters sets, which minimizes the fraud cost for a credit card database provided by a Brazilian card issuer. The specifics of the fraud database are taken into account, such as skewness of data and different costs associated with false positives and negatives. Tests are done with holdout sample sets, and all executions are run using Weka [18], a publicly available software. Our results are consistent with the early result of Maes in [12] which concludes that BN is better than NN, and this occurred in all our evaluated tests. Although NN is widely used in the market today, the evaluated implementation of NN is among the worse methods for our database. In spite of a poor behavior if used with the default parameters set, AIS has the best performance when parameters optimized by GA are used.", "title": "" } ]
[ { "docid": "neg:1840253_0", "text": "This paper presents an end-to-end convolutional neural network (CNN) for 2D-3D exemplar detection. We demonstrate that the ability to adapt the features of natural images to better align with those of CAD rendered views is critical to the success of our technique. We show that the adaptation can be learned by compositing rendered views of textured object models on natural images. Our approach can be naturally incorporated into a CNN detection pipeline and extends the accuracy and speed benefits from recent advances in deep learning to 2D-3D exemplar detection. We applied our method to two tasks: instance detection, where we evaluated on the IKEA dataset [36], and object category detection, where we out-perform Aubry et al. [3] for \"chair\" detection on a subset of the Pascal VOC dataset.", "title": "" }, { "docid": "neg:1840253_1", "text": "This paper presents a graph signal denoising method with the trilateral filter defined in the graph spectral domain. The original trilateral filter (TF) is a data-dependent filter that is widely used as an edge-preserving smoothing method for image processing. However, because of the data-dependency, one cannot provide its frequency domain representation. To overcome this problem, we establish the graph spectral domain representation of the data-dependent filter, i.e., a spectral graph TF (SGTF). This representation enables us to design an effective graph signal denoising filter with a Tikhonov regularization. Moreover, for the proposed graph denoising filter, we provide a parameter optimization technique to search for a regularization parameter that approximately minimizes the mean squared error w.r.t. the unknown graph signal of interest. Comprehensive experimental results validate our graph signal processing-based approach for images and graph signals.", "title": "" }, { "docid": "neg:1840253_2", "text": "The area under the ROC curve (AUC) is a very widely used measure of performance for classification and diagnostic rules. It has the appealing property of being objective, requiring no subjective input from the user. On the other hand, the AUC has disadvantages, some of which are well known. For example, the AUC can give potentially misleading results if ROC curves cross. However, the AUC also has a much more serious deficiency, and one which appears not to have been previously recognised. This is that it is fundamentally incoherent in terms of misclassification costs: the AUC uses different misclassification cost distributions for different classifiers. This means that using the AUC is equivalent to using different metrics to evaluate different classification rules. It is equivalent to saying that, using one classifier, misclassifying a class 1 point is p times as serious as misclassifying a class 0 point, but, using another classifier, misclassifying a class 1 point is P times as serious, where p≠P. This is nonsensical because the relative severities of different kinds of misclassifications of individual points is a property of the problem, not the classifiers which happen to have been chosen. This property is explored in detail, and a simple valid alternative to the AUC is proposed.", "title": "" }, { "docid": "neg:1840253_3", "text": "The use of renewables materials for industrial applications is becoming impellent due to the increasing demand of alternatives to scarce and unrenewable petroleum supplies. 
In this regard, nanocrystalline cellulose, NCC, derived from cellulose, the most abundant biopolymer, is one of the most promising materials. NCC has unique features, interesting for the development of new materials: the abundance of the source cellulose, its renewability and environmentally benign nature, its mechanical properties and its nano-scaled dimensions open a wide range of possible properties to be discovered. One of the most promising uses of NCC is in polymer matrix nanocomposites, because it can provide a significant reinforcement. This review provides an overview on this emerging nanomaterial, focusing on extraction procedures, especially from lignocellulosic biomass, and on technological developments and applications of NCC-based materials. Challenges and future opportunities of NCC-based materials will be are discussed as well as obstacles remaining for their large use.", "title": "" }, { "docid": "neg:1840253_4", "text": "In this paper, we utilize distributed word representations (i.e., word embeddings) to analyse the representation of semantics in brain activity. The brain activity data were recorded using functional magnetic resonance imaging (fMRI) when subjects were viewing words. First, we analysed the functional selectivity of different cortex areas by calculating the correlations between neural responses and several types of word representations, including skipgram word embeddings, visual semantic vectors, and primary visual features. The results demonstrated consistency with existing neuroscientific knowledge. Second, we utilized behavioural data as the semantic ground truth to measure their relevance with brain activity. A method to estimate word embeddings under the constraints of brain activity similarities is further proposed based on the semantic word embedding (SWE) model. The experimental results show that the brain activity data are significantly correlated with the behavioural data of human judgements on semantic similarity. The correlations between the estimated word embeddings and the semantic ground truth can be effectively improved after integrating the brain activity data for learning, which implies that semantic patterns in neural representations may exist that have not been fully captured by state-of-the-art word embeddings derived from text corpora.", "title": "" }, { "docid": "neg:1840253_5", "text": "The genetic divergence, population genetic structure, and possible speciation of the Korean firefly, Pyrocoelia rufa, were investigated on the midsouthern Korean mainland, coastal islets, a remote offshore island, Jedu-do, and Tsushima Island in Japan. Analysis of DNA sequences from the mitochondrial COI protein-coding gene revealed 20 mtDNA-sequence-based haplotypes with a maximum divergence of 5.5%. Phylogenetic analyses using PAUP, PHYLIP, and networks subdivided the P. rufa into two clades (termed clade A and B) and the minimum nucleotide divergence between them was 3.7%. Clade A occurred throughout the Korean mainland and the coastal islets and Tsushima Island in Japan, whereas clade B was exclusively found on Jeju-do Island. In the analysis of the population genetic structure, clade B formed an independent phylogeographic group, but clade A was further subdivided into three groups: two covering western and eastern parts of the Korean peninsula, respectively, and the other occupying one eastern coastal islet and Japanese Tsushima Island. Considering both phylogeny and population structure of P. 
rufa, the Jeju-do Island population is obviously differentiated from other P. rufa populations, but the Tsushima Island population was a subset of the Korean coastal islet, Geoje. We interpreted the isolation of the Jeju-do population and the grouping of Tsushima Island with Korean coastal islets in terms of Late Pleistocene–Holocene events. The eastern–western subdivision on the Korean mainland was interpreted partially by the presence of a large major mountain range, which bisects the midpart of the Korean peninsula into western and eastern parts.", "title": "" }, { "docid": "neg:1840253_6", "text": "In this paper we introduce a framework for learning from RDF data using graph kernels that count substructures in RDF graphs, which systematically covers most of the existing kernels previously defined and provides a number of new variants. Our definitions include fast kernel variants that are computed directly on the RDF graph. To improve the performance of these kernels we detail two strategies. The first strategy involves ignoring the vertex labels that have a low frequency among the instances. Our second strategy is to remove hubs to simplify the RDF graphs. We test our kernels in a number of classification experiments with real-world RDF datasets. Overall the kernels that count subtrees show the best performance. However, they are closely followed by simple bag of labels baseline kernels. The direct kernels substantially decrease computation time, while keeping performance the same. For the walks counting kernel the decrease in computation time of the approximation is so large that it thereby becomes a computationally viable kernel to use. Ignoring low frequency labels improves the performance for all datasets. The hub removal algorithm increases performance on two out of three of our smaller datasets, but has little impact when used on our larger datasets.", "title": "" }, { "docid": "neg:1840253_7", "text": "Underground is a challenging environment for wireless communication since the propagation medium is no longer air but soil, rock and water. The well established wireless communication techniques using electromagnetic (EM) waves do not work well in this environment due to three problems: high path loss, dynamic channel condition and large antenna size. New techniques using magnetic induction (MI) can solve two of the three problems (dynamic channel condition and large antenna size), but may still cause even higher path loss. In this paper, a complete characterization of the underground MI communication channel is provided. Based on the channel model, the MI waveguide technique for communication is developed in order to reduce the MI path loss. The performance of the traditional EM wave systems, the current MI systems and our improved MI waveguide system are quantitatively compared. The results reveal that our MI waveguide system has much lower path loss than the other two cases for any channel conditions.", "title": "" }, { "docid": "neg:1840253_8", "text": "We propose a non-permanent add-on that enables plenoptic imaging with standard cameras. Our design is based on a physical copying mechanism that multiplies a sensor image into a number of identical copies that still carry the plenoptic information of interest. Via different optical filters, we can then recover the desired information. A minor modification of the design also allows for aperture sub-sampling and, hence, light-field imaging. 
As the filters in our design are exchangeable, a reconfiguration for different imaging purposes is possible. We show in a prototype setup that high dynamic range, multispectral, polarization, and light-field imaging can be achieved with our design.", "title": "" }, { "docid": "neg:1840253_9", "text": "Pair Programming is an innovative collaborative software development methodology. Anecdotal and empirical evidence suggests that this agile development method produces better quality software in reduced time with higher levels of developer satisfaction. To date, little explanation has been offered as to why these improved performance outcomes occur. In this qualitative study, we focus on how individual differences, and specifically task conflict, impact results of the collaborative software development process and related outcomes. We illustrate that low to moderate levels of task conflict actually enhance performance, while high levels mitigate otherwise anticipated positive results.", "title": "" }, { "docid": "neg:1840253_10", "text": "Studies have recommended usability criteria for evaluating Enterprise Resource Planning (ERP) systems. However these criteria do not provide sufficient qualitative information regarding the behaviour of users when interacting with the user interface of these systems. A triangulation technique, including the use of time diaries, can be used in Human Computer Interaction (HCI) research for providing additional qualitative data that cannot be accurately collected by experimental or even observation means alone.\n Limited studies have been performed on the use of time diaries in a triangulation approach as an HCI research method for the evaluation of the usability of ERP systems. This paper reports on a case study where electronic time diaries were used in conjunction with other HCI research methods, namely, surveys and usability questionnaires, in order to evaluate the usability of an ERP system. The results of the study show that a triangulation technique including the use of time diaries is a rich and useful method that allows more flexibility for respondents and can be used to help understand user behaviour when interacting with ERP systems. A thematic analysis of the qualitative data collected from the time diaries validated the quantitative data and highlighted common problem areas encountered during typical tasks performed with the ERP system. An improved understanding of user behaviour enabled the redesign of the tasks performed during the ERP learning process and could provide guidance to ERP designers for improving the usability and ease of learning of ERP systems.", "title": "" }, { "docid": "neg:1840253_11", "text": "Face and eye detection algorithms are deployed in a wide variety of applications. Unfortunately, there has been no quantitative comparison of how these detectors perform under difficult circumstances. We created a dataset of low light and long distance images which possess some of the problems encountered by face and eye detectors solving real world problems. The dataset we created is composed of reimaged images (photohead) and semi-synthetic heads imaged under varying conditions of low light, atmospheric blur, and distances of 3m, 50m, 80m, and 200m. This paper analyzes the detection and localization performance of the participating face and eye algorithms compared with the Viola Jones detector and four leading commercial face detectors. 
Performance is characterized under the different conditions and parameterized by per-image brightness and contrast. In localization accuracy for eyes, the groups/companies focusing on long-range face detection outperform leading commercial applications.", "title": "" }, { "docid": "neg:1840253_12", "text": "In this paper, we describe the development of CiteSpace as an integrated environment for identifying and tracking thematic trends in scientific literature. The goal is to simplify the process of finding not only highly cited clusters of scientific articles, but also pivotal points and trails that are likely to characterize fundamental transitions of a knowledge domain as a whole. The trails of an advancing research field are captured through a sequence of snapshots of its intellectual structure over time in the form of Pathfinder networks. These networks are subsequently merged with a localized pruning algorithm. Pivotal points in the merged network are algorithmically identified and visualized using the betweenness centrality metric. An example of finding clinical evidence associated with reducing risks of heart diseases is included to illustrate how CiteSpace could be used. The contribution of the work is its integration of various change detection algorithms and interactive visualization capabilities to simply users' tasks.", "title": "" }, { "docid": "neg:1840253_13", "text": "This work investigates the effectiveness of learning to rank methods for entity search. Entities are represented by multi-field documents constructed from their RDF triples, and field-based text similarity features are extracted for query-entity pairs. State-of-the-art learning to rank methods learn models for ad-hoc entity search. Our experiments on an entity search test collection based on DBpedia confirm that learning to rank methods are as powerful for ranking entities as for ranking documents, and establish a new state-of-the-art for accuracy on this benchmark dataset.", "title": "" }, { "docid": "neg:1840253_14", "text": "This paper presents the findings of two studies that replicate previous work by Fred Davis on the subject of perceived usefulness, ease of use, and usage of information technology. The two studies focus on evaluating the psychometric properties of the ease of use and usefulness scales, while examining the relationship between ease of use, usefulness, and system usage. Study 1 provides a strong assessment of the convergent validity of the two scales by examining heterogeneous user groups dealing with heterogeneous implementations of messaging technology. In addition, because one might expect users to share similar perspectives about voice and electronic mail, the study also represents a strong test of discriminant validity. In this study a total of 118 respondents from 10 different organizations were surveyed for their attitudes toward two messaging technologies: voice and electronic mail. Study 2 complements the approach taken in Study 1 by focusing on the ability to demonstrate discriminant validity. Three popular software applications (WordPerfect, Lotus 1-2-3, and Harvard Graphics) were examined based on the expectation that they would all be rated highly on both scales. In this study a total of 73 users rated the three packages in terms of ease of use and usefulness. The results of the studies demonstrate reliable and valid scales for measurement of perceived ease of use and usefulness. 
In addition, the paper tests the relationships between ease of use, usefulness, and usage using structural equation modelling. The results of this model are consistent with previous research for Study 1, suggesting that usefulness is an important determinant of system use. For Study 2 the results are somewhat mixed, but indicate the importance of both ease of use and usefulness. Differences in conditions of usage are explored to explain these findings.", "title": "" }, { "docid": "neg:1840253_15", "text": "A novel dual-mode resonator with square-patch or corner-cut elements located at four corners of a conventional microstrip loop resonator is proposed. One of these patches or corner cuts is called the perturbation element, while the others are called reference elements. In the proposed design method, the transmission zeros are created or eliminated without sacrificing the passband response by changing the perturbation's size depending on the size of the reference elements. A simple transmission-line model is used to calculate the frequencies of the two transmission zeros. It is shown that the nature of the coupling between the degenerate modes determines the type of filter characteristic, whether it is Chebyshev or elliptic. Finally, two dual-mode microstrip bandpass filters are designed and realized using degenerate modes of the novel dual-mode resonator. The filters are evaluated by experiment and simulation with very good agreement.", "title": "" }, { "docid": "neg:1840253_16", "text": "Through the Freedom of Information and Protection of Privacy Act, we obtained design documents, called PAR Sheets, for slot machine games that are in use in Ontario, Canada. From our analysis of these PAR Sheets and observations from playing and watching others play these games, we report on the design of the structural characteristics of Ontario slots and their implications for problem gambling. We discuss characteristics such as speed of play, stop buttons, bonus modes, hand-pays, nudges, near misses, how some wins are in fact losses, and how two identical looking slot machines can have very different payback percentages. We then discuss how these characteristics can lead to multi-level reinforcement schedules (different reinforcement schedules for frequent and infrequent gamblers playing the same game) and how they may provide an illusion of control and contribute in other ways to irrational thinking, all of which are known risk factors for problem gambling.", "title": "" }, { "docid": "neg:1840253_17", "text": "Generating images of texture mapped geometry requires projecting surfaces onto a two-dimensional screen. If this projection involves perspective, then a division must be performed at each pixel of the projected surface in order to correctly calculate texture map coordinates. We show how a simple extension to perspective-comect texture mapping can be used to create various lighting effects, These include arbitrary projection of two-dimensional images onto geometry, realistic spotlights, and generation of shadows using shadow maps[ 10]. These effects are obtained in real time using hardware that performs correct texture mapping. CR", "title": "" }, { "docid": "neg:1840253_18", "text": "We show that the Learning with Errors (LWE) problem is classically at least as hard as standard worst-case lattice problems. 
Previously this was only known under quantum reductions.\n Our techniques capture the tradeoff between the dimension and the modulus of LWE instances, leading to a much better understanding of the landscape of the problem. The proof is inspired by techniques from several recent cryptographic constructions, most notably fully homomorphic encryption schemes.", "title": "" }, { "docid": "neg:1840253_19", "text": "We explain why we feel that the comparison between Common Lisp and Fortran in a recent article by Fateman et al. in this journal is not entirely fair.", "title": "" } ]
1840254
EX2: Exploration with Exemplar Models for Deep Reinforcement Learning
[ { "docid": "pos:1840254_0", "text": "Efficient exploration remains a major challenge for reinforcement learning (RL). Common dithering strategies for exploration, such as -greedy, do not carry out temporally-extended (or deep) exploration; this can lead to exponentially larger data requirements. However, most algorithms for statistically efficient RL are not computationally tractable in complex environments. Randomized value functions offer a promising approach to efficient exploration with generalization, but existing algorithms are not compatible with nonlinearly parameterized value functions. As a first step towards addressing such contexts we develop bootstrapped DQN. We demonstrate that bootstrapped DQN can combine deep exploration with deep neural networks for exponentially faster learning than any dithering strategy. In the Arcade Learning Environment bootstrapped DQN substantially improves learning speed and cumulative performance across most games.", "title": "" }, { "docid": "pos:1840254_1", "text": "Exploration in complex domains is a key challenge in reinforcement learning, especially for tasks with very sparse rewards. Recent successes in deep reinforcement learning have been achieved mostly using simple heuristic exploration strategies such as -greedy action selection or Gaussian control noise, but there are many tasks where these methods are insufficient to make any learning progress. Here, we consider more complex heuristics: efficient and scalable exploration strategies that maximize a notion of an agent’s surprise about its experiences via intrinsic motivation. We propose to learn a model of the MDP transition probabilities concurrently with the policy, and to form intrinsic rewards that approximate the KL-divergence of the true transition probabilities from the learned model. One of our approximations results in using surprisal as intrinsic motivation, while the other gives the k-step learning progress. We show that our incentives enable agents to succeed in a wide range of environments with high-dimensional state spaces and very sparse rewards, including continuous control tasks and games in the Atari RAM domain, outperforming several other heuristic exploration techniques.", "title": "" } ]
[ { "docid": "neg:1840254_0", "text": "HTTP video streaming, employed by most of the video-sharing websites, allows users to control the video playback using, for example, pausing and switching the bit rate. These user-viewing activities can be used to mitigate the temporal structure impairments of the video quality. On the other hand, other activities, such as mouse movement, do not help reduce the impairment level. In this paper, we have performed subjective experiments to analyze user-viewing activities and correlate them with network path performance and user quality of experience. The results show that network measurement alone may miss important information about user dissatisfaction with the video quality. Moreover, video impairments can trigger user-viewing activities, notably pausing and reducing the screen size. By including the pause events into the prediction model, we can increase its explanatory power.", "title": "" }, { "docid": "neg:1840254_1", "text": "The basic mechanics of human locomotion are associated with vaulting over stiff legs in walking and rebounding on compliant legs in running. However, while rebounding legs well explain the stance dynamics of running, stiff legs cannot reproduce that of walking. With a simple bipedal spring-mass model, we show that not stiff but compliant legs are essential to obtain the basic walking mechanics; incorporating the double support as an essential part of the walking motion, the model reproduces the characteristic stance dynamics that result in the observed small vertical oscillation of the body and the observed out-of-phase changes in forward kinetic and gravitational potential energies. Exploring the parameter space of this model, we further show that it not only combines the basic dynamics of walking and running in one mechanical system, but also reveals these gaits to be just two out of the many solutions to legged locomotion offered by compliant leg behaviour and accessed by energy or speed.", "title": "" }, { "docid": "neg:1840254_2", "text": "This paper presents the design, numerical analysis and measurements of a planar bypass balun that provides 1:4 impedance transformations between the unbalanced microstrip (MS) and balanced coplanar strip line (CPS). This type of balun is suitable for operation with small antennas fed with balanced a (parallel wire) transmission line, i.e. wire, planar dipoles and loop antennas. The balun has been applied to textile CPS-fed loop antennas, designed for operations below 1GHz. The performance of a loop antenna with the balun is described, as well as an idea of incorporating rigid circuits with flexible textile structures.", "title": "" }, { "docid": "neg:1840254_3", "text": "This paper presents a novel approach for haptic object recognition with an anthropomorphic robot hand. Firstly, passive degrees of freedom are introduced to the tactile sensor system of the robot hand. This allows the planar tactile sensor patches to optimally adjust themselves to the object's surface and to acquire additional sensor information for shape reconstruction. Secondly, this paper presents an approach to classify an object directly from the haptic sensor data acquired by a palpation sequence with the robot hand - without building a 3d-model of the object. Therefore, a finite set of essential finger positions and tactile contact patterns are identified which can be used to describe a single palpation step. 
A palpation sequence can then be merged into a simple statistical description of the object and finally be classified. The proposed approach for haptic object recognition and the new tactile sensor system are evaluated with an anthropomorphic robot hand.", "title": "" }, { "docid": "neg:1840254_4", "text": "In this article we analyze the response of Time of Flight cameras (active sensors) for close range imaging under three different illumination conditions and compare the results with stereo vision (passive) sensors. Time of Flight sensors are sensitive to ambient light and have low resolution but deliver high frame rate accurate depth data under suitable conditions. We introduce some metrics for performance evaluation over a small region of interest. Based on these metrics, we analyze and compare depth imaging of leaf under indoor (room) and outdoor (shadow and sunlight) conditions by varying exposures of the sensors. Performance of three different time of flight cameras (PMD CamBoard, PMD CamCube and SwissRanger SR4000) is compared against selected stereo-correspondence algorithms (local correlation and graph cuts). PMD CamCube has better cancellation of sunlight, followed by CamBoard, while SwissRanger SR4000 performs poorly under sunlight. stereo vision is more robust to ambient illumination and provides high resolution depth data but it is constrained by texture of the object along with computational efficiency. Graph cut based stereo correspondence algorithm can better retrieve the shape of the leaves but is computationally much more expensive as compared to local correlation. Finally, we propose a method to increase the dynamic range of the ToF cameras for a scene involving both shadow and sunlight exposures at the same time using camera flags (PMD) or confidence matrix (SwissRanger).", "title": "" }, { "docid": "neg:1840254_5", "text": "Most recent approaches to monocular 3D pose estimation rely on Deep Learning. They either train a Convolutional Neural Network to directly regress from an image to a 3D pose, which ignores the dependencies between human joints, or model these dependencies via a max-margin structured learning framework, which involves a high computational cost at inference time. In this paper, we introduce a Deep Learning regression architecture for structured prediction of 3D human pose from monocular images or 2D joint location heatmaps that relies on an overcomplete autoencoder to learn a high-dimensional latent pose representation and accounts for joint dependencies. We further propose an efficient Long Short-Term Memory network to enforce temporal consistency on 3D pose predictions. We demonstrate that our approach achieves state-of-the-art performance both in terms of structure preservation and prediction accuracy on standard 3D human pose estimation benchmarks.", "title": "" }, { "docid": "neg:1840254_6", "text": "During the early phase of replication, HIV reverse transcribes its RNA and crosses the nuclear envelope while escaping host antiviral defenses. The host factor Cyclophilin A (CypA) is essential for these steps and binds the HIV capsid; however, the mechanism underlying this effect remains elusive. Here, we identify related capsid mutants in HIV-1, HIV-2, and SIVmac that are restricted by CypA. This antiviral restriction of mutated viruses is conserved across species and prevents nuclear import of the viral cDNA. Importantly, the inner nuclear envelope protein SUN2 is required for the antiviral activity of CypA. 
We show that wild-type HIV exploits SUN2 in primary CD4+ T cells as an essential host factor that is required for the positive effects of CypA on reverse transcription and infection. Altogether, these results establish essential CypA-dependent functions of SUN2 in HIV infection at the nuclear envelope.", "title": "" }, { "docid": "neg:1840254_7", "text": "As the visualization field matures, an increasing number of general toolkits are developed to cover a broad range of applications. However, no general tool can incorporate the latest capabilities for all possible applications, nor can the user interfaces and workflows be easily adjusted to accommodate all user communities. As a result, users will often chose either substandard solutions presented in familiar, customized tools or assemble a patchwork of individual applications glued through ad-hoc scripts and extensive, manual intervention. Instead, we need the ability to easily and rapidly assemble the best-in-task tools into custom interfaces and workflows to optimally serve any given application community. Unfortunately, creating such meta-applications at the API or SDK level is difficult, time consuming, and often infeasible due to the sheer variety of data models, design philosophies, limits in functionality, and the use of closed commercial systems. In this paper, we present the ManyVis framework which enables custom solutions to be built both rapidly and simply by allowing coordination and communication across existing unrelated applications. ManyVis allows users to combine software tools with complementary characteristics into one virtual application driven by a single, custom-designed interface.", "title": "" }, { "docid": "neg:1840254_8", "text": "We consider the problem of content-based spam filtering for short text messages that arise in three contexts: mobile (SMS) communication, blog comments, and email summary information such as might be displayed by a low-bandwidth client. Short messages often consist of only a few words, and therefore present a challenge to traditional bag-of-words based spam filters. Using three corpora of short messages and message fields derived from real SMS, blog, and spam messages, we evaluate feature-based and compression-model-based spam filters. We observe that bag-of-words filters can be improved substantially using different features, while compression-model filters perform quite well as-is. We conclude that content filtering for short messages is surprisingly effective.", "title": "" }, { "docid": "neg:1840254_9", "text": "The success of Android phones makes them a prominent target for malicious software, in particular since the Android permission system turned out to be inadequate to protect the user against security and privacy threats. This work presents AppGuard, a powerful and flexible system for the enforcement of user-customizable security policies on untrusted Android applications. AppGuard does not require any changes to a smartphone’s firmware or root access. Our system offers complete mediation of security-relevant methods based on callee-site inline reference monitoring. We demonstrate the general applicability of AppGuard by several case studies, e.g., removing permissions from overly curious apps as well as defending against several recent real-world attacks on Android phones. Our technique exhibits very little space and runtime overhead. 
AppGuard is publicly available, has been invited to the Samsung Apps market, and has had more than 500,000 downloads so far.", "title": "" }, { "docid": "neg:1840254_10", "text": "The goal of scattered data interpolation techniques is to construct a (typically smooth) function from a set of unorganized samples. These techniques have a wide range of applications in computer graphics. For instance they can be used to model a surface from a set of sparse samples, to reconstruct a BRDF from a set of measurements, to interpolate motion capture data, or to compute the physical properties of a fluid. This course will survey and compare scattered interpolation algorithms and describe their applications in computer graphics. Although the course is focused on applying these techniques, we will introduce some of the underlying mathematical theory and briefly mention numerical considerations.", "title": "" }, { "docid": "neg:1840254_11", "text": "Lately, fire outbreaks are common issues and its occurrence could cause severe damage toward nature and human properties. Thus, fire detection has been an important issue to protect human life and property and has increases in recent years. This paper focusing on the algorithm of fire detection using image processing techniques i.e. colour pixel classification. This Fire detection system does not require any special type of sensors and it has the ability to monitor large area and depending on the quality of camera used. The objective of this research is to design a methodology for fire detection using image as input. The propose algorithm is using colour pixel classification. This system used image enhancement technique, RGB and YCbCr colour models with given conditions to separate fire pixel from background and isolates luminance from chrominance contrasted from original image to detect fire. The propose system achieved 90% fire detection rate on average.", "title": "" }, { "docid": "neg:1840254_12", "text": "In the last five years, deep learning methods and particularly Convolutional Neural Networks (CNNs) have exhibited excellent accuracies in many pattern classification problems. Most of the state-of-the-art models apply data-augmentation techniques at the training stage. This paper provides a brief tutorial on data preprocessing and shows its benefits by using the competitive MNIST handwritten digits classification problem. We show and analyze the impact of different preprocessing techniques on the performance of three CNNs, LeNet, Network3 and DropConnect, together with their ensembles. The analyzed transformations are, centering, elastic deformation, translation, rotation and different combinations of them. Our analysis demonstrates that data-preprocessing techniques, such as the combination of elastic deformation and rotation, together with ensembles have a high potential to further improve the state-of-the-art accuracy in MNIST classification.", "title": "" }, { "docid": "neg:1840254_13", "text": "We propose to unify a variety of existing semantic classification tasks, such as semantic role labeling, anaphora resolution, and paraphrase detection, under the heading of Recognizing Textual Entailment (RTE). We present a general strategy to automatically generate one or more sentential hypotheses based on an input sentence and pre-existing manual semantic annotations. The resulting suite of datasets enables us to probe a statistical RTE model’s performance on different aspects of semantics. 
We demonstrate the value of this approach by investigating the behavior of a popular neural network RTE model.", "title": "" }, { "docid": "neg:1840254_14", "text": "Considering the shift of museums towards digital experiences that can satiate the interests of their young audiences, we suggest an integrated schema for socially engaging large visitor groups. As a means to present our position we propose a framework for audience involvement with complex educational material, combining serious games and virtual environments along with a theory of contextual learning in museums. We describe the research methodology for validating our framework, including the description of a testbed application and results from existing studies with children in schools, summer camps, and a museum. Such findings serve both as evidence for the applicability of our position and as a guidepost for the direction we should move to foster richer social engagement of young crowds. Author", "title": "" }, { "docid": "neg:1840254_15", "text": "Human aesthetic preference in the visual domain is reviewed from definitional, methodological, empirical, and theoretical perspectives. Aesthetic science is distinguished from the perception of art and from philosophical treatments of aesthetics. The strengths and weaknesses of important behavioral techniques are presented and discussed, including two-alternative forced-choice, rank order, subjective rating, production/adjustment, indirect, and other tasks. Major findings are reviewed about preferences for colors (single colors, color combinations, and color harmony), spatial structure (low-level spatial properties, shape properties, and spatial composition within a frame), and individual differences in both color and spatial structure. Major theoretical accounts of aesthetic response are outlined and evaluated, including explanations in terms of mere exposure effects, arousal dynamics, categorical prototypes, ecological factors, perceptual and conceptual fluency, and the interaction of multiple components. The results of the review support the conclusion that aesthetic response can be studied rigorously and meaningfully within the framework of scientific psychology.", "title": "" }, { "docid": "neg:1840254_16", "text": "Big data is flowing into every area of our life, professional and personal. Big data is defined as datasets whose size is beyond the ability of typical software tools to capture, store, manage and analyze, due to the time and memory complexity. Velocity is one of the main properties of big data. In this demo, we present SAMOA (Scalable Advanced Massive Online Analysis), an open-source platform for mining big data streams. It provides a collection of distributed streaming algorithms for the most common data mining and machine learning tasks such as classification, clustering, and regression, as well as programming abstractions to develop new algorithms. It features a pluggable architecture that allows it to run on several distributed stream processing engines such as Storm, S4, and Samza. SAMOA is written in Java and is available at http://samoa-project.net under the Apache Software License version 2.0.", "title": "" }, { "docid": "neg:1840254_17", "text": "Current approaches for visual--inertial odometry (VIO) are able to attain highly accurate state estimation via nonlinear optimization. 
However, real-time optimization quickly becomes infeasible as the trajectory grows over time; this problem is further emphasized by the fact that inertial measurements come at high rate, hence, leading to the fast growth of the number of variables in the optimization. In this paper, we address this issue by preintegrating inertial measurements between selected keyframes into single relative motion constraints. Our first contribution is a preintegration theory that properly addresses the manifold structure of the rotation group. We formally discuss the generative measurement model as well as the nature of the rotation noise and derive the expression for the maximum a posteriori state estimator. Our theoretical development enables the computation of all necessary Jacobians for the optimization and a posteriori bias correction in analytic form. The second contribution is to show that the preintegrated inertial measurement unit model can be seamlessly integrated into a visual--inertial pipeline under the unifying framework of factor graphs. This enables the application of incremental-smoothing algorithms and the use of a structureless model for visual measurements, which avoids optimizing over the 3-D points, further accelerating the computation. We perform an extensive evaluation of our monocular VIO pipeline on real and simulated datasets. The results confirm that our modeling effort leads to an accurate state estimation in real time, outperforming state-of-the-art approaches.", "title": "" }, { "docid": "neg:1840254_18", "text": "We present novel method for image-text multi-modal representation learning. In our knowledge, this work is the first approach of applying adversarial learning concept to multi-modal learning and not exploiting image-text pair information to learn multi-modal feature. We only use category information in contrast with most previous methods using image-text pair information for multi-modal embedding. In this paper, we show that multi-modal feature can be achieved without image-text pair information and our method makes more similar distribution with image and text in multi-modal feature space than other methods which use image-text pair information. And we show our multi-modal feature has universal semantic information, even though it was trained for category prediction. Our model is end-to-end backpropagation, intuitive and easily extended to other multimodal learning work.", "title": "" }, { "docid": "neg:1840254_19", "text": "The cost of moving and storing data is still a fundamental concern for computer architects. Inefficient handling of data can be attributed to conventional architectures being oblivious to the nature of the values that these data bits carry. We observe the phenomenon of spatio-value similarity, where data elements that are approximately similar in value exhibit spatial regularity in memory. This is inherent to 1) the data values of real-world applications, and 2) the way we store data structures in memory. We propose the Bunker Cache, a design that maps similar data to the same cache storage location based solely on their memory address, sacrificing some application quality loss for greater efficiency. The Bunker Cache enables performance gains (ranging from 1.08x to 1.19x) via reduced cache misses and energy savings (ranging from 1.18x to 1.39x) via reduced off-chip memory accesses and lower cache storage requirements. 
The Bunker Cache requires only modest changes to cache indexing hardware, integrating easily into commodity systems.", "title": "" } ]
1840255
Deep Convolutional Neural Networks for Spatiotemporal Crime Prediction
[ { "docid": "pos:1840255_0", "text": "Twitter is used extensively in the United States as well as globally, creating many opportunities to augment decision support systems with Twitterdriven predictive analytics. Twitter is an ideal data source for decision support: its users, who number in the millions, publicly discuss events, emotions, and innumerable other topics; its content is authored and distributed in real time at no charge; and individual messages (also known as tweets) are often tagged with precise spatial and temporal coordinates. This article presents research investigating the use of spatiotemporally tagged tweets for crime prediction. We use Twitter-specific linguistic analysis and statistical topic modeling to automatically identify discussion topics across a major city in the United States. We then incorporate these topics into a crime prediction model and show that, for 19 of the 25 crime types we studied, the addition of Twitter data improves crime prediction performance versus a standard approach based on kernel density estimation. We identify a number of performance bottlenecks that could impact the use of Twitter in an actual decision support system. We also point out important areas of future work for this research, including deeper semantic analysis of message con∗Email address: msg8u@virginia.edu; Tel.: 1+ 434 924 5397; Fax: 1+ 434 982 2972 Preprint submitted to Decision Support Systems January 14, 2014 tent, temporal modeling, and incorporation of auxiliary data sources. This research has implications specifically for criminal justice decision makers in charge of resource allocation for crime prevention. More generally, this research has implications for decision makers concerned with geographic spaces occupied by Twitter-using individuals.", "title": "" }, { "docid": "pos:1840255_1", "text": "Short texts usually encounter data sparsity and ambiguity problems in representations for their lack of context. In this paper, we propose a novel method to model short texts based on semantic clustering and convolutional neural network. Particularly, we first discover semantic cliques in embedding spaces by a fast clustering algorithm. Then, multi-scale semantic units are detected under the supervision of semantic cliques, which introduce useful external knowledge for short texts. These meaningful semantic units are combined and fed into convolutional layer, followed by max-pooling operation. Experimental results on two open benchmarks validate the effectiveness of the proposed method.", "title": "" } ]
[ { "docid": "neg:1840255_0", "text": "Probabilistic inference algorithms for find­ ing the most probable explanation, the max­ imum aposteriori hypothesis, and the maxi­ mum expected utility and for updating belief are reformulated as an elimination-type al­ gorithm called bucket elimination. This em­ phasizes the principle common to many of the algorithms appearing in that literature and clarifies their relationship to nonserial dynamic programming algorithms. We also present a general way of combining condition­ ing and elimination within this framework. Bounds on complexity are given for all the al­ gorithms as a function of the problem's struc­ ture.", "title": "" }, { "docid": "neg:1840255_1", "text": "RGB-D cameras provide both color images and per-pixel depth stimates. The richness of this data and the recent development of low-c ost sensors have combined to present an attractive opportunity for mobile robot ics research. In this paper, we describe a system for visual odometry and mapping using an RGB-D camera, and its application to autonomous flight. By leveraging resu lts from recent stateof-the-art algorithms and hardware, our system enables 3D fl ight in cluttered environments using only onboard sensor data. All computation an d se sing required for local position control are performed onboard the vehicle, r educing the dependence on unreliable wireless links. However, even with accurate 3 D sensing and position estimation, some parts of the environment have more percept ual structure than others, leading to state estimates that vary in accuracy across the environment. If the vehicle plans a path without regard to how well it can localiz e itself along that path, it runs the risk of becoming lost or worse. We show how the Belief Roadmap (BRM) algorithm (Prentice and Roy, 2009), a belief space extensio n of the Probabilistic Roadmap algorithm, can be used to plan vehicle trajectories that incorporate the sensing model of the RGB-D camera. We evaluate the effective ness of our system for controlling a quadrotor micro air vehicle, demonstrate its use for constructing detailed 3D maps of an indoor environment, and discuss its li mitations. Abraham Bachrach and Samuel Prentice contributed equally to this work. Abraham Bachrach, Samuel Prentice, Ruijie He, Albert Huang a nd Nicholas Roy Computer Science and Artificial Intelligence Laboratory, Ma ss chusetts Institute of Technology, Cambridge, MA 02139. e-mail: abachrac, ruijie, albert, prentice, nickroy@mit.ed u Peter Henry, Michael Krainin and Dieter Fox University of Washington, Department of Computer Science & Engi neering, Seattle, WA. e-mail: peter, mkrainin, fox@cs.washington.edu. Daniel Maturana The Robotics Institute, Carnegie Mellon University, Pittsbur gh, PA. e-mail: dimatura@cmu.edu", "title": "" }, { "docid": "neg:1840255_2", "text": "This paper reports a SiC-based solid-state circuit breaker (SSCB) with an adjustable current-time (I-t) tripping profile for both ultrafast short circuit protection and overload protection. The tripping time ranges from 0.5 microsecond to 10 seconds for a fault current ranging from 0.8X to 10X of the nominal current. The I-t tripping profile, adjustable by choosing different resistance values in the analog control circuit, can help avoid nuisance tripping of the SSCB due to inrush transient current. The maximum thermal capability of the 1200V SiC JFET static switch in the SSCB is investigated to set a practical thermal limit for the I-t tripping profile. 
Furthermore, a low fault current ‘blind zone’ limitation of the prior SSCB design is discussed and a new circuit solution is proposed to operate the SSCB even under a low fault current condition. Both simulation and experimental results are reported.", "title": "" }, { "docid": "neg:1840255_3", "text": "We introduce instancewise feature selection as a methodology for model interpretation. Our method is based on learning a function to extract a subset of features that are most informative for each given example. This feature selector is trained to maximize the mutual information between selected features and the response variable, where the conditional distribution of the response variable given the input is the model to be explained. We develop an efficient variational approximation to the mutual information, and show the effectiveness of our method on a variety of synthetic and real data sets using both quantitative metrics and human evaluation.", "title": "" }, { "docid": "neg:1840255_4", "text": "Understanding the behaviors of a software system is very important for performing daily system maintenance tasks. In practice, one way to gain knowledge about the runtime behavior of a system is to manually analyze system logs collected during the system executions. With the increasing scale and complexity of software systems, it has become challenging for system operators to manually analyze system logs. To address these challenges, in this paper, we propose a new approach for contextual analysis of system logs for understanding a system's behaviors. In particular, we first use execution patterns to represent execution structures reflected by a sequence of system logs, and propose an algorithm to mine execution patterns from the program logs. The mined execution patterns correspond to different execution paths of the system. Based on these execution patterns, our approach further learns essential contextual factors (e.g., the occurrences of specific program logs with specific parameter values) that cause a specific branch or path to be executed by the system. The mining and learning results can help system operators to understand a software system's runtime execution logic and behaviors during various tasks such as system problem diagnosis. We demonstrate the feasibility of our approach upon two real-world software systems (Hadoop and Ethereal).", "title": "" }, { "docid": "neg:1840255_5", "text": "Metamorphic testing (MT) is an effective methodology for testing those so-called ``non-testable'' programs (e.g., scientific programs), where it is sometimes very difficult for testers to know whether the outputs are correct. In metamorphic testing, metamorphic relations (MRs) (which specify how particular changes to the input of the program under test would change the output) play an essential role. However, testers may typically have to obtain MRs manually.\n In this paper, we propose a search-based approach to automatic inference of polynomial MRs for a program under test. In particular, we use a set of parameters to represent a particular class of MRs, which we refer to as polynomial MRs, and turn the problem of inferring MRs into a problem of searching for suitable values of the parameters. We then dynamically analyze multiple executions of the program, and use particle swarm optimization to solve the search problem. 
To improve the quality of inferred MRs, we further use MR filtering to remove some inferred MRs.\n We also conducted three empirical studies to evaluate our approach using four scientific libraries (including 189 scientific functions). From our empirical results, our approach is able to infer many high-quality MRs in acceptable time (i.e., from 9.87 seconds to 1231.16 seconds), which are effective in detecting faults with no false detection.", "title": "" }, { "docid": "neg:1840255_6", "text": "In this work, a new base station antenna is proposed. Two separate frequency bands with separate radiating elements are used in each band. The frequency band separation ratio is about 1.3:1. These elements are arranged with different spacing (wider spacing for the lower frequency band, and narrower spacing for the higher frequency band). Isolation between bands inherently exists in this approach. This avoids the grating lobe effect, and mitigates the beam narrowing (dispersion) seen with fixed element spacing covering the whole wide bandwidth. A new low-profile cross dipole is designed, which is integrated in the array with an EBG/AMC structure for reducing the size of low band elements and decreasing coupling at high band.", "title": "" }, { "docid": "neg:1840255_7", "text": "A single-fed CP stacked patch antenna is proposed to cover all the GPS bands, including E5a/E5b for the Galileo system. The small aperture size (lambda/8 at the L5 band) and the single feeding property make this antenna a promising element for small GPS arrays. The design procedures and antenna performances are presented, and issues related to coupling between array elements are discussed.", "title": "" }, { "docid": "neg:1840255_8", "text": "This is a pilot study of the use of “Flash cookies” by popular websites. We find that more than 50% of the sites in our sample are using Flash cookies to store information about the user. Some are using it to “respawn” or re-instantiate HTTP cookies deleted by the user. Flash cookies often share the same values as HTTP cookies, and are even used on government websites to assign unique values to users. Privacy policies rarely disclose the presence of Flash cookies, and user controls for effectuating privacy preferences are", "title": "" }, { "docid": "neg:1840255_9", "text": "This work describes the aerodynamic characteristic for aircraft wing model with and without bird feather like winglet. The aerofoil used to construct the whole structure is NACA 653-218 Rectangular wing and this aerofoil has been used to compare the result with previous research using winglet. The model of the rectangular wing with bird feather like winglet has been fabricated using polystyrene before design using CATIA P3 V5R13 software and finally fabricated in wood. The experimental analysis for the aerodynamic characteristic for rectangular wing without winglet, wing with horizontal winglet and wing with 60 degree inclination winglet for Reynolds number 1.66×10, 2.08×10 and 2.50×10 have been carried out in open loop low speed wind tunnel at the Aerodynamics laboratory in Universiti Putra Malaysia. The experimental result shows 25-30 % reduction in drag coefficient and 10-20 % increase in lift coefficient by using bird feather like winglet for angle of attack of 8 degree. 
Keywords—Aerofoil, Wind tunnel, Winglet, Drag Coefficient.", "title": "" }, { "docid": "neg:1840255_10", "text": "An important prerequisite for successful usage of computer systems and other interactive technology is a basic understanding of the symbols and interaction patterns used in them. This aspect of the broader construct “computer literacy” is used as indicator in the computer literacy scale, which proved to be an economical, reliable and valid instrument for the assessment of computer literacy in older adults.", "title": "" }, { "docid": "neg:1840255_11", "text": "Bitcoin, as well as many of its successors, require the whole transaction record to be reliably acquired by all nodes to prevent double-spending. Recently, many blockchains have been proposed to achieve scale-out throughput by letting nodes only acquire a fraction of the whole transaction set. However, these schemes, e.g., sharding and off-chain techniques, suffer from a degradation in decentralization or the capacity of fault tolerance. In this paper, we show that the complete set of transactions is not a necessity for the prevention of double-spending if the properties of value transfers is fully explored. In other words, we show that a value-transfer ledger like Bitcoin has the potential to scale-out by its nature without sacrificing security or decentralization. Firstly, we give a formal definition for the value-transfer ledger and its distinct features from a generic database. Then, we introduce the blockchain structure with a shared main chain for consensus and an individual chain for each node for recording transactions. A locally executable validation scheme is proposed with uncompromising validity and consistency. A beneficial consequence of our design is that nodes will spontaneously try to reduce their transmission cost by only providing the transactions needed to show that their transactions are not double spend. As a result, the network is sharded as each node only acquires part of the transaction record and a scale-out throughput could be achieved, which we call \"spontaneous sharding\".", "title": "" }, { "docid": "neg:1840255_12", "text": "The designations employed and the presentation of material in this information product do not imply the expression of any opinion whatsoever on the part of the Food and Agriculture Organization of the United Nations (FAO) concerning the legal or development status of any country, territory, city or area or of its authorities, or concerning the delimitation of its frontiers or boundaries. The mention of specific companies or products of manufacturers, whether or not these have been patented, does not imply that these have been endorsed or recommended by FAO in preference to others of a similar nature that are not mentioned. The views expressed in this information product are those of the author(s) and do not necessarily reflect the views of FAO.", "title": "" }, { "docid": "neg:1840255_13", "text": "Contrary to the classical (time-triggered) principle that calculates the control signal in a periodic fashion, an event-driven control is computed and updated only when a certain condition is satisfied. This notably enables to save computations in the control task while ensuring equivalent performance. In this paper, we develop and implement such strategies to control a nonlinear and unstable system, that is the inverted pendulum. We are first interested on the stabilization of the pendulum near its inverted position and propose an event-based control approach. 
This notably demonstrates the efficiency of the event-based scheme even in the case where the system has to be actively actuated to remain upright. We then study the swinging of the pendulum up to the desired position and propose a low-cost control law based on an energy function. The switch between both strategies is also analyzed. A real-time experimentation is realized and shows that a reduction of about 98% and 50% of samples less than the classical scheme is achieved for the swing up and stabilization parts respectively.", "title": "" }, { "docid": "neg:1840255_14", "text": "This paper presents a framework to model the semantic representation of binary relations produced by open information extraction systems. For each binary relation, we infer a set of preferred types on the two arguments simultaneously, and generate a ranked list of type pairs which we call schemas. All inferred types are drawn from the Freebase type taxonomy, which are human readable. Our system collects 171,168 binary relations from ReVerb, and is able to produce top-ranking relation schemas with a mean reciprocal rank of 0.337.", "title": "" }, { "docid": "neg:1840255_15", "text": "This articleii presents the results of video-based Human Robot Interaction (HRI) trials which investigated people’s perceptions of different robot appearances and associated attention-seeking features and behaviors displayed by robots with different appearance and behaviors. The HRI trials studied the participants’ preferences for various features of robot appearance and behavior, as well as their personality attributions towards the robots compared to their own personalities. Overall, participants tended to prefer robots with more human-like appearance and attributes. However, systematic individual differences in the dynamic appearance ratings are not consistent with a universal effect. Introverts and participants with lower emotional stability tended to prefer the mechanical looking appearance to a greater degree than other participants. It is also shown that it is possible to rate individual elements of a particular robot’s behavior and then assess the contribution, or otherwise, of that element to the overall perception of the robot by people. Relating participants’ dynamic appearance ratings of individual robots to independent static appearance ratings provided evidence that could be taken to support a portion of the left hand side of Mori’s theoretically proposed ‘uncanny valley’ diagram. Suggestions for future work are outlined. I.INTRODUCTION Robots that are currently commercially available for use in a domestic environment and which have human interaction features are often orientated towards toy or entertainment functions. In the future, a robot companion which is to find a more generally useful place within a human oriented domestic environment, and thus sharing a private home with a person or family, must satisfy two main criteria (Dautenhahn et al. (2005); Syrdal et al. (2006); Woods et al. (2007)): It must be able to perform a range of useful tasks or functions. It must carry out these tasks or functions in a manner that is socially acceptable and comfortable for people it shares the environment with and/or it interacts with. 
The technical challenges in getting a robot to perform useful tasks are extremely difficult, and many researchers are currently researching into the technical capabilities that will be required to perform useful functions in a human centered environment including navigation, manipulation, vision, speech, sensing, safety, system integration and planning. The second criteria is arguably equally important, because if the robot does not exhibit socially acceptable behavior, then people may reject the robot if it is annoying, irritating, unsettling or frightening to human users. Therefore: How can a robot behave in a socially acceptable manner? Research into social robots is generally contained within the rapidly developing field of Human-Robot Interaction (HRI). For an overview of socially interactive robots (robots designed to interact with humans in a social way) see Fong et al. (2003). Relevant examples of studies and investigations into human reactions to robots include: Goetz et al. (2003) where issues of robot appearance, behavior and task domains were investigated, and Severinson-Eklundh et al. (2003) which documents a longitudinal HRI trial investigating the human perspective of using a robotic assistant over several weeks . Khan (1998), Scopelliti et al. (2004) and Dautenhahn et al. (2005) have surveyed peoples’ views of domestic robots in order to aid the development of an initial design specification for domestic or servant robots. Kanda et al. (2004) presents results from a longitudinal HRI trial with a robot as a social partner and peer tutor aiding children learning English.", "title": "" }, { "docid": "neg:1840255_16", "text": "We live in a world with a population of more than 7.1 Billion, have we ever imagine how many Leaders do we have? Yes, most of us are followers; we live in a world where we follow what have been commanded. The intension of this paper is to equip everyone with some knowledge to know how we can identify who leaders are, are you one of them, and how can we help our-selves and other develop leadership qualities. The Model highlights various traits which are very necessary for leadership. This paper have been investigate and put together after probing almost 30 other research papers. The Principal result we arrived on was that the major/ essential traits which are identified in a Leader are Honesty, Integrity, Drive (Achievement, Motivation, Ambition, Energy, Tenacity and Initiative), Self Confidence, Vision and Cognitive Ability. The Key finding also says that the people with such qualities are not necessary to be in politics, but they are from various walks of life such as major organization, different culture, background, education and ethnicities. Also we found out that just possessing of such traits alone does not guarantee one leadership success as evidence shows that effective leaders are different in nature from most of the other people in certain key respects. So, let us go through the paper to enhance out our mental abilities to search for the Leaders out there.", "title": "" }, { "docid": "neg:1840255_17", "text": "The purpose of image enhancement is to process an acquired image for better contrast and visibility of features of interest for visual examination as well as subsequent computer-aided analysis and diagnosis. Therefore, we have proposed an algorithm for medical images enhancement. In the study, we used top-hat transform, contrast limited histogram equalization and anisotropic diffusion filter methods. 
The system results are quite satisfactory for many different medical images like lung, breast, brain, knee and etc.", "title": "" }, { "docid": "neg:1840255_18", "text": "BACKGROUND\nNovel interventions for treatment-resistant depression (TRD) in adolescents are urgently needed. Ketamine has been studied in adults with TRD, but little information is available for adolescents. This study investigated efficacy and tolerability of intravenous ketamine in adolescents with TRD, and explored clinical response predictors.\n\n\nMETHODS\nAdolescents, 12-18 years of age, with TRD (failure to respond to two previous antidepressant trials) were administered six ketamine (0.5 mg/kg) infusions over 2 weeks. Clinical response was defined as a 50% decrease in Children's Depression Rating Scale-Revised (CDRS-R); remission was CDRS-R score ≤28. Tolerability assessment included monitoring vital signs and dissociative symptoms using the Clinician-Administered Dissociative States Scale (CADSS).\n\n\nRESULTS\nThirteen participants (mean age 16.9 years, range 14.5-18.8 years, eight biologically male) completed the protocol. Average decrease in CDRS-R was 42.5% (p = 0.0004). Five (38%) adolescents met criteria for clinical response. Three responders showed sustained remission at 6-week follow-up; relapse occurred within 2 weeks for the other two responders. Ketamine infusions were generally well tolerated; dissociative symptoms and hemodynamic symptoms were transient. Higher dose was a significant predictor of treatment response.\n\n\nCONCLUSIONS\nThese results demonstrate the potential role for ketamine in treating adolescents with TRD. Limitations include the open-label design and small sample; future research addressing these issues are needed to confirm these results. Additionally, evidence suggested a dose-response relationship; future studies are needed to optimize dose. Finally, questions remain regarding the long-term safety of ketamine as a depression treatment; more information is needed before broader clinical use.", "title": "" }, { "docid": "neg:1840255_19", "text": "Theoretical analysis of the connection between taxation and risktaking has mainly been concerned with the effect of taxes on portfolio decisions of consumers, Mossin (1968b) and Stiglitz (1969). However, there are some problems which are not naturally classified under this heading and which, although of considerable practical interest, have been left out of the theoretical discussions. One such problem is tax evasion. This takes many forms, and one can hardly hope to give a completely general analysis of all these. Our objective in this paper is therefore the more limited one of analyzing the individual taxpayer’s decision on whether and to what extent to avoid taxes by deliberate underreporting. On the one hand our approach is related to the studies of economics of criminal activity, as e.g. in the papers by Becker ( 1968) and by Tulkens and Jacquemin (197 1). On the other hand it is related to the analysis of optimal portfolio and insurance policies in the economics of uncertainty, as in the work by Arrow ( 1970), Mossin ( 1968a) and several others. We shall start by considering a simple static model where this decision is the only one with which the individual is concerned, so that we ignore the interrelationships that probably exist with other types of economic choices. After a detailed study of this simple case (sections", "title": "" } ]
1840256
Agent-based decision-making process in airport ground handling management
[ { "docid": "pos:1840256_0", "text": "Manufacturing has faced significant changes during the last years, namely the move from a local economy towards a global and competitive economy, with markets demanding for highly customized products of high quality at lower costs, and with short life cycles. In this environment, manufacturing enterprises, to remain competitive, must respond closely to customer demands by improving their flexibility and agility, while maintaining their productivity and quality. Dynamic response to emergence is becoming a key issue in manufacturing field because traditional manufacturing control systems are built upon rigid control architectures, which cannot respond efficiently and effectively to dynamic change. In these circumstances, the current challenge is to develop manufacturing control systems that exhibit intelligence, robustness and adaptation to the environment changes and disturbances. The introduction of multi-agent systems and holonic manufacturing systems paradigms addresses these requirements, bringing the advantages of modularity, decentralization, autonomy, scalability and reusability. This paper surveys the literature in manufacturing control systems using distributed artificial intelligence techniques, namely multi-agent systems and holonic manufacturing systems principles. The paper also discusses the reasons for the weak adoption of these approaches by industry and points out the challenges and research opportunities for the future. & 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "pos:1840256_1", "text": "Effective supply chain design calls for robust analytical models and design tools. Previous works in this area are mostly Operation Research oriented without considering manufacturing aspects. Recently, researchers have begun to realize that the decision and integration effort in supply chain design should be driven by the manufactured product, specifically, product characteristics and product life cycle. In addition, decision-making processes should be guided by a comprehensive set of performance metrics. In this paper, we relate product characteristics to supply chain strategy and adopt supply chain operations reference (SCOR) model level I performance metrics as the decision criteria. An integrated analytic hierarchy process (AHP) and preemptive goal programming (PGP) based multi-criteria decision-making methodology is then developed to take into account both qualitative and quantitative factors in supplier selection. While the AHP process matches product characteristics with supplier characteristics (using supplier ratings derived from pairwise comparisons) to qualitatively determine supply chain strategy, PGP mathematically determines the optimal order quantity from the chosen suppliers. Since PGP uses AHP ratings as input, the variations of pairwise comparisons in AHP will influence the final order quantity. Therefore, users of this methodology should put greater emphasis on the AHP progress to ensure the accuracy of supplier ratings. r 2003 Elsevier B.V. All rights reserved.", "title": "" } ]
[ { "docid": "neg:1840256_0", "text": "We report on the implementation of a color-capable sub-pixel resolving optofluidic microscope based on the pixel super-resolution algorithm and sequential RGB illumination, for low-cost on-chip color imaging of biological samples with sub-cellular resolution.", "title": "" }, { "docid": "neg:1840256_1", "text": "Fluorescent carbon nanoparticles or carbon quantum dots (CQDs) are a new class of carbon nanomaterials that have emerged recently and have garnered much interest as potential competitors to conventional semiconductor quantum dots. In addition to their comparable optical properties, CQDs have the desired advantages of low toxicity, environmental friendliness low cost and simple synthetic routes. Moreover, surface passivation and functionalization of CQDs allow for the control of their physicochemical properties. Since their discovery, CQDs have found many applications in the fields of chemical sensing, biosensing, bioimaging, nanomedicine, photocatalysis and electrocatalysis. This article reviews the progress in the research and development of CQDs with an emphasis on their synthesis, functionalization and technical applications along with some discussion on challenges and perspectives in this exciting and promising field.", "title": "" }, { "docid": "neg:1840256_2", "text": "This Working Paper should not be reported as representing the views of the IMF. The views expressed in this Working Paper are those of the author(s) and do not necessarily represent those of the IMF or IMF policy. Working Papers describe research in progress by the author(s) and are published to elicit comments and to further debate. Using a dataset which breaks down FDI flows into primary, secondary and tertiary sector investments and a GMM dynamic approach to address concerns about endogeneity, the paper analyzes various macroeconomic, developmental, and institutional/qualitative determinants of FDI in a sample of emerging market and developed economies. While FDI flows into the primary sector show little dependence on any of these variables, secondary and tertiary sector investments are affected in different ways by countries’ income levels and exchange rate valuation, as well as development indicators such as financial depth and school enrollment, and institutional factors such as judicial independence and labor market flexibility. Finally, we find that the effect of these factors often differs between advanced and emerging economies. JEL Classification Numbers: F21, F23", "title": "" }, { "docid": "neg:1840256_3", "text": "This paper shows how to use modular Marx multilevel converter diode (M3CD) modules to apply unipolar or bipolar high-voltage pulses for pulsed power applications. The M3CD cells allow the assembly of a multilevel converter without needing complex algorithms and parameter measurement to balance the capacitor voltages. This paper also explains how to supply all the modular cells in order to ensure galvanic isolation between control circuits and power circuits. The experimental results for a generator with seven levels, and unipolar and bipolar pulses into resistive, inductive, and capacitive loads are presented.", "title": "" }, { "docid": "neg:1840256_4", "text": "In this paper, we make contact with the field of nonparametric statistics and present a development and generalization of tools and results for use in image processing and reconstruction. 
In particular, we adapt and expand kernel regression ideas for use in image denoising, upscaling, interpolation, fusion, and more. Furthermore, we establish key relationships with some popular existing methods and show how several of these algorithms, including the recently popularized bilateral filter, are special cases of the proposed framework. The resulting algorithms and analyses are amply illustrated with practical examples", "title": "" }, { "docid": "neg:1840256_5", "text": "UNLABELLED\nThe aim of this prospective study was to assess the predictive value of (18)F-FDG PET/CT imaging for pathologic response to neoadjuvant chemotherapy (NACT) and outcome in inflammatory breast cancer (IBC) patients.\n\n\nMETHODS\nTwenty-three consecutive patients (51 y ± 12.7) with newly diagnosed IBC, assessed by PET/CT at baseline (PET1), after the third course of NACT (PET2), and before surgery (PET3), were included. The patients were divided into 2 groups according to pathologic response as assessed by the Sataloff classification: pathologic complete response for complete responders (stage TA and NA or NB) and non-pathologic complete response for noncomplete responders (not stage A for tumor or not stage NA or NB for lymph nodes). In addition to maximum standardized uptake value (SUVmax) measurements, a global breast metabolic tumor volume (MTV) was delineated using a semiautomatic segmentation method. Changes in SUVmax and MTV between PET1 and PET2 (ΔSUV1-2; ΔMTV1-2) and PET1 and PET3 (ΔSUV1-3; ΔMTV1-3) were measured.\n\n\nRESULTS\nMean SUVmax on PET1, PET2, and PET3 did not statistically differ between the 2 pathologic response groups. On receiver-operating-characteristic analysis, a 72% cutoff for ΔSUV1-3 provided the best performance to predict residual disease, with sensitivity, specificity, and accuracy of 61%, 80%, and 65%, respectively. On univariate analysis, the 72% cutoff for ΔSUV1-3 was the best predictor of distant metastasis-free survival (P = 0.05). On multivariate analysis, the 72% cutoff for ΔSUV1-3 was an independent predictor of distant metastasis-free survival (P = 0.01).\n\n\nCONCLUSION\nOur results emphasize the good predictive value of change in SUVmax between baseline and before surgery to assess pathologic response and survival in IBC patients undergoing NACT.", "title": "" }, { "docid": "neg:1840256_6", "text": "Although human perception appears to be automatic and unconscious, complex sensory mechanisms exist that form the preattentive component of understanding and lead to awareness. Considerable research has been carried out into these preattentive mechanisms and computational models have been developed for similar problems in the fields of computer vision and speech analysis. The focus here is to explore aural and visual information in video streams for modeling attention and detecting salient events. The separate aural and visual modules may convey explicit, complementary or mutually exclusive information around the detected audiovisual events. Based on recent studies on perceptual and computational attention modeling, we formulate measures of attention using features of saliency for the audiovisual stream. Audio saliency is captured by signal modulations and related multifrequency band features, extracted through nonlinear operators and energy tracking. Visual saliency is measured by means of a spatiotemporal attention model driven by various feature cues (intensity, color, motion). 
Features from both modules mapped to one-dimensional, time-varying saliency curves, from which statistics of salient segments can be extracted and important audio or visual events can be detected through adaptive, threshold-based mechanisms. Audio and video curves are integrated in a single attention curve, where events may be enhanced, suppressed or vanished. Salient events from the audiovisual curve are detected through geometrical features such as local extrema, sharp transitions and level sets. The potential of inter-module fusion and audiovisual event detection is demonstrated in applications such as video key-frame selection, video skimming and video annotation.", "title": "" }, { "docid": "neg:1840256_7", "text": "The transcription factor, nuclear factor erythroid 2 p45-related factor 2 (Nrf2), acts as a sensor of oxidative or electrophilic stresses and plays a pivotal role in redox homeostasis. Oxidative or electrophilic agents cause a conformational change in the Nrf2 inhibitory protein Keap1 inducing the nuclear translocation of the transcription factor which, through its binding to the antioxidant/electrophilic response element (ARE/EpRE), regulates the expression of antioxidant and detoxifying genes such as heme oxygenase 1 (HO-1). Nrf2 and HO-1 are frequently upregulated in different types of tumours and correlate with tumour progression, aggressiveness, resistance to therapy, and poor prognosis. This review focuses on the Nrf2/HO-1 stress response mechanism as a promising target for anticancer treatment which is able to overcome resistance to therapies.", "title": "" }, { "docid": "neg:1840256_8", "text": "Microinverters are module-level power electronic (MLPE) systems that are expected to have a service life more than 25 years. The general practice for providing assurance in long-term reliability under humid climatic conditions is to subject the microinverters to ‘damp heat test’ at 85°C/85%RH for 1000hrs as recommended in lEC 61215 standard. However, there is limited understanding on the correlation between the said ‘damp heat’ test and field conditions for microinverters. In this paper, a physics-of-failure (PoF)-based approach is used to correlate damp heat test to field conditions. Results of the PoF approach indicates that even 3000hrs at 85°C/85%RH may not be sufficient to guarantee 25-years' service life in certain places in the world. Furthermore, we also demonstrate that use of Miami, FL weathering data as benchmark for defining damp heat test durations will not be sufficient to guarantee 25 years' service life. Finally, when tests were conducted at 85°C/85%RH for more than 3000hrs, it was found that the PV connectors are likely to fail before the actual power electronics could fail.", "title": "" }, { "docid": "neg:1840256_9", "text": "Mass deployment of RF identification (RFID) is hindered by its cost per tag. The main cost comes from the application-specific integrated circuit (ASIC) chip set in a tag. A chipless tag costs less than a cent, and these have the potential for mass deployment for low-cost, item-level tagging as the replacement technology for optical barcodes. Chipless RFID tags can be directly printed on paper or plastic packets just like barcodes. They are highly useful for automatic identification and authentication, supply-chain automation, and medical applications. 
Among their potential industrial applications are authenticating of polymer bank notes; scanning of credit cards, library cards, and the like; tracking of inventory in retail settings; and identification of pathology and other medical test samples.", "title": "" }, { "docid": "neg:1840256_10", "text": "A modified method for better superpixel generation based on simple linear iterative clustering (SLIC) is presented and named BSLIC in this paper. By initializing cluster centers in hexagon distribution and performing k-means clustering in a limited region, the generated superpixels are shaped into regular and compact hexagons. The additional cluster centers are initialized as edge pixels to improve boundary adherence, which is further promoted by incorporating the boundary term into the distance calculation of the k-means clustering. Berkeley Segmentation Dataset BSDS500 is used to qualitatively and quantitatively evaluate the proposed BSLIC method. Experimental results show that BSLIC achieves an excellent compromise between boundary adherence and regularity of size and shape. In comparison with SLIC, the boundary adherence of BSLIC is increased by at most 12.43% for boundary recall and 3.51% for under segmentation error.", "title": "" }, { "docid": "neg:1840256_11", "text": "Large, relational factor graphs with structure defined by first-order logic or other languages give rise to notoriously difficult inference problems. Because unrolling the structure necessary to represent distributions over all hypotheses has exponential blow-up, solutions are often derived from MCMC. However, because of limitations in the design and parameterization of the jump function, these sampling-based methods suffer from local minima—the system must transition through lower-scoring configurations before arriving at a better MAP solution. This paper presents a new method of explicitly selecting fruitful downward jumps by leveraging reinforcement learning (RL) to model delayed reward with a log-linear function approximation of residual future score improvement. Our method provides dramatic empirical success, producing new state-of-the-art results on a complex joint model of ontology alignment, with a 48% reduction in error over state-of-the-art in that domain.", "title": "" }, { "docid": "neg:1840256_12", "text": "Nowadays, automatic multidocument text summarization systems can successfully retrieve the summary sentences from the input documents. But, it has many limitations such as inaccurate extraction to essential sentences, low coverage, poor coherence among the sentences, and redundancy. This paper introduces a new concept of timestamp approach with Naïve Bayesian Classification approach for multidocument text summarization. The timestamp provides the summary an ordered look, which achieves the coherent looking summary. It extracts the more relevant information from the multiple documents. Here, scoring strategy is also used to calculate the score for the words to obtain the word frequency. The higher linguistic quality is estimated in terms of readability and comprehensibility. In order to show the efficiency of the proposed method, this paper presents the comparison between the proposed methods with the existing MEAD algorithm. The timestamp procedure is also applied on the MEAD algorithm and the results are examined with the proposed method. The results show that the proposed method results in lesser time than the existing MEAD algorithm to execute the summarization process. 
Moreover, the proposed method results in better precision, recall, and F-score than the existing clustering with lexical chaining approach.", "title": "" }, { "docid": "neg:1840256_13", "text": "Pinterest is a visual discovery tool for collecting and organizing content on the Web with over 70 million users. Users “pin” images, videos, articles, products, and other objects they find on the Web, and organize them into boards by topic. Other users can repin these and also follow other users or boards. Each user organizes things differently, and this produces a vast amount of human-curated content. For example, someone looking to decorate their home might pin many images of furniture that fits their taste. These curated collections produce a large number of associations between pins, and we investigate how to leverage these associations to surface personalized content to users. Little work has been done on the Pinterest network before due to lack of availability of data. We first performed an analysis on a representative sample of the Pinterest network. After analyzing the network, we created recommendation systems, suggesting pins that users would be likely to repin or like based on their previous interactions on Pinterest. We created recommendation systems using four approaches: a baseline recommendation system using the power law distribution of the images; a content-based filtering algorithm; and two collaborative filtering algorithms, one based on one-mode projection of a bipartite graph, and the second using a label propagation approach.", "title": "" }, { "docid": "neg:1840256_14", "text": "This article serves as a quick reference for respiratory alkalosis. Guidelines for analysis and causes, signs, and a stepwise approach are presented.", "title": "" }, { "docid": "neg:1840256_15", "text": "Broadcasting is a common operation in a network to resolve many issues. In a mobile ad hoc network (MANET) in particular, due to host mobility, such operations are expected to be executed more frequently (such as finding a route to a particular host, paging a particular host, and sending an alarm signal). Because radio signals are likely to overlap with others in a geographical area, a straightforward broadcasting by flooding is usually very costly and will result in serious redundancy, contention, and collision, to which we call the broadcast storm problem. In this paper, we identify this problem by showing how serious it is through analyses and simulations. We propose several schemes to reduce redundant rebroadcasts and differentiate timing of rebroadcasts to alleviate this problem. Simulation results are presented, which show different levels of improvement over the basic flooding approach.", "title": "" }, { "docid": "neg:1840256_16", "text": "The modernization of the US electric power infrastructure, especially in lieu of its aging, overstressed networks; shifts in social, energy and environmental policies, and also new vulnerabilities, is a national concern. Our system are required to be more adaptive and secure more than every before. Consumers are also demanding increased power quality and reliability of supply and delivery. As such, power industries, government and national laboratories and consortia have developed increased interest in what is now called the Smart Grid of the future. 
The paper outlines Smart Grid intelligent functions that advance interactions of agents such as telecommunication, control, and optimization to achieve adaptability, self-healing, efficiency and reliability of power systems. The author also presents a special case for the development of Dynamic Stochastic Optimal Power Flow (DSOPF) technology as a tool needed in Smart Grid design. The integration of DSOPF to achieve the design goals with advanced DMS capabilities are discussed herein. This reference paper also outlines research focus for developing next generation of advance tools for efficient and flexible power systems operation and control.", "title": "" }, { "docid": "neg:1840256_17", "text": "Image guided filtering has been widely used in many image processing applications. However, it is a local filtering method and has limited propagation ability. In this paper, we propose a new image filtering method: nonlocal image guided averaging (NLGA). Derived from a nonlocal linear model, the proposed method can utilize the nonlocal similarity of the guidance image, so that it can propagate nonlocal information reliably. Consequently, NLGA can obtain a sharper filtering results in the edge regions and more smooth results in the smooth regions. It shows superiority over image guided filtering in different applications, such as image dehazing, depth map super-resolution and image denoising.", "title": "" }, { "docid": "neg:1840256_18", "text": "Dengue infection is a major cause of morbidity and mortality in Malaysia. To date, much research on dengue infection conducted in Malaysia have been published. One hundred and sixty six articles related to dengue in Malaysia were found from a search through a database dedicated to indexing all original data relevant to medicine published between the years 2000-2013. Ninety articles with clinical relevance and future research implications were selected and reviewed. These papers showed evidence of an exponential increase in the disease epidemic and a varying pattern of prevalent dengue serotypes at different times. The early febrile phase of dengue infection consist of an undifferentiated fever. Clinical suspicion and ability to identify patients at risk of severe dengue infection is important. Treatment of dengue infection involves judicious use of volume expander and supportive care. Potential future research areas are discussed to narrow our current knowledge gaps on dengue infection.", "title": "" }, { "docid": "neg:1840256_19", "text": "Given the demand for authentic personal interactions over social media, it is unclear how much firms should actively manage their social media presence. We study this question empirically in a healthcare setting. We show empirically that active social media management drives more user-generated content. However, we find that this is due to an increase in incremental user postings from an organization’s employees rather than from its clients. This result holds when we explore exogenous variation in social media policies, employees and clients that are explained by medical marketing laws, medical malpractice laws and distortions in Medicare incentives. Further examination suggests that content being generated mainly by employees can be avoided if a firm’s postings are entirely client-focused. However, empirically the majority of firm postings seem not to be specifically targeted to clients’ interests, instead highlighting more general observations or achievements of the firm itself. 
We show that untargeted postings like this provoke activity by employees rather than clients. This may not be a bad thing, as employee-generated content may help with employee motivation, recruitment or retention, but it does suggest that social media should not be funded or managed exclusively as a marketing function of the firm. ∗Economics Department, University of Virginia, Charlottesville, VA and RAND Corporation †MIT Sloan School of Management, MIT, Cambridge, MA and NBER ‡All errors are our own.", "title": "" } ]
1840257
Estimation accuracy of a vector-controlled frequency converter used in the determination of the pump system operating state
[ { "docid": "pos:1840257_0", "text": "rotor field orientation stator field orientation stator model rotor model MRAS, observers, Kalman filter parasitic properties field angle estimation Abstract — Controlled induction motor drives without mechanical speed sensors at the motor shaft have the attractions of low cost and high reliability. To replace the sensor, the information on the rotor speed is extracted from measured stator voltages and currents at the motor terminals. Vector controlled drives require estimating the magnitude and spatial orientation of the fundamental magnetic flux waves in the stator or in the rotor. Open loop estimators or closed loop observers are used for this purpose. They differ with respect to accuracy, robustness, and sensitivity against model parameter variations. Dynamic performance and steady-state speed accuracy in the low speed range can be achieved by exploiting parasitic effects of the machine. The overview in this paper uses signal flow graphs of complex space vector quantities to provide an insightful description of the systems used in sensorless control of induction motors.", "title": "" }, { "docid": "pos:1840257_1", "text": "The basic evolution of direct torque control from other drive types is explained. Qualitative comparisons with other drives are included. The basic concepts behind direct torque control are clarified. An explanation of direct self-control and the field orientation concepts implemented in the adaptive motor model block is presented. The reliance of the control method on fast processing techniques is stressed. The theoretical foundations for the control concept are provided in summary format. Information on the ancillary control blocks outside the basic direct torque control is given. The implementation of special functions directly related to the control approach is described. Finally, performance data from an actual system is presented.", "title": "" } ]
[ { "docid": "neg:1840257_0", "text": "Online personal health record (PHR) enables patients to manage their own medical records in a centralized way, which greatly facilitates the storage, access and sharing of personal health data. With the emergence of cloud computing, it is attractive for the PHR service providers to shift their PHR applications and storage into the cloud, in order to enjoy the elastic resources and reduce the operational cost. However, by storing PHRs in the cloud, the patients lose physical control to their personal health data, which makes it necessary for each patient to encrypt her PHR data before uploading to the cloud servers. Under encryption, it is challenging to achieve fine-grained access control to PHR data in a scalable and efficient way. For each patient, the PHR data should be encrypted so that it is scalable with the number of users having access. Also, since there are multiple owners (patients) in a PHR system and every owner would encrypt her PHR files using a different set of cryptographic keys, it is important to reduce the key distribution complexity in such multi-owner settings. Existing cryptographic enforced access control schemes are mostly designed for the single-owner scenarios. In this paper, we propose a novel framework for access control to PHRs within cloud computing environment. To enable fine-grained and scalable access control for PHRs, we leverage attribute based encryption (ABE) techniques to encrypt each patients’ PHR data. To reduce the key distribution complexity, we divide the system into multiple security domains, where each domain manages only a subset of the users. In this way, each patient has full control over her own privacy, and the key management complexity is reduced dramatically. Our proposed scheme is also flexible, in that it supports efficient and on-demand revocation of user access rights, and break-glass access under emergency scenarios.", "title": "" }, { "docid": "neg:1840257_1", "text": "This paper reports our recent finding that a laser that is radiated on a thin light-absorbing elastic medium attached on the skin can elicit a tactile sensation of mechanical tap. Laser radiation to the elastic medium creates inner elastic waves on the basis of thermoelastic effects, which subsequently move the medium and stimulate the skin. We characterize the associated stimulus by measuring its physical properties. In addition, the perceptual identity of the stimulus is confirmed by comparing it to mechanical and electrical stimuli by means of perceptual spaces. All evidence claims that indirect laser radiation conveys a sensation of short mechanical tap with little individual difference. To the best of our knowledge, this is the first study that discovers the possibility of using indirect laser radiation for mid-air tactile rendering.", "title": "" }, { "docid": "neg:1840257_2", "text": "Reading is a hobby to open the knowledge windows. Besides, it can provide the inspiration and spirit to face this life. By this way, concomitant with the technology development, many companies serve the e-book or book in soft file. The system of this book of course will be much easier. No worry to forget bringing the statistics and chemometrics for analytical chemistry book. You can open the device and get the book by on-line.", "title": "" }, { "docid": "neg:1840257_3", "text": "Correctness of SQL queries is usually tested by executing the queries on one or more datasets. 
Erroneous queries are often the results of small changes or mutations of the correct query. A mutation Q′ of a query Q is killed by a dataset D if Q(D) ≠ Q′(D). Earlier work on the XData system showed how to generate datasets that kill all mutations in a class of mutations that included join type and comparison operation mutations. In this paper, we extend the XData data generation techniques to handle a wider variety of SQL queries and a much larger class of mutations. We have also built a system for grading SQL queries using the datasets generated by XData. We present a study of the effectiveness of the datasets generated by the extended XData approach, using a variety of queries including queries submitted by students as part of a database course. We show that the XData datasets outperform predefined datasets as well as manual grading done earlier by teaching assistants, while also avoiding the drudgery of manual correction. Thus, we believe that our techniques will be of great value to database course instructors and TAs, particularly to those of MOOCs. It will also be valuable to database application developers and testers for testing SQL queries.\", \"title\": \"\" }, { \"docid\": \"neg:1840257_4\", \"text\": \"A smart phone is a handheld device that combines the functionality of a cellphone, a personal digital assistant (PDA) and other information appliances such as a music player. These devices can however be used in a crime and would have to be quickly analysed for evidence. This data is collected using either a forensic tool which resides on a PC or specialised hardware. This paper proposes the use of an on-phone forensic tool to collect the contents of the device and store it on removable storage. This approach requires less equipment and can retrieve the volatile information that resides on the phone such as running processes. The paper discusses the Symbian operating system, the evidence that is stored on the device and contrasts the approach with that followed by other tools.\", \"title\": \"\" }, { \"docid\": \"neg:1840257_5\", \"text\": \"Blind image deconvolution: theory and applications. Images are ubiquitous and indispensable in science and everyday life. Mirroring the abilities of our own human visual system, it is natural to display observations of the world in graphical form. Images are obtained in areas ranging from everyday photography to astronomy, remote sensing, medical imaging, and microscopy. In each case, there is an underlying object or scene we wish to observe; the original or true image is the ideal representation of the observed scene. Yet the observation process is never perfect: there is uncertainty in the measurements, occurring as blur, noise, and other degradations in the recorded images. Digital image restoration aims to recover an estimate of the original image from the degraded observations. The key to being able to solve this ill-posed inverse problem is proper incorporation of prior knowledge about the original image into the restoration process. Classical image restoration seeks an estimate of the true image assuming the blur is known. In contrast, blind image restoration tackles the much more difficult, but realistic, problem where the degradation is unknown. 
In general, the degradation is nonlinear (including, for example, saturation and quantization) and spatially varying (non uniform motion, imperfect optics); however, for most of the work, it is assumed that the observed image is the output of a Linear Spatially Invariant (LSI) system to which noise is added. Therefore it becomes a Blind Deconvolution (BD) problem, with the unknown blur represented as a Point Spread Function (PSF). Classical restoration has matured since its inception, in the context of space exploration in the 1960s, and numerous techniques can be found in the literature (for recent reviews see [1, 2]). These differ primarily in the prior information about the image they include to perform the restoration task. The earliest algorithms to tackle the BD problem appeared as long ago as the mid-1970s [3, 4], and attempted to identify known patterns in the blur; a small but dedicated effort followed through the late 1980s (see for instance [5, 6, 7, 8, 9]), and a resurgence was seen in the 1990s (see the earlier reviews in [10, 11]). Since then, the area has been extensively explored by the signal processing , astronomical, and optics communities. Many of the BD algorithms have their roots in estimation theory, linear algebra, and numerical analysis. An important question …", "title": "" }, { "docid": "neg:1840257_6", "text": "In this paper, we consider the problem of approximating the densest subgraph in the dynamic graph stream model. In this model of computation, the input graph is defined by an arbitrary sequence of edge insertions and deletions and the goal is to analyze properties of the resulting graph given memory that is sub-linear in the size of the stream. We present a single-pass algorithm that returns a (1 + ) approximation of the maximum density with high probability; the algorithm uses O( −2npolylog n) space, processes each stream update in polylog(n) time, and uses poly(n) post-processing time where n is the number of nodes. The space used by our algorithm matches the lower bound of Bahmani et al. (PVLDB 2012) up to a poly-logarithmic factor for constant . The best existing results for this problem were established recently by Bhattacharya et al. (STOC 2015). They presented a (2 + ) approximation algorithm using similar space and another algorithm that both processed each update and maintained a (4 + ) approximation of the current maximum density in polylog(n) time per-update.", "title": "" }, { "docid": "neg:1840257_7", "text": "You are smart to question how different medications interact when used concurrently. Champix, called Chantix in the United States and globally by its generic name varenicline [2], is a prescription medication that can help individuals quit smoking by partially stimulating nicotine receptors in cells throughout the body. Nicorette gum, a type of nicotine replacement therapy (NRT), is also a tool to help smokers quit by providing individuals with the nicotine they crave by delivering the substance in controlled amounts through the lining of the mouth. NRT is available in many other forms including lozenges, patches, inhalers, and nasal sprays. The short answer is that there is disagreement among researchers about whether or not there are negative consequences to chewing nicotine gum while taking varenicline. While some studies suggest no harmful side effects to using them together, others have found that adverse effects from using both at the same time. 
So, what does the current evidence say?", "title": "" }, { "docid": "neg:1840257_8", "text": "This paper presents a modified priority based probe algorithm for deadlock detection and resolution in distributed database systems. The original priority based probe algorithm was presented by Sinha and Natarajan based on work by Chandy, Misra, and Haas. Various examples are used to show that the original priority based algorithm either fails to detect deadlocks or reports deadlocks which do not exist in many situations. A modified algorithm which eliminates these problems is proposed. This algorithm has been tested through simulation and appears to be error free. Finally, the performance of the modified algorithm is briefly discussed.", "title": "" }, { "docid": "neg:1840257_9", "text": "Common video systems for laparoscopy provide the surgeon a two-dimensional image (2D), where information on spatial depth can be derived only from secondary spatial depth cues and experience. Although the advantage of stereoscopy for surgical task efficiency has been clearly shown, several attempts to introduce three-dimensional (3D) video systems into clinical routine have failed. The aim of this study is to evaluate users’ performances in standardised surgical phantom model tasks using 3D HD visualisation compared with 2D HD regarding precision and working speed. This comparative study uses a 3D HD video system consisting of a dual-channel laparoscope, a stereoscopic camera, a camera controller with two separate outputs and a wavelength multiplex stereoscopic monitor. Each of 20 medical students and 10 laparoscopically experienced surgeons (more than 100 laparoscopic cholecystectomies each) pre-selected in a stereo vision test were asked to perform one task to familiarise themselves with the system and subsequently a set of five standardised tasks encountered in typical surgical procedures. The tasks were performed under either 3D or 2D conditions at random choice and subsequently repeated under the other vision condition. Predefined errors were counted, and time needed was measured. In four of the five tasks the study participants made fewer mistakes in 3D than in 2D vision. In four of the tasks they needed significantly more time in the 2D mode. Both the student group and the surgeon group showed similarly improved performance, while the surgeon group additionally saved more time on difficult tasks. This study shows that 3D HD using a state-of-the-art 3D monitor permits superior task efficiency, even as compared with the latest 2D HD video systems.", "title": "" }, { "docid": "neg:1840257_10", "text": "Extractive summarization typically uses sentences as summarization units. In contrast, joint compression and summarization can use smaller units such as words and phrases, resulting in summaries containing more information. The goal of compressive summarization is to find a subset of words that maximize the total score of concepts and cutting dependency arcs under the grammar constraints and summary length constraint. We propose an efficient decoding algorithm for fast compressive summarization using graph cuts. Our approach first relaxes the length constraint using Lagrangian relaxation. Then we propose to bound the relaxed objective function by the supermodular binary quadratic programming problem, which can be solved efficiently using graph max-flow/min-cut. Since finding the tightest lower bound suffers from local optimality, we use convex relaxation for initialization. 
Experimental results on TAC2008 dataset demonstrate our method achieves competitive ROUGE score and has good readability, while is much faster than the integer linear programming (ILP) method.", "title": "" }, { "docid": "neg:1840257_11", "text": "The SIMC method for PID controller tuning (Skogestad 2003) has already found widespread industrial usage in Norway. This chapter gives an updated overview of the method, mainly from a user’s point of view. The basis for the SIMC method is a first-order plus time delay model, and we present a new effective method to obtain the model from a simple closed-loop experiment. An important advantage of the SIMC rule is that there is a single tuning parameter (τc) that gives a good balance between the PID parameters (Kc,τI ,τD), and which can be adjusted to get a desired trade-off between performance (“tight” control) and robustness (“smooth” control). Compared to the original paper of Skogestad (2003), the choice of the tuning parameter τc is discussed in more detail, and lower and upper limits are presented for tight and smooth tuning, respectively. Finally, the optimality of the SIMC PI rules is studied by comparing the performance (IAE) versus robustness (Ms) trade-off with the Pareto-optimal curve. The difference is small which leads to the conclusion that the SIMC rules are close to optimal. The only exception is for pure time delay processes, so we introduce the “improved” SIMC rule to improve the performance for this case. Chapter for PID book (planned: Springer, 2011, Editor: R. Vilanova) This version: September 7, 2011 Sigurd Skogestad Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU), Trondheim, e-mail: skoge@ntnu.no Chriss Grimholt Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU), Trondheim", "title": "" }, { "docid": "neg:1840257_12", "text": "The burden of entry into mobile crowdsensing (MCS) is prohibitively high for human-subject researchers who lack a technical orientation. As a result, the benefits of MCS remain beyond the reach of research communities (e.g., psychologists) whose expertise in the study of human behavior might advance applications and understanding of MCS systems. This paper presents Sensus, a new MCS system for human-subject studies that bridges the gap between human-subject researchers and MCS methods. Sensus alleviates technical burdens with on-device, GUI-based design of sensing plans, simple and efficient distribution of sensing plans to study participants, and uniform participant experience across iOS and Android devices. Sensing plans support many hardware and software sensors, automatic deployment of sensor-triggered surveys, and double-blind assignment of participants within randomized controlled trials. Sensus offers these features to study designers without requiring knowledge of markup and programming languages. We demonstrate the feasibility of using Sensus within two human-subject studies, one in psychology and one in engineering. Feedback from non-technical users indicates that Sensus is an effective and low-burden system for MCS-based data collection and analysis.", "title": "" }, { "docid": "neg:1840257_13", "text": "This research seeks to validate a comprehensive model of quality in the context of e-business systems. It also extends the UTAUT model with e-quality, trust, and satisfaction constructs. 
The proposed model brings together extant research on systems and data quality, trust, and satisfaction and provides an important cluster of antecedents to eventual technology acceptance via constructs of behavioral intention to use and actual system usage.", "title": "" }, { "docid": "neg:1840257_14", "text": "For five years, we collected annual snapshots of file-system metadata from over 60,000 Windows PC file systems in a large corporation. In this article, we use these snapshots to study temporal changes in file size, file age, file-type frequency, directory size, namespace structure, file-system population, storage capacity and consumption, and degree of file modification. We present a generative model that explains the namespace structure and the distribution of directory sizes. We find significant temporal trends relating to the popularity of certain file types, the origin of file content, the way the namespace is used, and the degree of variation among file systems, as well as more pedestrian changes in size and capacities. We give examples of consequent lessons for designers of file systems and related software.", "title": "" }, { "docid": "neg:1840257_15", "text": "Software failures due to configuration errors are commonplace as computer systems continue to grow larger and more complex. Troubleshooting these configuration errors is a major administration cost, especially in server clusters where problems often go undetected without user interference. This paper presents CODE–a tool that automatically detects software configuration errors. Our approach is based on identifying invariant configuration access rules that predict what access events follow what contexts. It requires no source code, application-specific semantics, or heavyweight program analysis. Using these rules, CODE can sift through a voluminous number of events and detect deviant program executions. This is in contrast to previous approaches that focus on only diagnosis. In our experiments, CODE successfully detected a real configuration error in one of our deployment machines, in addition to 20 user-reported errors that we reproduced in our test environment. When analyzing month-long event logs from both user desktops and production servers, CODE yielded a low false positive rate. The efficiency ofCODE makes it feasible to be deployed as a practical management tool with low overhead.", "title": "" }, { "docid": "neg:1840257_16", "text": "In this paper, we present our work of humor recognition on Twitter, which will facilitate affect and sentimental analysis in the social network. The central question of what makes a tweet (Twitter post) humorous drives us to design humor-related features, which are derived from influential humor theories, linguistic norms, and affective dimensions. Using machine learning techniques, we are able to recognize humorous tweets with high accuracy and F-measure. More importantly, we single out features that contribute to distinguishing non-humorous tweets from humorous tweets, and humorous tweets from other short humorous texts (non-tweets). This proves that humorous tweets possess discernible characteristics that are neither found in plain tweets nor in humorous non-tweets. 
We believe our novel findings will inform and inspire the burgeoning field of computational humor research in the social media.", "title": "" }, { "docid": "neg:1840257_17", "text": "Plenoptic cameras, constructed with internal microlens arrays, focus those microlenses at infinity in order to sample the 4D radiance directly at the microlenses. The consequent assumption is that each microlens image is completely defocused with respect to to the image created by the main camera lens and the outside object. As a result, only a single pixel in the final image can be rendered from it, resulting in disappointingly low resolution. In this paper, we present a new approach to lightfield capture and image rendering that interprets the microlens array as an imaging system focused on the focal plane of the main camera lens. This approach captures a lightfield with significantly higher spatial resolution than the traditional approach, allowing us to render high resolution images that meet the expectations of modern photographers. Although the new approach samples the lightfield with reduced angular density, analysis and experimental results demonstrate that there is sufficient parallax to completely support lightfield manipulation algorithms such as refocusing and novel views", "title": "" }, { "docid": "neg:1840257_18", "text": "Super-coiled polymer (SCP) artificial muscles have many attractive properties, such as high energy density, large contractions, and good dynamic range. To fully utilize them for robotic applications, it is necessary to determine how to scale them up effectively. Bundling of SCP actuators, as though they are individual threads in woven textiles, can demonstrate the versatility of SCP actuators and artificial muscles in general. However, this versatility comes with a need to understand how different bundling techniques can be achieved with these actuators and how they may trade off in performance. This letter presents the first quantitative comparison, analysis, and modeling of bundled SCP actuators. By exploiting weaving and braiding techniques, three new types of bundled SCP actuators are created: woven bundles, two-dimensional, and three-dimensional braided bundles. The bundle performance is adjustable by employing different numbers of individual actuators. Experiments are conducted to characterize and compare the force, strain, and speed of different bundles, and a linear model is proposed to predict their performance. This work lays the foundation for model-based SCP-actuated textiles, and physically scaling robots that employ SCP actuators as the driving mechanism.", "title": "" } ]
1840258
Neural Variational Inference for Text Processing
[ { "docid": "pos:1840258_0", "text": "This paper presents a system which learns to answer questions on a broad range of topics from a knowledge base using few handcrafted features. Our model learns low-dimensional embeddings of words and knowledge base constituents; these representations are used to score natural language questions against candidate answers. Training our system using pairs of questions and structured representations of their answers, and pairs of question paraphrases, yields competitive results on a recent benchmark of the literature.", "title": "" }, { "docid": "pos:1840258_1", "text": "We develop a semantic parsing framework based on semantic similarity for open domain question answering (QA). We focus on single-relation questions and decompose each question into an entity mention and a relation pattern. Using convolutional neural network models, we measure the similarity of entity mentions with entities in the knowledge base (KB) and the similarity of relation patterns and relations in the KB. We score relational triples in the KB using these measures and select the top scoring relational triple to answer the question. When evaluated on an open-domain QA task, our method achieves higher precision across different recall points compared to the previous approach, and can improve F1 by 7 points.", "title": "" }, { "docid": "pos:1840258_2", "text": "We describe a new model for learning meaningful representations of text documents from an unlabeled collection of documents. This model is inspired by the recently proposed Replicated Softmax, an undirected graphical model of word counts that was shown to learn a better generative model and more meaningful document representations. Specifically, we take inspiration from the conditional mean-field recursive equations of the Replicated Softmax in order to define a neural network architecture that estimates the probability of observing a new word in a given document given the previously observed words. This paradigm also allows us to replace the expensive softmax distribution over words with a hierarchical distribution over paths in a binary tree of words. The end result is a model whose training complexity scales logarithmically with the vocabulary size instead of linearly as in the Replicated Softmax. Our experiments show that our model is competitive both as a generative model of documents and as a document representation learning algorithm.", "title": "" } ]
[ { "docid": "neg:1840258_0", "text": "We report a case of planned complex suicide (PCS) by a young man who had previously tried to commit suicide twice. He was found dead hanging by his neck, with a shot in his head. The investigation of the scene, the method employed, and previous attempts at suicide altogether pointed toward a suicidal etiology. The main difference between PCS and those cases defined in the medicolegal literature as combined suicides lies in the complex mechanism used by the victim as a protection against a failure in one of the mechanisms.", "title": "" }, { "docid": "neg:1840258_1", "text": "Image steganalysis is to discriminate innocent images and those suspected images with hidden messages. This task is very challenging for modern adaptive steganography, since modifications due to message hiding are extremely small. Recent studies show that Convolutional Neural Networks (CNN) have demonstrated superior performances than traditional steganalytic methods. Following this idea, we propose a novel CNN model for image steganalysis based on residual learning. The proposed Deep Residual learning based Network (DRN) shows two attractive properties than existing CNN based methods. First, the model usually contains a large number of network layers, which proves to be effective to capture the complex statistics of digital images. Second, the residual learning in DRN preserves the stego signal coming from secret messages, which is extremely beneficial for the discrimination of cover images and stego images. Comprehensive experiments on standard dataset show that the DRN model can detect the state of arts steganographic algorithms at a high accuracy. It also outperforms the classical rich model method and several recently proposed CNN based methods.", "title": "" }, { "docid": "neg:1840258_2", "text": "Encrypted data search allows cloud to offer fundamental information retrieval service to its users in a privacy-preserving way. In most existing schemes, search result is returned by a semi-trusted server and usually considered authentic. However, in practice, the server may malfunction or even be malicious itself. Therefore, users need a result verification mechanism to detect the potential misbehavior in this computation outsourcing model and rebuild their confidence in the whole search process. On the other hand, cloud typically hosts large outsourced data of users in its storage. The verification cost should be efficient enough for practical use, i.e., it only depends on the corresponding search operation, regardless of the file collection size. In this paper, we are among the first to investigate the efficient search result verification problem and propose an encrypted data search scheme that enables users to conduct secure conjunctive keyword search, update the outsourced file collection and verify the authenticity of the search result efficiently. The proposed verification mechanism is efficient and flexible, which can be either delegated to a public trusted authority (TA) or be executed privately by data users. We formally prove the universally composable (UC) security of our scheme. Experimental result shows its practical efficiency even with a large dataset.", "title": "" }, { "docid": "neg:1840258_3", "text": "The spatial organisation of museums and its influence on the visitor experience has been the subject of numerous studies. 
Previous research, despite reporting some actual behavioural correlates, rarely had the possibility to investigate the cognitive processes of the art viewers. In the museum context, where spatial layout is one of the most powerful curatorial tools available, attention and memory can be measured as a means of establishing whether or not the gallery fulfils its function as a space for contemplating art. In this exploratory experiment, 32 participants split into two groups explored an experimental, non-public exhibition and completed two unanticipated memory tests afterwards. The results show that some spatial characteristics of an exhibition can inhibit the recall of pictures and shift the focus to perceptual salience of the artworks.", "title": "" }, { "docid": "neg:1840258_4", "text": "Fingerprint biometric is one of the most successful biometrics applied in both forensic law enforcement and security applications. Recent developments in fingerprint acquisition technology have resulted in touchless live scan devices that generate 3D representation of fingerprints, and thus can overcome the deformation and smearing problems caused by conventional contact-based acquisition techniques. However, there are yet no 3D full fingerprint databases with their corresponding 2D prints for fingerprint biometric research. This paper presents a 3D fingerprint database we have established in order to investigate the 3D fingerprint biometric comprehensively. It consists of 3D fingerprints as well as their corresponding 2D fingerprints captured by two commercial fingerprint scanners from 150 subjects in Australia. Besides, we have tested the performance of 2D fingerprint verification, 3D fingerprint verification, and 2D to 3D fingerprint verification. The results show that more work is needed to improve the performance of 2D to 3D fingerprint verification. In addition, the database is expected to be released publicly in late 2014.", "title": "" }, { "docid": "neg:1840258_5", "text": "BACKGROUND\nAlthough substantial evidence suggests that stressful life events predispose to the onset of episodes of depression and anxiety, the essential features of these events that are depressogenic and anxiogenic remain uncertain.\n\n\nMETHODS\nHigh contextual threat stressful life events, assessed in 98 592 person-months from 7322 male and female adult twins ascertained from a population-based registry, were blindly rated on the dimensions of humiliation, entrapment, loss, and danger and their categories. Onsets of pure major depression (MD), pure generalized anxiety syndrome (GAS) (defined as generalized anxiety disorder with a 2-week minimum duration), and mixed MD-GAS episodes were examined using logistic regression.\n\n\nRESULTS\nOnsets of pure MD and mixed MD-GAS were predicted by higher ratings of loss and humiliation. Onsets of pure GAS were predicted by higher ratings of loss and danger. High ratings of entrapment predicted only onsets of mixed episodes. The loss categories of death and respondent-initiated separation predicted pure MD but not pure GAS episodes. Events with a combination of humiliation (especially other-initiated separation) and loss were more depressogenic than pure loss events, including death. No sex differences were seen in the prediction of episodes of illness by event categories.\n\n\nCONCLUSIONS\nIn addition to loss, humiliating events that directly devalue an individual in a core role were strongly linked to risk for depressive episodes. 
Event dimensions and categories that predispose to pure MD vs pure GAS episodes can be distinguished with moderate specificity. The event dimensions that preceded mixed MD-GAS episodes were largely the sum of those that preceded pure MD and pure GAS episodes.", "title": "" }, { "docid": "neg:1840258_6", "text": "The rapid growth of the Internet has brought with it an exponential increase in the type and frequency of cyber attacks. Many well-known cybersecurity solutions are in place to counteract these attacks. However, the generation of Big Data over computer networks is rapidly rendering these traditional solutions obsolete. To cater for this problem, corporate research is now focusing on Security Analytics, i.e., the application of Big Data Analytics techniques to cybersecurity. Analytics can assist network managers particularly in the monitoring and surveillance of real-time network streams and real-time detection of both malicious and suspicious (outlying) patterns. Such a behavior is envisioned to encompass and enhance all traditional security techniques. This paper presents a comprehensive survey on the state of the art of Security Analytics, i.e., its description, technology, trends, and tools. It hence aims to convince the reader of the imminent application of analytics as an unparalleled cybersecurity solution in the near future.", "title": "" }, { "docid": "neg:1840258_7", "text": "Research has shown close connections between personality and subjective well-being (SWB), suggesting that personality traits predispose individuals to experience different levels of SWB. Moreover, numerous studies have shown that self-efficacy is related to both personality factors and SWB. Extending previous research, we show that general self-efficacy functionally connects personality factors and two components of SWB (life satisfaction and subjective happiness). Our results demonstrate the mediating role of self-efficacy in linking personality factors and SWB. Consistent with our expectations, the influence of neuroticism, extraversion, openness, and conscientiousness on life satisfaction was mediated by self-efficacy. Furthermore, self-efficacy mediated the influence of openness and conscientiousness, but not that of neuroticism and extraversion, on subjective happiness. Results highlight the importance of cognitive beliefs in functionally linking personality traits and SWB.", "title": "" }, { "docid": "neg:1840258_8", "text": "This thesis is devoted to marker-less 3D human motion tracking in calibrated and synchronized multicamera systems. Pose estimation is based on a 3D model, which is transformed into the image plane and then rendered. Owing to elaborated techniques the tracking of the full body has been achieved in real-time via dynamic optimization or dynamic Bayesian filtering. The objective function of a particle swarm optimization algorithm and the observation model of a particle filter are based on matching between the rendered 3D models in the required poses and image features representing the extracted person. In such an approach the main part of the computational overload is associated with the rendering of 3D models in hypothetical poses as well as determination of value of objective function. Effective methods for rendering of 3D models in real-time with support of OpenGL as well as parallel methods for determining the objective function on the GPU were developed. 
The elaborated solutions permit 3D tracking of full body motion in real-time.", "title": "" }, { "docid": "neg:1840258_9", "text": "A fully automatic document retrieval system operating on the IBM 7094 is described. The system is characterized by the fact that several hundred different methods are available to analyze documents and search requests. This feature is used in the retrieval process by leaving the exact sequence of operations initially unspecified, and adapting the search strategy to the needs of individual users. The system is used not only to simulate an actual operating environment, but also to test the effectiveness of the various available processing methods. Results obtained so far seem to indicate that some combination of analysis procedures can in general be relied upon to retrieve the wanted information. A typical search request is used as an example in the present report to illustrate systems operations and evaluation procedures .", "title": "" }, { "docid": "neg:1840258_10", "text": "High profile attacks such as Stuxnet and the cyber attack on the Ukrainian power grid have increased research in Industrial Control System (ICS) and Supervisory Control and Data Acquisition (SCADA) network security. However, due to the sensitive nature of these networks, there is little publicly available data for researchers to evaluate the effectiveness of the proposed solution. The lack of representative data sets makes evaluation and independent validation of emerging security solutions difficult and slows down progress towards effective and reusable solutions. This paper presents our work to generate representative labeled data sets for SCADA networks that security researcher can use freely. The data sets include packet captures including both malicious and non-malicious Modbus traffic and accompanying CSV files that contain labels to provide the ground truth for supervised machine learning. To provide representative data at the network level, the data sets were generated in a SCADA sandbox, where electrical network simulators were used to introduce realism in the physical component. Also, real attack tools, some of them custom built for Modbus networks, were used to generate the malicious traffic. Even though they do not fully replicate a production network, these data sets represent a good baseline to validate detection tools for SCADA systems.", "title": "" }, { "docid": "neg:1840258_11", "text": "We present Query-Regression Network (QRN), a variant of Recurrent Neural Network (RNN) that is suitable for end-to-end machine comprehension. While previous work [18, 22] largely relied on external memory and global softmax attention mechanism, QRN is a single recurrent unit with internal memory and local sigmoid attention. Unlike most RNN-based models, QRN is able to effectively handle long-term dependencies and is highly parallelizable. In our experiments we show that QRN obtains the state-of-the-art result in end-to-end bAbI QA tasks [21].", "title": "" }, { "docid": "neg:1840258_12", "text": "Searle (1989) posits a set of adequacy criteria for any account of the meaning and use of performative verbs, such as order or promise. Central among them are: (a) performative utterances are performances of the act named by the performative verb; (b) performative utterances are self-verifying; (c) performative utterances achieve (a) and (b) in virtue of their literal meaning. 
He then argues that the fundamental problem with assertoric accounts of performatives is that they fail (b), and hence (a), because being committed to having an intention does not guarantee having that intention. Relying on a uniform meaning for verbs on their reportative and performative uses, we propose an assertoric analysis of performative utterances that does not require an actual intention for deriving (b), and hence can meet (a) and (c). Explicit performative utterances are those whose illocutionary force is made explicit by the verbs appearing in them (Austin 1962): (1) I (hereby) promise you to be there at five. (is a promise) (2) I (hereby) order you to be there at five. (is an order) (3) You are (hereby) ordered to report to jury duty. (is an order) (1)–(3) look and behave syntactically like declarative sentences in every way. Hence there is no grammatical basis for the once popular claim that I promise/ order spells out a ‘performative prefix’ that is silent in all other declaratives. Such an analysis, in any case, leaves unanswered the question of how illocutionary force is related to compositional meaning and, consequently, does not explain how the first person and present tense are special, so that first-person present tense forms can spell out performative prefixes, while others cannot. Minimal variations in person or tense remove the ‘performative effect’: (4) I promised you to be there at five. (is not a promise) (5) He promises to be there at five. (is not a promise) An attractive idea is that utterances of sentences like those in (1)–(3) are asser∗ The names of the authors appear in alphabetical order. 150 Condoravdi & Lauer tions, just like utterances of other declaratives, whose truth is somehow guaranteed. In one form or another, this basic strategy has been pursued by a large number of authors ever since Austin (1962) (Lemmon 1962; Hedenius 1963; Bach & Harnish 1979; Ginet 1979; Bierwisch 1980; Leech 1983; among others). One type of account attributes self-verification to meaning proper. Another type, most prominently exemplified by Bach & Harnish (1979), tries to derive the performative effect by means of an implicature-like inference that the hearer may draw based on the utterance of the explicit performative. Searle’s (1989) Challenge Searle (1989) mounts an argument against analyses of explicit performative utterances as self-verifying assertions. He takes the argument to show that an assertoric account is impossible. Instead, we take it to pose a challenge that can be met, provided one supplies the right semantics for the verbs involved. Searle’s argument is based on the following desiderata he posits for any theory of explicit performatives: (a) performative utterances are performances of the act named by the performative verb; (b) performative utterances are self-guaranteeing; (c) performative utterances achieve (a) and (b) in virtue of their literal meaning, which, in turn, ought to be based on a uniform lexical meaning of the verb across performative and reportative uses. According to Searle’s speech act theory, making a promise requires that the promiser intend to do so, and similarly for other performative verbs (the sincerity condition). It follows that no assertoric account can meet (a-c): An assertion cannot ensure that the speaker has the necessary intention. 
“Such an assertion does indeed commit the speaker to the existence of the intention, but the commitment to having the intention doesn’t guarantee the actual presence of the intention.” Searle (1989: 546) Hence assertoric accounts must fail on (b), and, a forteriori, on (a) and (c).1 Although Searle’s argument is valid, his premise that for truth to be guaranteed the speaker must have a particular intention is questionable. In the following, we give an assertoric account that delivers on (a-c). We aim for an 1 It should be immediately clear that inference-based accounts cannot meet (a-c) above. If the occurrence of the performative effect depends on the hearer drawing an inference, then such sentences could not be self-verifying, for the hearer may well fail to draw the inference. Performative Verbs and Performative Acts 151 account on which the assertion of the explicit performative is the performance of the act named by the performative verb. No hearer inferences are necessary. 1 Reportative and Performative Uses What is the meaning of the word order, then, so that it can have both reportative uses – as in (6) – and performative uses – as in (7)? (6) A ordered B to sign the report. (7) [A to B] I order you to sign the report now. The general strategy in this paper will be to ask what the truth conditions of reportative uses of performative verbs are, and then see what happens if these verbs are put in the first person singular present tense. The reason to start with the reportative uses is that speakers have intuitions about their truth conditions. This is not true for performative uses, because these are always true when uttered, obscuring the truth-conditional content of the declarative sentence.2 An assertion of (6) takes for granted that A presumed to have authority over B and implies that there was a communicative act from A to B. But what kind of communicative act? (7) or, in the right context, (8a-c) would suffice. (8) a. Sign the report now! b. You must sign the report now! c. I want you to sign the report now! What do these sentences have in common? We claim it is this: In the right context they commit A to a particular kind of preference for B signing the report immediately. If B accepts the utterance, he takes on a commitment to act as though he, too, prefers signing the report. If the report is co-present with A and B, he will sign it, if the report is in his office, he will leave to go there immediately, and so on. To comply with an order to p is to act as though one prefers p. One need not actually prefer it, but one has to act as if one did. The authority mentioned above amounts to this acceptance being socially or institutionally mandated. Of course, B has the option to refuse to take on this commitment, in either of two ways: (i) he can deny A’s authority, (ii) while accepting the authority, he can refuse to abide by it, thereby violating the institutional or social mandate. Crucially, in either case, (6) will still be true, as witnessed by the felicity of: 2 Szabolcsi (1982), in one of the earliest proposals for a compositional semantics of performative utterances, already pointed out the importance of reportative uses. 152 Condoravdi & Lauer (9) a. (6), but B refused to do it. b. (6), but B questioned his authority. Not even uptake by the addressee is necessary for order to be appropriate, as seen in (10) and the naturally occurring (11):3 (10) (6), but B did not hear him. 
(11) He ordered Kornilov to desist but either the message failed to reach the general or he ignored it.4 What is necessary is that the speaker expected uptake to happen, arguably a minimal requirement for an act to count as a communicative event. To sum up, all that is needed for (6) to be true and appropriate is that (i) there is a communicative act from A to B which commits A to a preference for B signing the report immediately and (ii) A presumes to have authority over B. The performative effect arises precisely when the utterance itself is a witness for the existential claim in (i). There are two main ingredients in the meaning of order informally outlined above: the notion of a preference, in particular a special kind of preference that guides action, and the notion of a commitment. The next two sections lay some conceptual groundwork before we spell out our analysis in section 4. 2 Representing Preferences To represent preferences that guide action, we need a way to represent preferences of different strength. Kratzer’s (1981) theory of modality is not suitable for this purpose. Suppose, for instance, that Sven desires to finish his paper and that he also wants to lie around all day, doing nothing. Modeling his preferences in the style of Kratzer, the propositions expressed by (12) and (13) would have to be part of Sven’s bouletic ordering source assigned to the actual world: (12) Sven finishes his paper. (13) Sven lies around all day, doing nothing. But then, Sven should be equally happy if he does nothing as he is if he finishes his paper. We want to be able to explain why, given his knowledge that (12) and (13) are incompatible, he works on his paper. Intuitively, it is because the preference expressed by (12) is more important than that expressed by (13). 3 We owe this observation to Lauri Karttunen. 4 https://tspace.library.utoronto.ca/citd/RussianHeritage/12.NR/NR.12.html Performative Verbs and Performative Acts 153 Preference Structures Definition 1. A preference structure relative to an information state W is a pair 〈P,≤〉, where P⊆℘(W ) and ≤ is a (weak) partial order on P. We can now define a notion of consistency that is weaker than requiring that all propositions in the preference structure be compatible: Definition 2. A preference structure 〈P,≤〉 is consistent iff for any p,q ∈ P such that p∩q = / 0, either p < q or q < p. Since preference structures are defined relative to an information state W , consistency will require not only logically but also contextually incompatible propositions to be strictly ranked. For example, if W is Sven’s doxastic state, and he knows that (12) and (13) are incompatible, for a bouletic preference structure of his to be consistent it must strictly rank the two propositions. In general, bouletic preference", "title": "" }, { "docid": "neg:1840258_13", "text": "Background : Agile software development has become a popular way of developing software. Scrum is the most frequently used agile framework, but it is often reported to be adapted in practice. Objective: Thus, we aim to understand how Scrum is adapted in different contexts and what are the reasons for these changes. Method : Using a structured interview guideline, we interviewed ten German companies about their concrete usage of Scrum and analysed the results qualitatively. Results: All companies vary Scrum in some way. The least variations are in the Sprint length, events, team size and requirements engineering. Many users varied the roles, effort estimations and quality assurance. 
Conclusions: Many variations constitute a substantial deviation from Scrum as initially proposed. For some of these variations, there are good reasons. Sometimes, however, the variations are a result of a previous non-agile, hierarchical organisation.", "title": "" }, { "docid": "neg:1840258_14", "text": "Deep learning systems, such as Convolutional Neural Networks (CNNs), can infer a hierarchical representation of input data that facilitates categorization. In this paper, we propose to learn affect-salient features for Speech Emotion Recognition (SER) using semi-CNN. The training of semi-CNN has two stages. In the first stage, unlabeled samples are used to learn candidate features by contractive convolutional neural network with reconstruction penalization. The candidate features, in the second step, are used as the input to semi-CNN to learn affect-salient, discriminative features using a novel objective function that encourages the feature saliency, orthogonality and discrimination. Our experiment results on benchmark datasets show that our approach leads to stable and robust recognition performance in complex scenes (e.g., with speaker and environment distortion), and outperforms several well-established SER features.", "title": "" }, { "docid": "neg:1840258_15", "text": "BACKGROUND\nMalignant bowel obstruction is a highly symptomatic, often recurrent, and sometimes refractory condition in patients with intra-abdominal tumor burden. Gastro-intestinal symptoms and function may improve with anti-inflammatory, anti-secretory, and prokinetic/anti-nausea combination medical therapy.\n\n\nOBJECTIVE\nTo describe the effect of octreotide, metoclopramide, and dexamethasone in combination on symptom burden and bowel function in patients with malignant bowel obstruction and dysfunction.\n\n\nDESIGN\nA retrospective case series of patients with malignant bowel obstruction (MBO) and malignant bowel dysfunction (MBD) treated by a palliative care consultation service with octreotide, metoclopramide, and dexamethasone. Outcomes measures were nausea, pain, and time to resumption of oral intake.\n\n\nRESULTS\n12 cases with MBO, 11 had moderate/severe nausea on presentation. 100% of these had improvement in nausea by treatment day #1. 100% of patients with moderate/severe pain improved to tolerable level by treatment day #1. The median time to resumption of oral intake was 2 days (range 1-6 days) in the 8 cases with evaluable data. Of 7 cases with MBD, 6 had For patients with malignant bowel dysfunction, of those with moderate/severe nausea. 5 of 6 had subjective improvement by day#1. Moderate/severe pain improved to tolerable levels in 5/6 by day #1. Of the 4 cases with evaluable data on resumption of PO intake, time to resume PO ranged from 1-4 days.\n\n\nCONCLUSION\nCombination medical therapy may provide rapid improvement in symptoms associated with malignant bowel obstruction and dysfunction.", "title": "" }, { "docid": "neg:1840258_16", "text": "The increasing availability of large-scale location traces creates unprecedent opportunities to change the paradigm for knowledge discovery in transportation systems. A particularly promising area is to extract useful business intelligence, which can be used as guidance for reducing inefficiencies in energy consumption of transportation sectors, improving customer experiences, and increasing business performances. However, extracting business intelligence from location traces is not a trivial task. 
Conventional data analytic tools are usually not customized for handling large, complex, dynamic, and distributed nature of location traces. To that end, we develop a taxi business intelligence system to explore the massive taxi location traces from different business perspectives with various data mining functions. Since we implement the system using the real-world taxi GPS data, this demonstration will help taxi companies to improve their business performances by understanding the behaviors of both drivers and customers. In addition, several identified technical challenges also motivate data mining people to develop more sophisticate techniques in the future.", "title": "" }, { "docid": "neg:1840258_17", "text": "As population structure can result in spurious associations, it has constrained the use of association studies in human and plant genetics. Association mapping, however, holds great promise if true signals of functional association can be separated from the vast number of false signals generated by population structure. We have developed a unified mixed-model approach to account for multiple levels of relatedness simultaneously as detected by random genetic markers. We applied this new approach to two samples: a family-based sample of 14 human families, for quantitative gene expression dissection, and a sample of 277 diverse maize inbred lines with complex familial relationships and population structure, for quantitative trait dissection. Our method demonstrates improved control of both type I and type II error rates over other methods. As this new method crosses the boundary between family-based and structured association samples, it provides a powerful complement to currently available methods for association mapping.", "title": "" }, { "docid": "neg:1840258_18", "text": "A large number of saliency models, each based on a different hypothesis, have been proposed over the past 20 years. In practice, while subscribing to one hypothesis or computational principle makes a model that performs well on some types of images, it hinders the general performance of a model on arbitrary images and large-scale data sets. One natural approach to improve overall saliency detection accuracy would then be fusing different types of models. In this paper, inspired by the success of late-fusion strategies in semantic analysis and multi-modal biometrics, we propose to fuse the state-of-the-art saliency models at the score level in a para-boosting learning fashion. First, saliency maps generated by several models are used as confidence scores. Then, these scores are fed into our para-boosting learner (i.e., support vector machine, adaptive boosting, or probability density estimator) to generate the final saliency map. In order to explore the strength of para-boosting learners, traditional transformation-based fusion strategies, such as Sum, Min, and Max, are also explored and compared in this paper. To further reduce the computation cost of fusing too many models, only a few of them are considered in the next step. Experimental results show that score-level fusion outperforms each individual model and can further reduce the performance gap between the current models and the human inter-observer model.", "title": "" }, { "docid": "neg:1840258_19", "text": "A low-profile broadband dual-polarized patch subarray is designed in this letter for a highly integrated X-band synthetic aperture radar payload on a small satellite. 
The proposed subarray is lightweight and has a low profile due to its tile structure realized by a multilayer printed circuit board process. The measured results confirm that the subarray yields 14-dB bandwidths from 9.15 to 10.3 GHz for H-pol and from 9.35 to 10.2 GHz for V-pol. The isolation remains better than 40 dB. The average realized gains are approximately 13 dBi for both polarizations. The sidelobe levels are 25 dB for H-pol and 20 dB for V-pol. The relative cross-polarization levels are -30 dB within the half-power beamwidth range.", "title": "" } ]
1840259
Socially Aware Networking: A Survey
[ { "docid": "pos:1840259_0", "text": "We compare recent approaches to community structure identification in terms of sensitivity and computational cost. The recently proposed modularity measure is revisited and the performance of the methods as applied to ad hoc networks with known community structure, is compared. We find that the most accurate methods tend to be more computationally expensive, and that both aspects need to be considered when choosing a method for practical purposes. The work is intended as an introduction as well as a proposal for a standard benchmark test of community detection methods.", "title": "" } ]
[ { "docid": "neg:1840259_0", "text": "Microblog ranking is a hot research topic in recent years. Most of the related works apply TF-IDF metric for calculating content similarity while neglecting their semantic similarity. And most existing search engines which retrieve the microblog list by string matching the search keywords is not competent to provide a reliable list for users when dealing with polysemy and synonym. Besides, treating all the users with same authority for all topics is intuitively not ideal. In this paper, a comprehensive strategy for microblog ranking is proposed. First, we extend the conventional TF-IDF based content similarity with exploiting knowledge from WordNet. Then, we further incorporate a new feature for microblog ranking that is the topical relation between search keyword and its retrieval. Author topical authority is also incorporated into the ranking framework as an important feature for microblog ranking. Gradient Boosting Decision Tree(GBDT), then is employed to train the ranking model with multiple features involved. We conduct thorough experiments on a large-scale real-world Twitter dataset and demonstrate that our proposed approach outperform a number of existing approaches in discovering higher quality and more related microblogs.", "title": "" }, { "docid": "neg:1840259_1", "text": "In order to recommend products to users we must ultimately predict how a user will respond to a new product. To do so we must uncover the implicit tastes of each user as well as the properties of each product. For example, in order to predict whether a user will enjoy Harry Potter, it helps to identify that the book is about wizards, as well as the user's level of interest in wizardry. User feedback is required to discover these latent product and user dimensions. Such feedback often comes in the form of a numeric rating accompanied by review text. However, traditional methods often discard review text, which makes user and product latent dimensions difficult to interpret, since they ignore the very text that justifies a user's rating. In this paper, we aim to combine latent rating dimensions (such as those of latent-factor recommender systems) with latent review topics (such as those learned by topic models like LDA). Our approach has several advantages. Firstly, we obtain highly interpretable textual labels for latent rating dimensions, which helps us to `justify' ratings with text. Secondly, our approach more accurately predicts product ratings by harnessing the information present in review text; this is especially true for new products and users, who may have too few ratings to model their latent factors, yet may still provide substantial information from the text of even a single review. Thirdly, our discovered topics can be used to facilitate other tasks such as automated genre discovery, and to identify useful and representative reviews.", "title": "" }, { "docid": "neg:1840259_2", "text": "This article reviews a free-energy formulation that advances Helmholtz's agenda to find principles of brain function based on conservation laws and neuronal energy. It rests on advances in statistical physics, theoretical biology and machine learning to explain a remarkable range of facts about brain structure and function. 
We could have just scratched the surface of what this formulation offers; for example, it is becoming clear that the Bayesian brain is just one facet of the free-energy principle and that perception is an inevitable consequence of active exchange with the environment. Furthermore, one can see easily how constructs like memory, attention, value, reinforcement and salience might disclose their simple relationships within this framework.", "title": "" }, { "docid": "neg:1840259_3", "text": "Wei-yu Kevin Chiang • Dilip Chhajed • James D. Hess Department of Information Systems, University of Maryland at Baltimore County, Baltimore, Maryland 21250 Department of Business Administration, University of Illinois at Urbana–Champaign, Champaign, Illinois 61820 Department of Business Administration, University of Illinois at Urbana–Champaign, Champaign, Illinois 61820 kevin@wchiang.net • chhajed@uiuc.edu • jhess@uiuc.edu", "title": "" }, { "docid": "neg:1840259_4", "text": "High performance 3D integration Systems need a higher interconnect density between the die than traditional μbump interconnects can offer. For ultra-fine pitches interconnect pitches below 5μm a different solution is required. This paper describes a hybrid wafer-to-wafer (W2W) bonding approach that uses Cu damascene patterned surface bonding, allowing to scale down the interconnection pitch below 5 μm, potentially even down to 1μm, depending on the achievable W2W bonding accuracy. The bonding method is referred to as hybrid bonding since the bonding of the Cu/dielectric damascene surfaces leads simultaneously to metallic and dielectric bonding. In this paper, the integration flow for 300mm hybrid wafer bonding at 3.6μm and 1.8μm pitch will be described using a novel, alternative, non-oxide Cu/dielectric damascene process. Optimization of the surface preparation before bonding will be discussed. Of particular importance is the wafer chemical-mechanical-polishing (CMP) process and the pre-bonding wafer treatment. Using proper surface activation and very low roughness dielectrics, void-free room temperature bonding can be achieved. High bonding strengths are obtained, even using low temperature anneal (250°C). The process flow also integrates the use of a 5μm diameter, 50μm deep via-middle through-silicon-vias (TSV) to connect the wafer interfaces to the external wafer backside.", "title": "" }, { "docid": "neg:1840259_5", "text": "This paper proposes two RF self-interference cancellation techniques. Their small form-factor enables full-duplex communication links for small-to-medium size portable devices and hence promotes the adoption of full-duplex in mass-market applications and next-generation standards, e.g. IEEE802.11 and 5G. Measured prototype implementations of an electrical balance duplexer and a dual-polarized antenna both achieve >50 dB self-interference suppression at RF, operating in the ISM band at 2.45GHz.", "title": "" }, { "docid": "neg:1840259_6", "text": "Inorganic pyrophosphate (PP(i)) produced by cells inhibits mineralization by binding to crystals. Its ubiquitous presence is thought to prevent \"soft\" tissues from mineralizing, whereas its degradation to P(i) in bones and teeth by tissue-nonspecific alkaline phosphatase (Tnap, Tnsalp, Alpl, Akp2) may facilitate crystal growth. Whereas the crystal binding properties of PP(i) are largely understood, less is known about its effects on osteoblast activity. 
We have used MC3T3-E1 osteoblast cultures to investigate the effect of PP(i) on osteoblast function and matrix mineralization. Mineralization in the cultures was dose-dependently inhibited by PP(i). This inhibition could be reversed by Tnap, but not if PP(i) was bound to mineral. PP(i) also led to increased levels of osteopontin (Opn) induced via the Erk1/2 and p38 MAPK signaling pathways. Opn regulation by PP(i) was also insensitive to foscarnet (an inhibitor of phosphate uptake) and levamisole (an inhibitor of Tnap enzymatic activity), suggesting that increased Opn levels did not result from changes in phosphate. Exogenous OPN inhibited mineralization, but dephosphorylation by Tnap reversed this effect, suggesting that OPN inhibits mineralization via its negatively charged phosphate residues and that like PP(i), hydrolysis by Tnap reduces its mineral inhibiting potency. Using enzyme kinetic studies, we have shown that PP(i) inhibits Tnap-mediated P(i) release from beta-glycerophosphate (a commonly used source of organic phosphate for culture mineralization studies) through a mixed type of inhibition. In summary, PP(i) prevents mineralization in MC3T3-E1 osteoblast cultures by at least three different mechanisms that include direct binding to growing crystals, induction of Opn expression, and inhibition of Tnap activity.", "title": "" }, { "docid": "neg:1840259_7", "text": "We present highly integrated sensor-actuator-controller units (SAC units), addressing the increasing need for easy to use components in the design of modern high-performance robotic systems. Following strict design principles and an electro-mechanical co-design from the beginning on, our development resulted in highly integrated SAC units. Each SAC unit includes a motor, a gear unit, an IMU, sensors for torque, position and temperature as well as all necessary embedded electronics for control and communication over a high-speed EtherCAT bus. Key design considerations were easy to use interfaces and a robust cabling system. Using slip rings to electrically connect the input and output side, the units allow continuous rotation even when chained along a robotic arm. The experimental validation shows the potential of the new SAC units regarding the design of humanoid robots.", "title": "" }, { "docid": "neg:1840259_8", "text": "DNA replicases are multicomponent machines that have evolved clever strategies to perform their function. Although the structure of DNA is elegant in its simplicity, the job of duplicating it is far from simple. At the heart of the replicase machinery is a heteropentameric AAA+ clamp-loading machine that couples ATP hydrolysis to load circular clamp proteins onto DNA. The clamps encircle DNA and hold polymerases to the template for processive action. Clamp-loader and sliding clamp structures have been solved in both prokaryotic and eukaryotic systems. The heteropentameric clamp loaders are circular oligomers, reflecting the circular shape of their respective clamp substrates. Clamps and clamp loaders also function in other DNA metabolic processes, including repair, checkpoint mechanisms, and cell cycle progression. Twin polymerases and clamps coordinate their actions with a clamp loader and yet other proteins to form a replisome machine that advances the replication fork.", "title": "" }, { "docid": "neg:1840259_9", "text": "Current sports injury reporting systems lack a common conceptual basis. 
We propose a conceptual foundation as a basis for the recording of health problems associated with participation in sports, based on the notion of impairment used by the World Health Organization. We provide definitions of sports impairment concepts to represent the perspectives of health services, the participants in sports and physical exercise themselves, and sports institutions. For each perspective, the duration of the causative event is used as the norm for separating concepts into those denoting impairment conditions sustained instantly and those developing gradually over time. Regarding sports impairment sustained in isolated events, 'sports injury' denotes the loss of bodily function or structure that is the object of observations in clinical examinations; 'sports trauma' is defined as an immediate sensation of pain, discomfort or loss of functioning that is the object of athlete self-evaluations; and 'sports incapacity' is the sidelining of an athlete because of a health evaluation made by a legitimate sports authority that is the object of time loss observations. Correspondingly, sports impairment caused by excessive bouts of physical exercise is denoted as 'sports disease' (overuse syndrome) when observed by health service professionals during clinical examinations, 'sports illness' when observed by the athlete in self-evaluations, and 'sports sickness' when recorded as time loss from sports participation by a sports body representative. We propose a concerted development effort in this area that takes advantage of concurrent ontology management resources and involves the international sporting community in building terminology systems that have broad relevance.", "title": "" }, { "docid": "neg:1840259_10", "text": "This paper describes a methodology for automated recognition of complex human activities. The paper proposes a general framework which reliably recognizes high-level human actions and human-human interactions. Our approach is a description-based approach, which enables a user to encode the structure of a high-level human activity as a formal representation. Recognition of human activities is done by semantically matching constructed representations with actual observations. The methodology uses a context-free grammar (CFG) based representation scheme as a formal syntax for representing composite activities. Our CFG-based representation enables us to define complex human activities based on simpler activities or movements. Our system takes advantage of both statistical recognition techniques from computer vision and knowledge representation concepts from traditional artificial intelligence. In the low-level of the system, image sequences are processed to extract poses and gestures. Based on the recognition of gestures, the high-level of the system hierarchically recognizes composite actions and interactions occurring in a sequence of image frames. The concept of hallucinations and a probabilistic semantic-level recognition algorithm is introduced to cope with imperfect lower-layers. As a result, the system recognizes human activities including ‘fighting’ and ‘assault’, which are high-level activities that previous systems had difficulties. 
The experimental results show that our system reliably recognizes sequences of complex human activities with a high recognition rate.", "title": "" }, { "docid": "neg:1840259_11", "text": "This paper presents an annotation scheme for adding entity and event target annotations to the MPQA corpus, a rich span-annotated opinion corpus. The new corpus promises to be a valuable new resource for developing systems for entity/event-level sentiment analysis. Such systems, in turn, would be valuable in NLP applications such as Automatic Question Answering. We introduce the idea of entity and event targets (eTargets), describe the annotation scheme, and present the results of an agreement study.", "title": "" }, { "docid": "neg:1840259_12", "text": "OBJECTIVE\nThe objective of this study was to assess the effects of participation in a mindfulness meditation-based stress reduction program on mood disturbance and symptoms of stress in cancer outpatients.\n\n\nMETHODS\nA randomized, wait-list controlled design was used. A convenience sample of eligible cancer patients enrolled after giving informed consent and were randomly assigned to either an immediate treatment condition or a wait-list control condition. Patients completed the Profile of Mood States and the Symptoms of Stress Inventory both before and after the intervention. The intervention consisted of a weekly meditation group lasting 1.5 hours for 7 weeks plus home meditation practice.\n\n\nRESULTS\nNinety patients (mean age, 51 years) completed the study. The group was heterogeneous in type and stage of cancer. Patients' mean preintervention scores on dependent measures were equivalent between groups. After the intervention, patients in the treatment group had significantly lower scores on Total Mood Disturbance and subscales of Depression, Anxiety, Anger, and Confusion and more Vigor than control subjects. The treatment group also had fewer overall Symptoms of Stress; fewer Cardiopulmonary and Gastrointestinal symptoms; less Emotional Irritability, Depression, and Cognitive Disorganization; and fewer Habitual Patterns of stress. Overall reduction in Total Mood Disturbance was 65%, with a 31% reduction in Symptoms of Stress.\n\n\nCONCLUSIONS\nThis program was effective in decreasing mood disturbance and stress symptoms in both male and female patients with a wide variety of cancer diagnoses, stages of illness, and ages. cancer, stress, mood, intervention, mindfulness.", "title": "" }, { "docid": "neg:1840259_13", "text": "Analysis of flows such as human movement can help spatial planners better understand territorial patterns in urban environments. In this paper, we describe FlowSampler, an interactive visual interface designed for spatial planners to gather, extract and analyse human flows in geolocated social media data. Our system adopts a graph-based approach to infer movement pathways from spatial point type data and expresses the resulting information through multiple linked multiple visualisations to support data exploration. We describe two use cases to demonstrate the functionality of our system and characterise how spatial planners utilise it to address analytical task.", "title": "" }, { "docid": "neg:1840259_14", "text": "The DENDRAL and Meta-DENDRAL programs are products of a large, interdisciplinary group of Stanford University scientists concerned with many and highly varied aspects of the mechanization of scientific reasoning and the formalization of scientific knowledge for this purpose. 
An early motivation for our work was to explore the power of existing AI methods, such as heuristic search, for reasoning in difficult scientific problems [7]. Another concern has been to exploit the AI methodology to understand better some fundamental questions in the philosophy of science, for example the processes by which explanatory hypotheses are discovered or judged adequate [18]. From the start, the project has had an applications dimension [9, 10, 27]. It has sought to develop \"expert level\" agents to assist in the solution of problems in their discipline that require complex symbolic reasoning. The applications dimension is the focus of this paper. In order to achieve high performance, the DENDRAL programs incorporate large amounts of knowledge about the area of science to which they are applied, structure elucidation in organic chemistry. A \"smart assistant\" for a chemist needs to be able to perform many tasks as well as an expert, but need not necessarily understand the domain at the same theoretical level as the expert. The over-all structure elucidation task is described below (Section 2) followed by a description of the role of the DENDRAL programs within that framework (Section 3). The Meta-DENDRAL programs (Section 4) use a weaker body of knowledge about the domain of mass spectrometry because their task is to formulate rules of mass spectrometry by induction from empirical data. A strong model of the domain would bias the rules unnecessarily.", "title": "" }, { "docid": "neg:1840259_15", "text": "There is a growing interest in studying the adoption of m-payments but literature on the subject is still in its infancy and no empirical research relating to this has been conducted in the context of the UK to date. The aim of this study is to unveil the current situation in m-payment adoption research and provide future research direction through the development of a research model for the examination of factors affecting m-payment adoption in the UK context. Following an extensive search of the literature, this study finds that 186 relationships between independent and dependent variables have been analysed by 32 existing empirical m-payment and m-banking adoption studies. From analysis of these relationships the most significant factors found to influence adoption are uncovered and an extension of UTAUT2 with the addition of perceived risk and trust is proposed to increase the applicability of UTAUT2 to the m-payment context.", "title": "" }, { "docid": "neg:1840259_16", "text": "This paper induces the prominence of variegated machine learning techniques adapted so far for the identifying different network attacks and suggests a preferable Intrusion Detection System (IDS) with the available system resources while optimizing the speed and accuracy. With booming number of intruders and hackers in todays vast and sophisticated computerized world, it is unceasingly challenging to identify unknown attacks in promising time with no false positive and no false negative. Principal Component Analysis (PCA) curtails the amount of data to be compared by reducing their dimensions prior to classification that results in reduction of detection time. In this paper, PCA is adopted to reduce higher dimension dataset to lower dimension dataset. It is accomplished by converting network packet header fields into a vector then PCA applied over high dimensional dataset to reduce the dimension. 
The reduced dimension dataset is tested with Support Vector Machines (SVM), K-Nearest Neighbors (KNN), J48 Tree algorithm, Random Forest Tree classification algorithm, Adaboost algorihm, Nearest Neighbors generalized Exemplars algorithm, Navebayes probabilistic classifier and Voting Features Interval classification algorithm. Obtained results demonstrates detection accuracy, computational efficiency with minimal false alarms, less system resources utilization. Experimental results are compared with respect to detection rate and detection time and found that TREE classification algorithms achieved superior results over other algorithms. The whole experiment is conducted by using KDD99 data set.", "title": "" }, { "docid": "neg:1840259_17", "text": "The design of a novel and versatile single-port quad-band patch antenna is presented. The antenna is capable of supporting a maximum of four operational sub-bands, with the inherent capability to enhance or suppress any resonance(s) of interest. In addition, circular-polarisation is also achieved at the low frequency band, to demonstrate the polarisation agility. A prototype model of the antenna has been fabricated and its performance experimentally validated. The antenna's single layer and low-profile configuration makes it suitable for mobile user terminals and its cavity-backed feature results in low levels of coupling.", "title": "" }, { "docid": "neg:1840259_18", "text": "A review on various CMOS voltage level shifters is presented in this paper. A voltage level-shifter shifts the level of input voltage to desired output voltage. Voltage Level Shifter circuits are compared with respect to output voltage level, power consumption and delay. Systems often require voltage level translation devices to allow interfacing between integrated circuit devices built from different voltage technologies. The choice of the proper voltage level translation device depends on many factors and will affect the performance and efficiency of the circuit application.", "title": "" }, { "docid": "neg:1840259_19", "text": "Radio-frequency identification (RFID) technology promises to revolutionize the way we track items in supply chain, retail store, and asset management applications. The size and different characteristics of RFID data pose many interesting challenges in the current data management systems. In this paper, we provide a brief overview of RFID technology and highlight a few of the data management challenges that we believe are suitable topics for exploratory research.", "title": "" } ]
1840260
Designing the digital workplace of the future - what scholars recommend to practitioners
[ { "docid": "pos:1840260_0", "text": "Geographically dispersed teams are rarely 100% dispersed. However, by focusing on teams that are either fully dispersed or fully co-located, team research to date has lived on the ends of a spectrum at which relatively few teams may actually work. In this paper, we develop a more robust view of geographic dispersion in teams. Specifically, we focus on the spatial-temporal distances among team members and the configuration of team members across sites (independent of the spatial and temporal distances separating those sites). To better understand the nature of dispersion, we develop a series of five new measures and explore their relationships with communication frequency data from a sample of 182 teams (of varying degrees of dispersion) from a Fortune 500 telecommunications firm. We conclude with recommendations regarding the use of different measures and important questions that they could help address.", "title": "" } ]
[ { "docid": "neg:1840260_0", "text": "BACKGROUND\nRecent findings suggest that the mental health costs of unemployment are related to both short- and long-term mental health scars. The main policy tools for dealing with young people at risk of labor market exclusion are Active Labor Market Policy programs for youths (youth programs). There has been little research on the potential effects of participation in youth programs on mental health and even less on whether participation in such programs alleviates the long-term mental health scarring caused by unemployment. This study compares exposure to open youth unemployment and exposure to youth program participation between ages 18 and 21 in relation to adult internalized mental health immediately after the end of the exposure period at age 21 and two decades later at age 43.\n\n\nMETHODS\nThe study uses a five wave Swedish 27-year prospective cohort study consisting of all graduates from compulsory school in an industrial town in Sweden initiated in 1981. Of the original 1083 participants 94.3% of those alive were still participating at the 27-year follow up. Exposure to open unemployment and youth programs were measured between ages 18-21. Mental health, indicated through an ordinal level three item composite index of internalized mental health symptoms (IMHS), was measured pre-exposure at age 16 and post exposure at ages 21 and 42. Ordinal regressions of internalized mental health at ages 21 and 43 were performed using the Polytomous Universal Model (PLUM). Models were controlled for pre-exposure internalized mental health as well as other available confounders.\n\n\nRESULTS\nResults show strong and significant relationships between exposure to open youth unemployment and IMHS at age 21 (OR = 2.48, CI = 1.57-3.60) as well as at age 43 (OR = 1.71, CI = 1.20-2.43). No such significant relationship is observed for exposure to youth programs at age 21 (OR = 0.95, CI = 0.72-1.26) or at age 43 (OR = 1.23, CI = 0.93-1.63).\n\n\nCONCLUSIONS\nA considered and consistent active labor market policy directed at youths could potentially reduce the short- and long-term mental health costs of youth unemployment.", "title": "" }, { "docid": "neg:1840260_1", "text": "Recently, increasing attention has been directed to the study of the speech emotion recognition, in which global acoustic features of an utterance are mostly used to eliminate the content differences. However, the expression of speech emotion is a dynamic process, which is reflected through dynamic durations, energies, and some other prosodic information when one speaks. In this paper, a novel local dynamic pitch probability distribution feature, which is obtained by drawing the histogram, is proposed to improve the accuracy of speech emotion recognition. Compared with most of the previous works using global features, the proposed method takes advantage of the local dynamic information conveyed by the emotional speech. Several experiments on Berlin Database of Emotional Speech are conducted to verify the effectiveness of the proposed method. The experimental results demonstrate that the local dynamic information obtained with the proposed method is more effective for speech emotion recognition than the traditional global features.", "title": "" }, { "docid": "neg:1840260_2", "text": "In semi-supervised learning, a number of labeled examples are usually required for training an initial weakly useful predictor which is in turn used for exploiting the unlabeled examples. 
However, in many real-world applications there may exist very few labeled training examples, which makes the weakly useful predictor difficult to generate, and therefore these semisupervised learning methods cannot be applied. This paper proposes a method working under a two-view setting. By taking advantages of the correlations between the views using canonical component analysis, the proposed method can perform semi-supervised learning with only one labeled training example. Experiments and an application to content-based image retrieval validate the effectiveness of the proposed method.", "title": "" }, { "docid": "neg:1840260_3", "text": "This paper describes our system for the CoNLL 2016 Shared Task’s supplementary task on Discourse Relation Sense Classification. Our official submission employs a Logistic Regression classifier with several cross-argument similarity features based on word embeddings and performs with overall F-scores of 64.13 for the Dev set, 63.31 for the Test set and 54.69 for the Blind set, ranking first in the Overall ranking for the task. We compare the feature-based Logistic Regression classifier to different Convolutional Neural Network architectures. After the official submission we enriched our model for Non-Explicit relations by including similarities of explicit connectives with the relation arguments, and part of speech similarities based on modal verbs. This improved our Non-Explicit result by 1.46 points on the Dev set and by 0.36 points on the Blind set.", "title": "" }, { "docid": "neg:1840260_4", "text": "In this paper, a methodology is developed to use data acquisition derived from condition monitoring and standard diagnosis for rehabilitation purposes of transformers. The interpretation and understanding of the test data are obtained from international test standards to determine the current condition of transformers. In an attempt to ascertain monitoring priorities, the effective test methods are selected for transformer diagnosis. In particular, the standardization of diagnostic and analytical techniques are being improved that will enable field personnel to more easily use the test results and will reduce the need for interpretation by experts. In addition, the advanced method has the potential to reduce the time greatly and increase the accuracy of diagnostics. The important aim of the standardization is to develop the multiple diagnostic models that combine results from the different tests and give an overall assessment of reliability and maintenance for transformers.", "title": "" }, { "docid": "neg:1840260_5", "text": "Cannabidiol is a component of marijuana that does not activate cannabinoid receptors, but moderately inhibits the degradation of the endocannabinoid anandamide. We previously reported that an elevation of anandamide levels in cerebrospinal fluid inversely correlated to psychotic symptoms. Furthermore, enhanced anandamide signaling let to a lower transition rate from initial prodromal states into frank psychosis as well as postponed transition. In our translational approach, we performed a double-blind, randomized clinical trial of cannabidiol vs amisulpride, a potent antipsychotic, in acute schizophrenia to evaluate the clinical relevance of our initial findings. Either treatment was safe and led to significant clinical improvement, but cannabidiol displayed a markedly superior side-effect profile. 
Moreover, cannabidiol treatment was accompanied by a significant increase in serum anandamide levels, which was significantly associated with clinical improvement. The results suggest that inhibition of anandamide deactivation may contribute to the antipsychotic effects of cannabidiol potentially representing a completely new mechanism in the treatment of schizophrenia.", "title": "" }, { "docid": "neg:1840260_6", "text": "We present an algorithm for recognition and reconstruction of scanned 3D indoor scenes. 3D indoor reconstruction is particularly challenging due to object interferences, occlusions and overlapping which yield incomplete yet very complex scene arrangements. Since it is hard to assemble scanned segments into complete models, traditional methods for object recognition and reconstruction would be inefficient. We present a search-classify approach which interleaves segmentation and classification in an iterative manner. Using a robust classifier we traverse the scene and gradually propagate classification information. We reinforce classification by a template fitting step which yields a scene reconstruction. We deform-to-fit templates to classified objects to resolve classification ambiguities. The resulting reconstruction is an approximation which captures the general scene arrangement. Our results demonstrate successful classification and reconstruction of cluttered indoor scenes, captured in just few minutes.", "title": "" }, { "docid": "neg:1840260_7", "text": "Often the challenge associated with tasks like fraud and spam detection is the lack of all likely patterns needed to train suitable supervised learning models. This problem accentuates when the fraudulent patterns are not only scarce, they also change over time. Change in fraudulent pattern is because fraudsters continue to innovate novel ways to circumvent measures put in place to prevent fraud. Limited data and continuously changing patterns makes learning signi cantly di cult. We hypothesize that good behavior does not change with time and data points representing good behavior have consistent spatial signature under di erent groupings. Based on this hypothesis we are proposing an approach that detects outliers in large data sets by assigning a consistency score to each data point using an ensemble of clustering methods. Our main contribution is proposing a novel method that can detect outliers in large datasets and is robust to changing patterns. We also argue that area under the ROC curve, although a commonly used metric to evaluate outlier detection methods is not the right metric. Since outlier detection problems have a skewed distribution of classes, precision-recall curves are better suited because precision compares false positives to true positives (outliers) rather than true negatives (inliers) and therefore is not a ected by the problem of class imbalance. We show empirically that area under the precision-recall curve is a better than ROC as an evaluation metric. The proposed approach is tested on the modi ed version of the Landsat satellite dataset, the modi ed version of the ann-thyroid dataset and a large real world credit card fraud detection dataset available through Kaggle where we show signi cant improvement over the baseline methods.", "title": "" }, { "docid": "neg:1840260_8", "text": "Recently new data center topologies have been proposed that offer higher aggregate bandwidth and location independence by creating multiple paths in the core of the network. 
To effectively use this bandwidth requires ensuring different flows take different paths, which poses a challenge.\n Plainly put, there is a mismatch between single-path transport and the multitude of available network paths. We propose a natural evolution of data center transport from TCP to multipath TCP. We show that multipath TCP can effectively and seamlessly use available bandwidth, providing improved throughput and better fairness in these new topologies when compared to single path TCP and randomized flow-level load balancing. We also show that multipath TCP outperforms laggy centralized flow scheduling without needing centralized control or additional infrastructure.", "title": "" }, { "docid": "neg:1840260_9", "text": "Acknowledgements This paper has benefited from conversations and collaborations with colleagues, including most notably Stefan Dercon, Cheryl Doss, and Chris Udry. None of them has read this manuscript, however, and they are not responsible for the views expressed here. Steve Wiggins provided critical comments on the first draft of the document and persuaded me to rethink a number of points. The aim of the Natural Resources Group is to build partnerships, capacity and wise decision-making for fair and sustainable use of natural resources. Our priority in pursuing this purpose is on local control and management of natural resources and other ecosystems. The Institute of Development Studies (IDS) is a leading global Institution for international development research, teaching and learning, and impact and communications, based at the University of Sussex. Its vision is a world in which poverty does not exist, social justice prevails and sustainable economic growth is focused on improving human wellbeing. The Overseas Development Institute (ODI) is a leading independent think tank on international development and humanitarian issues. Its mission is to inspire and inform policy and practice which lead to the reduction of poverty, the alleviation of suffering and the achievement of sustainable livelihoods. Smallholder agriculture has long served as the dominant economic activity for people in sub-Saharan Africa, and it will remain enormously important for the foreseeable future. But the size of the sector does not necessarily imply that investments in the smallholder sector will yield high social benefits in comparison to other possible uses of development resources. Large changes could potentially affect the viability of smallholder systems, emanating from shifts in technology, markets, climate and the global environment. The priorities for development policy will vary across and within countries due to the highly heterogeneous nature of the smallholder sector.", "title": "" }, { "docid": "neg:1840260_10", "text": "The increasing activity in the Intelligent Transportation Systems (ITS) area faces a strong limitation: the slow pace at which the automotive industry is making cars \"smarter\". On the contrary, the smartphone industry is advancing quickly. Existing smartphones are endowed with multiple wireless interfaces and high computational power, being able to perform a wide variety of tasks. By combining smartphones with existing vehicles through an appropriate interface we are able to move closer to the smart vehicle paradigm, offering the user new functionalities and services when driving. In this paper we propose an Android-based application that monitors the vehicle through an On Board Diagnostics (OBD-II) interface, being able to detect accidents. 
Our proposed application estimates the G force experienced by the passengers in case of a frontal collision, which is used together with airbag triggers to detect accidents. The application reacts to positive detection by sending details about the accident through either e-mail or SMS to pre-defined destinations, immediately followed by an automatic phone call to the emergency services. Experimental results using a real vehicle show that the application is able to react to accident events in less than 3 seconds, a very low time, validating the feasibility of smartphone based solutions for improving safety on the road.", "title": "" }, { "docid": "neg:1840260_11", "text": "A number of resource-intensive applications, such as augmented reality, natural language processing, object recognition, and multimedia-based software are pushing the computational and energy boundaries of smartphones. Cloud-based services augment the resource-scare capabilities of smartphones while offloading compute-intensive methods to resource-rich cloud servers. The amalgam of cloud and mobile computing technologies has ushered the rise of Mobile Cloud Computing (MCC) paradigm which envisions operating smartphones and modern mobile devices beyond their intrinsic capabilities. System virtualization, application virtualization, and dynamic binary translation (DBT) techniques are required to address the heterogeneity of smartphone and cloud architectures. However, most of the current research work has only focused on the offloading of virtualized applications while giving limited consideration to native code offloading. Moreover, researchers have not attended to the requirements of multimedia based applications in MCC offloading frameworks. In this study, we present a survey and taxonomy of state-of-the-art MCC frameworks, DBT techniques for native offloading, and cross-platform execution techniques for multimedia based applications. We survey the MCC frameworks from the perspective of offload enabling techniques. We focus on native code offloading frameworks and analyze the DBT and emulation techniques of smartphones (ARM) on a cloud server (x86) architectures. Furthermore, we debate the open research issues and challenges to native offloading of multimedia based smartphone applications.", "title": "" }, { "docid": "neg:1840260_12", "text": "The contribution of power production by photovoltaic (PV) systems to the electricity supply is constantly increasing. An efficient use of the fluctuating solar power production will highly benefit from forecast information on the expected power production. This forecast information is necessary for the management of the electricity grids and for solar energy trading. This paper presents an approach to predict regional PV power output based on forecasts up to three days ahead provided by the European Centre for Medium-Range Weather Forecasts (ECMWF). Focus of the paper is the description and evaluation of the approach of irradiance forecasting, which is the basis for PV power prediction. One day-ahead irradiance forecasts for single stations in Germany show a rRMSE of 36%. For regional forecasts, forecast accuracy is increasing in dependency on the size of the region. For the complete area of Germany, the rRMSE amounts to 13%. Besides the forecast accuracy, also the specification of the forecast uncertainty is an important issue for an effective application. We present and evaluate an approach to derive weather specific prediction intervals for irradiance forecasts. 
The accuracy of PV power prediction is investigated in a case study.", "title": "" }, { "docid": "neg:1840260_13", "text": "We present novel empirical observations regarding how stochastic gradient descent (SGD) navigates the loss landscape of over-parametrized deep neural networks (DNNs). These observations expose the qualitatively different roles of learning rate and batch-size in DNN optimization and generalization. Specifically we study the DNN loss surface along the trajectory of SGD by interpolating the loss surface between parameters from consecutive iterations and tracking various metrics during training. We find that the loss interpolation between parameters before and after each training iteration’s update is roughly convex with a minimum (valley floor) in between for most of the training. Based on this and other metrics, we deduce that for most of the training update steps, SGD moves in valley like regions of the loss surface by jumping from one valley wall to another at a height above the valley floor. This ’bouncing between walls at a height’ mechanism helps SGD traverse larger distance for small batch sizes and large learning rates which we find play qualitatively different roles in the dynamics. While a large learning rate maintains a large height from the valley floor, a small batch size injects noise facilitating exploration. We find this mechanism is crucial for generalization because the valley floor has barriers and this exploration above the valley floor allows SGD to quickly travel far away from the initialization point (without being affected by barriers) and find flatter regions, corresponding to better generalization.", "title": "" }, { "docid": "neg:1840260_14", "text": "The potency of the environment to shape brain function changes dramatically across the lifespan. Neural circuits exhibit profound plasticity during early life and are later stabilized. A focus on the cellular and molecular bases of these developmental trajectories has begun to unravel mechanisms, which control the onset and closure of such critical periods. Two important concepts have emerged from the study of critical periods in the visual cortex: (1) excitatory-inhibitory circuit balance is a trigger; and (2) molecular \"brakes\" limit adult plasticity. The onset of the critical period is determined by the maturation of specific GABA circuits. Targeting these circuits using pharmacological or genetic approaches can trigger premature onset or induce a delay. These manipulations are so powerful that animals of identical chronological age may be at the peak, before, or past their plastic window. Thus, critical period timing per se is plastic. Conversely, one of the outcomes of normal development is to stabilize the neural networks initially sculpted by experience. Rather than being passively lost, the brain's intrinsic potential for plasticity is actively dampened. This is demonstrated by the late expression of brake-like factors, which reversibly limit excessive circuit rewiring beyond a critical period. Interestingly, many of these plasticity regulators are found in the extracellular milieu. Understanding why so many regulators exist, how they interact and, ultimately, how to lift them in noninvasive ways may hold the key to novel therapies and lifelong learning.", "title": "" }, { "docid": "neg:1840260_15", "text": "BACKGROUND\nMost of our social interactions involve perception of emotional information from the faces of other people. 
Furthermore, such emotional processes are thought to be aberrant in a range of clinical disorders, including psychosis and depression. However, the exact neurofunctional maps underlying emotional facial processing are not well defined.\n\n\nMETHODS\nTwo independent researchers conducted separate comprehensive PubMed (1990 to May 2008) searches to find all functional magnetic resonance imaging (fMRI) studies using a variant of the emotional faces paradigm in healthy participants. The search terms were: \"fMRI AND happy faces,\" \"fMRI AND sad faces,\" \"fMRI AND fearful faces,\" \"fMRI AND angry faces,\" \"fMRI AND disgusted faces\" and \"fMRI AND neutral faces.\" We extracted spatial coordinates and inserted them in an electronic database. We performed activation likelihood estimation analysis for voxel-based meta-analyses.\n\n\nRESULTS\nOf the originally identified studies, 105 met our inclusion criteria. The overall database consisted of 1785 brain coordinates that yielded an overall sample of 1600 healthy participants. Quantitative voxel-based meta-analysis of brain activation provided neurofunctional maps for 1) main effect of human faces; 2) main effect of emotional valence; and 3) modulatory effect of age, sex, explicit versus implicit processing and magnetic field strength. Processing of emotional faces was associated with increased activation in a number of visual, limbic, temporoparietal and prefrontal areas; the putamen; and the cerebellum. Happy, fearful and sad faces specifically activated the amygdala, whereas angry or disgusted faces had no effect on this brain region. Furthermore, amygdala sensitivity was greater for fearful than for happy or sad faces. Insular activation was selectively reported during processing of disgusted and angry faces. However, insular sensitivity was greater for disgusted than for angry faces. Conversely, neural response in the visual cortex and cerebellum was observable across all emotional conditions.\n\n\nLIMITATIONS\nAlthough the activation likelihood estimation approach is currently one of the most powerful and reliable meta-analytical methods in neuroimaging research, it is insensitive to effect sizes.\n\n\nCONCLUSION\nOur study has detailed neurofunctional maps to use as normative references in future fMRI studies of emotional facial processing in psychiatric populations. We found selective differences between neural networks underlying the basic emotions in limbic and insular brain regions.", "title": "" }, { "docid": "neg:1840260_16", "text": "Patterns of reading development were examined in native English-speaking (L1) children and children who spoke English as a second language (ESL). Participants were 978 (790 L1 speakers and 188 ESL speakers) Grade 2 children involved in a longitudinal study that began in kindergarten. In kindergarten and Grade 2, participants completed standardized and experimental measures including reading, spelling, phonological processing, and memory. All children received phonological awareness instruction in kindergarten and phonics instruction in Grade 1. By the end of Grade 2, the ESL speakers' reading skills were comparable to those of L1 speakers, and ESL speakers even outperformed L1 speakers on several measures. 
The findings demonstrate that a model of early identification and intervention for children at risk is beneficial for ESL speakers and also suggest that the effects of bilingualism on the acquisition of early reading skills are not negative and may be positive.", "title": "" }, { "docid": "neg:1840260_17", "text": "A smart connected car in conjunction with the Internet of Things (IoT) is an emerging topic. The fundamental concept of the smart connected car is connectivity, and such connectivity can be provided by three aspects, such as Vehicle-to-Vehicle (V2V), Vehicle-to-Infrastructure (V2I), and Vehicle-to-Everything (V2X). To meet the aspects of V2V and V2I connectivity, we developed modules in accordance with international standards with respect to On-Board Diagnostics II (OBDII) and 4G Long Term Evolution (4G-LTE) to obtain and transmit vehicle information. We also developed software to visually check information provided by our modules. Information related to a user’s driving, which is transmitted to a cloud-based Distributed File System (DFS), was then analyzed for the purpose of big data analysis to provide information on driving habits to users. Yet, since this work is an ongoing research project, we focus on proposing an idea of system architecture and design in terms of big data analysis. Therefore, our contributions through this work are as follows: (1) Develop modules based on Controller Area Network (CAN) bus, OBDII, and 4G-LTE; (2) Develop software to check vehicle information on a PC; (3) Implement a database related to vehicle diagnostic codes; (4) Propose system architecture and design for big data analysis.", "title": "" }, { "docid": "neg:1840260_18", "text": "Inter-cell interference is the main obstacle for increasing the network capacity of Long-Term Evolution Advanced (LTE-A) system. Interference Cancellation (IC) is a promising way to improve spectral efficiency. 3rd Generation Partnership Project (3GPP) raises a new research project — Network-Assisted Interference Cancellation and Suppression (NAICS) in LTE Rel-12. Advanced receivers used in NAICS include maximum likelihood (ML) receiver and symbol-level IC (SLIC) receiver. Those receivers need some interference parameters, such as rank indicator (RI), precoding matrix indicator (PMI) and modulation level (MOD). This paper presents a new IC receiver based on detection. We get the clean interfering signal assisted by detection and use it in SLIC. The clean interfering signal makes the estimation of interfering transmitted signal more accurate. So interference cancellation would be more significant. We also improve the method of interference parameter estimation that avoids estimating power boosting and precoding matrix simultaneously. The simulation results show that the performance of proposed SLIC is better than traditional SLIC and close to ML.", "title": "" }, { "docid": "neg:1840260_19", "text": "This paper summarizes the effect of age, gender and race on Electrocardiographic parameters. The conduction system and heart muscles get degenerative changes with advancing age so these parameters get changed. The ECG parameters also changes under certain diseases. Then it is essential to know the normal limits of these parameters for diagnostic purpose under the influence of age, gender and race. The automated ECG analysis systems require normal limits of these parameters. The age and gender of the population clearly influencing the normal limits of ECG parameters. 
However, the investigation of the effect of Body Mass Index on a cross-section of the population is further warranted.", "title": "" } ]
1840261
Self-Serving Justifications: Doing Wrong and Feeling Moral
[ { "docid": "pos:1840261_0", "text": "Three experiments supported the hypothesis that people are more willing to express attitudes that could be viewed as prejudiced when their past behavior has established their credentials as nonprejudiced persons. In Study 1, participants given the opportunity to disagree with blatantly sexist statements were later more willing to favor a man for a stereotypically male job. In Study 2, participants who first had the opportunity to select a member of a stereotyped group (a woman or an African American) for a category-neutral job were more likely to reject a member of that group for a job stereotypically suited for majority members. In Study 3, participants who had established credentials as nonprejudiced persons revealed a greater willingness to express a politically incorrect opinion even when the audience was unaware of their credentials. The general conditions under which people feel licensed to act on illicit motives are discussed.", "title": "" } ]
[ { "docid": "neg:1840261_0", "text": "Identifying a patient’s important medical problems requires broad and deep medical expertise, as well as significant time to gather all the relevant facts from the patient’s medical record and assess the clinical importance of the facts in reaching the final conclusion. A patient’s medical problem list is by far the most critical information that a physician uses in treatment and care of a patient. In spite of its critical role, its curation, manual or automated, has been an unmet need in clinical practice. We developed a machine learning technique in IBM Watson to automatically generate a patient’s medical problem list. The machine learning model uses lexical and medical features extracted from a patient’s record using NLP techniques. We show that the automated method achieves 70% recall and 67% precision based on the gold standard that medical experts created on a set of deidentified patient records from a major hospital system in the US. To the best of our knowledge this is the first successful machine learning/NLP method of extracting an open-ended patient’s medical problems from an Electronic Medical Record (EMR). This paper also contributes a methodology for assessing accuracy of a medical problem list generation technique.", "title": "" }, { "docid": "neg:1840261_1", "text": "In many natural languages, there are clear syntactic and/or intonational differences between declarative sentences, which are primarily used to provide information, and interrogative sentences, which are primarily used to request information. Most logical frameworks restrict their attention to the former. Those that are concerned with both usually assume a logical language that makes a clear syntactic distinction between declaratives and interrogatives, and usually assign different types of semantic values to these two types of sentences. A different approach has been taken in recent work on inquisitive semantics. This approach does not take the basic syntactic distinction between declaratives and interrogatives as its starting point, but rather a new notion of meaning that captures both informative and inquisitive content in an integrated way. The standard way to treat the logical connectives in this approach is to associate them with the basic algebraic operations on these new types of meanings. For instance, conjunction and disjunction are treated as meet and join operators, just as in classical logic. This gives rise to a hybrid system, where sentences can be both informative and inquisitive at the same time, and there is no clearcut division between declaratives and interrogatives. It may seem that these two general approaches in the existing literature are quite incompatible. The main aim of this paper is to show that this is not the case. We develop an inquisitive semantics for a logical language that has a clearcut division between declaratives and interrogatives. We show that this language coincides in expressive power with the hybrid language that is standardly assumed in inquisitive semantics, we establish a sound and complete axiomatization for the associated logic, and we consider a natural enrichment of the system with presuppositional interrogatives.", "title": "" }, { "docid": "neg:1840261_2", "text": "The hybrid runtime (HRT) model offers a plausible path towards high performance and efficiency. 
By integrating the OS kernel, parallel runtime, and application, an HRT allows the runtime developer to leverage the full privileged feature set of the hardware and specialize OS services to the runtime's needs. However, conforming to the HRT model currently requires a complete port of the runtime and application to the kernel level, for example to our Nautilus kernel framework, and this requires knowledge of kernel internals. In response, we developed Multiverse, a system that bridges the gap between a built-from-scratch HRT and a legacy runtime system. Multiverse allows existing, unmodified applications and runtimes to be brought into the HRT model without any porting effort whatsoever. Developers simply recompile their package with our compiler toolchain, and Multiverse automatically splits the execution of the application between the domains of a legacy OS and an HRT environment. To the user, the package appears to run as usual on Linux, but the bulk of it now runs as a kernel. The developer can then incrementally extend the runtime and application to take advantage of the HRT model. We describe the design and implementation of Multiverse, and illustrate its capabilities using the Racket runtime system.", "title": "" }, { "docid": "neg:1840261_3", "text": "We constructed a face database PF01(Postech Faces '01). PF01 contains the true-color face images of 103 people, 53 men and 50 women, representing 17 various images (1 normal face, 4 illumination variations, 8 pose variations, 4 expression variations) per person. All of the people in the database are Asians. There are three kinds of systematic variations, such as illumination, pose, and expression variations in the database. The database is expected to be used to evaluate the technology of face recognition for Asian people or for people with systematic variations.", "title": "" }, { "docid": "neg:1840261_4", "text": "Sentiment Analysis is new way of machine learning to extract opinion orientation (positive, negative, neutral) from a text segment written for any product, organization, person or any other entity. Sentiment Analysis can be used to predict the mood of people that have impact on stock prices, therefore it can help in prediction of actual stock movement. In order to exploit the benefits of sentiment analysis in stock market industry we have performed sentiment analysis on tweets related to Apple products, which are extracted from StockTwits (a social networking site) from 2010 to 2017. Along with tweets, we have also used market index data which is extracted from Yahoo Finance for the same period. The sentiment score of a tweet is calculated by sentiment analysis of tweets through SVM. As a result each tweet is categorized as bullish or bearish. Then sentiment score and market data is used to build a SVM model to predict next day's stock movement. Results show that there is positive relation between people opinion and market data and proposed work has an accuracy of 76.65% in stock prediction.", "title": "" }, { "docid": "neg:1840261_5", "text": "In this work, glass fiber reinforced epoxy composites were fabricated. Epoxy resin was used as polymer matrix material and glass fiber was used as reinforcing material. The main focus of this work was to fabricate this composite material by the cheapest and easiest way. For this, hand layup method was used to fabricate glass fiber reinforced epoxy resin composites and TiO2 material was used as filler material. 
Six types of compositions were made with and without filler material keeping the glass fiber constant and changing the epoxy resin with respect to filler material addition. Mechanical properties such as tensile, impact, hardness, compression and flexural properties were investigated. Additionally, microscopic analysis was done. The experimental investigations show that without filler material the composites exhibit overall lower value in mechanical properties than with addition of filler material in the composites. The results also show that addition of filler material increases the mechanical properties but highest values were obtained for different filler material addition. From the obtained results, it was observed that composites filled by 15wt% of TiO2 particulate exhibited maximum tensile strength, 20wt% of TiO2 particulate exhibited maximum impact strength, 25wt% of TiO2 particulate exhibited maximum hardness value, 25wt% of TiO2 particulate exhibited maximum compressive strength, 20wt% of TiO2 particulate exhibited maximum flexural strength.", "title": "" }, { "docid": "neg:1840261_6", "text": "A classical issue in many applied fields is to obtain an approximating surface to a given set of data points. This problem arises in Computer-Aided Design and Manufacturing (CAD/CAM), virtual reality, medical imaging, computer graphics, computer animation, and many others. Very often, the preferred approximating surface is polynomial, usually described in parametric form. This leads to the problem of determining suitable parametric values for the data points, the so-called surface parameterization. In real-world settings, data points are generally irregularly sampled and subjected to measurement noise, leading to a very difficult nonlinear continuous optimization problem, unsolvable with standard optimization techniques. This paper solves the parameterization problem for polynomial Bézier surfaces by applying the firefly algorithm, a powerful nature-inspired metaheuristic algorithm introduced recently to address difficult optimization problems. The method has been successfully applied to some illustrative examples of open and closed surfaces, including shapes with singularities. Our results show that the method performs very well, being able to yield the best approximating surface with a high degree of accuracy.", "title": "" }, { "docid": "neg:1840261_7", "text": "Estimation, recognition, and near-future prediction of 3D trajectories based on their two dimensional projections available from one camera source is an exceptionally difficult problem due to uncertainty in the trajectories and environment, high dimensionality of the specific trajectory states, lack of enough labeled data and so on. In this article, we propose a solution to solve this problem based on a novel deep learning model dubbed disjunctive factored four-way conditional restricted Boltzmann machine (DFFW-CRBM). Our method improves state-of-the-art deep learning techniques for high dimensional time-series modeling by introducing a novel tensor factorization capable of driving forth order Boltzmann machines to considerably lower energy levels, at no computational costs. DFFW-CRBMs are capable of accurately estimating, recognizing, and performing near-future prediction of three-dimensional trajectories from their 2D projections while requiring limited amount of labeled data. 
We evaluate our method on both simulated and real-world data, showing its effectiveness in predicting and classifying complex ball trajectories and human activities.", "title": "" }, { "docid": "neg:1840261_8", "text": "Recent advances in combining deep neural network architectures with reinforcement learning techniques have shown promising potential results in solving complex control problems with high dimensional state and action spaces. Inspired by these successes, in this paper, we build two kinds of reinforcement learning algorithms: deep policy-gradient and value-function based agents which can predict the best possible traffic signal for a traffic intersection. At each time step, these adaptive traffic light control agents receive a snapshot of the current state of a graphical traffic simulator and produce control signals. The policy-gradient based agent maps its observation directly to the control signal, however the value-function based agent first estimates values for all legal control signals. The agent then selects the optimal control action with the highest value. Our methods show promising results in a traffic network simulated in the SUMO traffic simulator, without suffering from instability issues during the training process.", "title": "" }, { "docid": "neg:1840261_9", "text": "For the first time in history, it is possible to study human behavior on great scale and in fine detail simultaneously. Online services and ubiquitous computational devices, such as smartphones and modern cars, record our everyday activity. The resulting Big Data offers unprecedented opportunities for tracking and analyzing behavior. This paper hypothesizes the applicability and impact of Big Data technologies in the context of psychometrics both for research and clinical applications. It first outlines the state of the art, including the severe shortcomings with respect to quality and quantity of the resulting data. It then presents a technological vision, comprised of (i) numerous data sources such as mobile devices and sensors, (ii) a central data store, and (iii) an analytical platform, employing techniques from data mining and machine learning. To further illustrate the dramatic benefits of the proposed methodologies, the paper then outlines two current projects, logging and analyzing smartphone usage. One such study attempts to thereby quantify severity of major depression dynamically; the other investigates (mobile) Internet Addiction. Finally, the paper addresses some of the ethical issues inherent to Big Data technologies. In summary, the proposed approach is about to induce the single biggest methodological shift since the beginning of psychology or psychiatry. The resulting range of applications will dramatically shape the daily routines of researches and medical practitioners alike. Indeed, transferring techniques from computer science to psychiatry and psychology is about to establish Psycho-Informatics, an entire research direction of its own.", "title": "" }, { "docid": "neg:1840261_10", "text": "Of current interest are the causal attributions offered by depressives for the good and bad events in their lives. One important attributional account of depression is the reformulated learned helplessness model, which proposes that depressive symptoms are associated with an attributional style in which uncontrollable bad events are attributed to internal (versus external), stable (versus unstable), and global (versus specific) causes. 
We describe the Attributional Style Questionnaire, which measures individual differences in the use of these attributional dimensions. We report means, reliabilities, intercorrelations, and test-retest stabilities for a sample of 130 undergraduates. Evidence for the questionnaire's validity is discussed. The Attributional Style Questionnaire promises to be a reliable and valid instrument.", "title": "" }, { "docid": "neg:1840261_11", "text": "The paper presents a new generation of torque-controlled li ghtweight robots (LWR) developed at the Institute of Robotics and Mechatronics of the German Aerospace Center . I order to act in unstructured environments and interact with humans, the robots have design features an d co trol/software functionalities which distinguish them from classical robots, such as: load-to-weight ratio o f 1:1, torque sensing in the joints, active vibration damping, sensitive collision detection, as well as complia nt control on joint and Cartesian level. Due to the partially unknown properties of the environment, robustne s of planning and control with respect to environmental variations is crucial. After briefly describing the main har dware features, the paper focuses on showing how joint torque sensing (as a main feature of the robot) is conse quently used for achieving the above mentioned performance, safety, and robustness properties.", "title": "" }, { "docid": "neg:1840261_12", "text": "X-ray images are the essential aiding means all along in clinical diagnosis of fracture. So the processing and analysis of X-ray fracture images is particularly important. Extracting the features of X-ray images is a very important process in classifying fracture images according to the principle of AO classification of fractures. A proposed algorithm is used in this paper. First, use marker-controlled watershed transform based on gradient and homotopy modification to segment X-ray fracture images. Then the features consisted of region number, region area, region centroid and protuberant polygon of fracture image are extracted by marker processing and regionprops function. Next we use Hough transform to detect and extract lines in the protuberant polygon of X-ray fracture image. The lines are consisted of fracture line and parallel lines of centerline. Through the parallel lines of centerline, we obtain centerline over centroid and perpendicular line of centerline over centroid. Finally compute the angle between fracture line and perpendicular line of centerline. This angle can be used to classify femur backbone fracture.", "title": "" }, { "docid": "neg:1840261_13", "text": "Support Vector Machines (SVMs) have successfully shown efficiencies in many areas such as text categorization. Although recommendation systems share many similarities with text categorization, the performance of SVMs in recommendation systems is not acceptable due to the sparsity of the user-item matrix. In this paper, we propose a heuristic method to improve the predictive accuracy of SVMs by repeatedly correcting the missing values in the user-item matrix. The performance comparison to other algorithms has been conducted. The experimental studies show that the accurate rates of our heuristic method are the highest.", "title": "" }, { "docid": "neg:1840261_14", "text": "We develop a consumer-level model of vehicle choice to investigate the reasons behind the erosion of the U.S. automobile manufacturers’ market share during the past decade. 
Our model accounts for the influence of vehicle attributes, brand loyalty, product line characteristics, and dealerships on choice. We find that nearly all of the loss in market share for U.S. manufacturers can be explained by changes in the basic attributes of a vehicle: price, size, power, operating cost, and body type. During the past decade, U.S. manufacturers have improved their vehicles’ attributes but not as much as Japanese and European manufacturers have improved the attributes of their vehicles.", "title": "" }, { "docid": "neg:1840261_15", "text": "Modern PWM inverter output voltage has high dv/dt, which causes problems such as voltage doubling that can lead to insulation failure, ground currents that results in electromagnetic interference concerns. The IGBT switching device used in such inverter are becoming faster, exacerbating these problems. This paper proposes a new procedure for designing the LC clamp filter. The filter increases the rise time of the output voltage of inverter, resulting in smaller dv/dt. In addition suitable selection of resonance frequency gives LCL filter configuration with improved attenuation. By adding this filter at output terminal of inverter which uses long cable, voltage doubling effect is reduced at the motor terminal. The design procedure is carried out in terms of the power converter based per unit scheme. This generalizes the design procedure to a wide range of power level and to study optimum designs. The effectiveness of the design is verified by computer simulation and experimental measurements.", "title": "" }, { "docid": "neg:1840261_16", "text": "We present a pattern recognition framework for semantic segmentation of visual structures, that is, multi-class labelling at pixel level, and apply it to the task of segmenting organs in the eviscerated viscera from slaughtered poultry in RGB-D images. This is a step towards replacing the current strenuous manual inspection at poultry processing plants. Features are extracted from feature maps such as activation maps from a convolutional neural network (CNN). A random forest classifier assigns class probabilities, which are further refined by utilizing context in a conditional random field. The presented method is compatible with both 2D and 3D features, which allows us to explore the value of adding 3D and CNN-derived features. The dataset consists of 604 RGB-D images showing 151 unique sets of eviscerated viscera from four different perspectives. A mean Jaccard index of 78.11 % is achieved across the four classes of organs by using features derived from 2D, 3D and a CNN, compared to 74.28 % using only basic 2D image features.", "title": "" }, { "docid": "neg:1840261_17", "text": "Demand for high-speed DRAM in graphics application pushes a single-ended I/O signaling to operate up to 6Gb/s. To maintain the speed increase, the GDDR5 specification shifts from GDDR3/4 with respect to forwarded clocking, data training for write and read de-skewing, clock training, channel-error detection, bank group and data coding. This work tackles challenges in GDDR5 such as clock jitter and signal integrity.", "title": "" }, { "docid": "neg:1840261_18", "text": "Acute myeloid leukemia (AML) is the most common type of acute leukemia in adults. AML is a heterogeneous malignancy characterized by distinct genetic and epigenetic abnormalities. Recent genome-wide DNA methylation studies have highlighted an important role of dysregulated methylation signature in AML from biological and clinical standpoint. 
In this review, we will outline the recent advances in the methylome study of AML and overview the impacts of DNA methylation on AML diagnosis, treatment, and prognosis.", "title": "" }, { "docid": "neg:1840261_19", "text": "Given a text description, most existing semantic parsers synthesize a program in one shot. However, it is quite challenging to produce a correct program solely based on the description, which in reality is often ambiguous or incomplete. In this paper, we investigate interactive semantic parsing, where the agent can ask the user clarification questions to resolve ambiguities via a multi-turn dialogue, on an important type of programs called “If-Then recipes.” We develop a hierarchical reinforcement learning (HRL) based agent that significantly improves the parsing performance with minimal questions to the user. Results under both simulation and human evaluation show that our agent substantially outperforms non-interactive semantic parsers and rule-based agents.", "title": "" } ]
1840262
Microstrip high-pass filter with attenuation poles using cross-coupling
[ { "docid": "pos:1840262_0", "text": "A method to design low-pass filters (LPF) having a defected ground structure (DGS) and broadened transmission-line elements is proposed. The previously presented technique for obtaining a three-stage LPF using DGS by Lim et al. is generalized to propose a method that can be applied in design N-pole LPFs for N/spl les/5. As an example, a five-pole LPF having a DGS is designed and measured. Accurate curve-fitting results and the successive design process to determine the required size of the DGS corresponding to the LPF prototype elements are described. The proposed LPF having a DGS, called a DGS-LPF, includes transmission-line elements with very low impedance instead of open stubs in realizing the required shunt capacitance. Therefore, open stubs, teeor cross-junction elements, and high-impedance line sections are not required for the proposed LPF, while they all have been essential in conventional LPFs. Due to the widely broadened transmission-line elements, the size of the DGS-LPF is compact.", "title": "" } ]
[ { "docid": "neg:1840262_0", "text": "to name a few. Because of its importance to the study of emotion, a number of observer-based systems of facial expression measurement have been developed Using FACS and viewing video-recorded facial behavior at frame rate and slow motion, coders can manually code nearly all possible facial expressions, which are decomposed into action units (AUs). Action units, with some qualifications , are the smallest visually discriminable facial movements. By comparison, other systems are less thorough (Malatesta et al., 1989), fail to differentiate between some anatomically distinct movements (Oster, Hegley, & Nagel, 1992), consider movements that are not anatomically distinct as separable (Oster et al., 1992), and often assume a one-to-one mapping between facial expression and emotion (for a review of these systems, see Cohn & Ekman, in press). Unlike systems that use emotion labels to describe expression , FACS explicitly distinguishes between facial actions and inferences about what they mean. FACS itself is descriptive and includes no emotion-specified descriptors. Hypotheses and inferences about the emotional meaning of facial actions are extrinsic to FACS. If one wishes to make emotion based inferences from FACS codes, a variety of related resources exist. These include the FACS Investigators' Guide These resources suggest combination rules for defining emotion-specified expressions from FACS action units, but this inferential step remains extrinsic to FACS. Because of its descriptive power, FACS is regarded by many as the standard measure for facial behavior and is used widely in diverse fields. Beyond emotion science, these include facial neuromuscular disorders", "title": "" }, { "docid": "neg:1840262_1", "text": "This article describes a number of log-linear parsing models for an automatically extracted lexicalized grammar. The models are full parsing models in the sense that probabilities are defined for complete parses, rather than for independent events derived by decomposing the parse tree. Discriminative training is used to estimate the models, which requires incorrect parses for each sentence in the training data as well as the correct parse. The lexicalized grammar formalism used is Combinatory Categorial Grammar (CCG), and the grammar is automatically extracted from CCGbank, a CCG version of the Penn Treebank. The combination of discriminative training and an automatically extracted grammar leads to a significant memory requirement (up to 25 GB), which is satisfied using a parallel implementation of the BFGS optimization algorithm running on a Beowulf cluster. Dynamic programming over a packed chart, in combination with the parallel implementation, allows us to solve one of the largest-scale estimation problems in the statistical parsing literature in under three hours. A key component of the parsing system, for both training and testing, is a Maximum Entropy supertagger which assigns CCG lexical categories to words in a sentence. The supertagger makes the discriminative training feasible, and also leads to a highly efficient parser. Surprisingly, given CCG's spurious ambiguity, the parsing speeds are significantly higher than those reported for comparable parsers in the literature. We also extend the existing parsing techniques for CCG by developing a new model and efficient parsing algorithm which exploits all derivations, including CCG's nonstandard derivations. 
This model and parsing algorithm, when combined with normal-form constraints, give state-of-the-art accuracy for the recovery of predicate-argument dependencies from CCGbank. The parser is also evaluated on DepBank and compared against the RASP parser, outperforming RASP overall and on the majority of relation types. The evaluation on DepBank raises a number of issues regarding parser evaluation. This article provides a comprehensive blueprint for building a wide-coverage CCG parser. We demonstrate that both accurate and highly efficient parsing is possible with CCG.", "title": "" }, { "docid": "neg:1840262_2", "text": "High parallel framework has been proved to be very suitable for graph processing. There are various work to optimize the implementation in FPGAs, a pipeline parallel device. The key to make use of the parallel performance of FPGAs is to process graph data in pipeline model and take advantage of on-chip memory to realize necessary locality process. This paper proposes a modularize graph processing framework, which focus on the whole executing procedure with the extremely different degree of parallelism. The framework has three contributions. First, the combination of vertex-centric and edge-centric processing framework can been adjusting in the executing procedure to accommodate top-down algorithm and bottom-up algorithm. Second, owing to the pipeline parallel and finite on-chip memory accelerator, the novel edge-block, a block consist of edges vertex, achieve optimizing the way to utilize the on-chip memory to group the edges and stream the edges in a block to realize the stream pattern to pipeline parallel processing. Third, depending to the analysis of the block structure of nature graph and the executing characteristics during graph processing, we design a novel conversion dispatcher to change processing module, to match the corresponding exchange point. Our evaluation with four graph applications on five diverse scale graph shows that .", "title": "" }, { "docid": "neg:1840262_3", "text": "Many digital images contain blurred regions which are caused by motion or defocus. Automatic detection and classification of blurred image regions are very important for different multimedia analyzing tasks. This paper presents a simple and effective automatic image blurred region detection and classification technique. In the proposed technique, blurred image regions are first detected by examining singular value information for each image pixels. The blur types (i.e. motion blur or defocus blur) are then determined based on certain alpha channel constraint that requires neither image deblurring nor blur kernel estimation. Extensive experiments have been conducted over a dataset that consists of 200 blurred image regions and 200 image regions with no blur that are extracted from 100 digital images. Experimental results show that the proposed technique detects and classifies the two types of image blurs accurately. The proposed technique can be used in many different multimedia analysis applications such as image segmentation, depth estimation and information retrieval.", "title": "" }, { "docid": "neg:1840262_4", "text": "This paper proposes a novel attention model for semantic segmentation, which aggregates multi-scale and context features to refine prediction. Specifically, the skeleton convolutional neural network framework takes in multiple different scales inputs, by which means the CNN can get representations in different scales. 
The proposed attention model will handle the features from different scale streams respectively and integrate them. Then location attention branch of the model learns to softly weight the multi-scale features at each pixel location. Moreover, we add an recalibrating branch, parallel to where location attention comes out, to recalibrate the score map per class. We achieve quite competitive results on PASCAL VOC 2012 and ADE20K datasets, which surpass baseline and related works.", "title": "" }, { "docid": "neg:1840262_5", "text": "We used single-pulse transcranial magnetic stimulation of the left primary hand motor cortex and motor evoked potentials of the contralateral right abductor pollicis brevis to probe motor cortex excitability during a standard mental rotation task. Based on previous findings we tested the following hypotheses. (i) Is the hand motor cortex activated more strongly during mental rotation than during reading aloud or reading silently? The latter tasks have been shown to increase motor cortex excitability substantially in recent studies. (ii) Is the recruitment of the motor cortex for mental rotation specific for the judgement of rotated but not for nonrotated Shepard & Metzler figures? Surprisingly, motor cortex activation was higher during mental rotation than during verbal tasks. Moreover, we found strong motor cortex excitability during the mental rotation task but significantly weaker excitability during judgements of nonrotated figures. Hence, this study shows that the primary hand motor area is generally involved in mental rotation processes. These findings are discussed in the context of current theories of mental rotation, and a likely mechanism for the global excitability increase in the primary motor cortex during mental rotation is proposed.", "title": "" }, { "docid": "neg:1840262_6", "text": "Several messages express opinions about events, products, and services, political views or even their author's emotional state and mood. Sentiment analysis has been used in several applications including analysis of the repercussions of events in social networks, analysis of opinions about products and services, and simply to better understand aspects of social communication in Online Social Networks (OSNs). There are multiple methods for measuring sentiments, including lexical-based approaches and supervised machine learning methods. Despite the wide use and popularity of some methods, it is unclear which method is better for identifying the polarity (i.e., positive or negative) of a message as the current literature does not provide a method of comparison among existing methods. Such a comparison is crucial for understanding the potential limitations, advantages, and disadvantages of popular methods in analyzing the content of OSNs messages. Our study aims at filling this gap by presenting comparisons of eight popular sentiment analysis methods in terms of coverage (i.e., the fraction of messages whose sentiment is identified) and agreement (i.e., the fraction of identified sentiments that are in tune with ground truth). We develop a new method that combines existing approaches, providing the best coverage results and competitive agreement. 
We also present a free Web service called iFeel, which provides an open API for accessing and comparing results across different sentiment methods for a given text.", "title": "" }, { "docid": "neg:1840262_7", "text": "Data mining approach can be used to discover knowledge by analyzing the patterns or correlations among of fields in large databases. Data mining approach was used to find the patterns of the data from Tanzania Ministry of Water. It is used to predict current and future status of water pumps in Tanzania. The data mining method proposed is XGBoost (eXtreme Gradient Boosting). XGBoost implement the concept of Gradient Tree Boosting which designed to be highly fast, accurate, efficient, flexible, and portable. In addition, Recursive Feature Elimination (RFE) is also proposed to select the important features of the data to obtain an accurate model. The best accuracy achieved with using 27 input factors selected by RFE and XGBoost as a learning model. The achieved result show 80.38% in accuracy. The information or knowledge which is discovered from data mining approach can be used by the government to improve the inspection planning, maintenance, and identify which factor that can cause damage to the water pumps to ensure the availability of potable water in Tanzania. Using data mining approach is cost-effective, less time consuming and faster than manual inspection.", "title": "" }, { "docid": "neg:1840262_8", "text": "Developer support forums are becoming more popular than ever. Crowdsourced knowledge is an essential resource for many developers yet it can raise concerns about the quality of the shared content. Most existing research efforts address the quality of answers posted by Q&A community members. In this paper, we explore the quality of questions and propose a method of predicting the score of questions on Stack Overflow based on sixteen factors related to questions' format, content and interactions that occur in the post. We performed an extensive investigation to understand the relationship between the factors and the scores of questions. The multiple regression analysis shows that the question's length of the code, accepted answer score, number of tags and the count of views, comments and answers are statistically significantly associated with the scores of questions. Our findings can offer insights to community-based Q&A sites for improving the content of the shared knowledge.", "title": "" }, { "docid": "neg:1840262_9", "text": "Automated estimation of the allocation of a driver's visual attention could be a critical component of future advanced driver assistance systems. In theory, vision-based tracking of the eye can provide a good estimate of gaze location. But in practice, eye tracking from video is challenging because of sunglasses, eyeglass reflections, lighting conditions, occlusions, motion blur, and other factors. Estimation of head pose, on the other hand, is robust to many of these effects but can't provide as fine-grained of a resolution in localizing the gaze. For the purpose of keeping the driver safe, it's sufficient to partition gaze into regions. In this effort, a proposed system extracts facial features and classifies their spatial configuration into six regions in real time. 
The proposed method achieves an average accuracy of 91.4 percent at an average decision rate of 11 Hz on a dataset of 50 drivers from an on-road study.", "title": "" }, { "docid": "neg:1840262_10", "text": "This paper is concerned with the problem of estimating the relative translation and orientation of an inertial measurement unit and a camera, which are rigidly connected. The key is to realize that this problem is in fact an instance of a standard problem within the area of system identification, referred to as a gray-box problem. We propose a new algorithm for estimating the relative translation and orientation, which does not require any additional hardware, except a piece of paper with a checkerboard pattern on it. The method is based on a physical model which can also be used in solving for example sensor fusion problems. The experimental results show that the method works well in practice, both for perspective and spherical cameras.", "title": "" }, { "docid": "neg:1840262_11", "text": "Over the past decade, the advent of new technology has brought about the emergence of smart cities aiming to provide their stakeholders with technology-based solutions that are effective and efficient. Insofar as the objective of smart cities is to improve outcomes that are connected to people, systems and processes of businesses, government and other publicand private-sector entities, its main goal is to improve the quality of life of all residents. Accordingly, smart tourism has emerged over the past few years as a subset of the smart city concept, aiming to provide tourists with solutions that address specific travel related needs. Dubai is an emerging tourism destination that has implemented smart city and smart tourism platforms to engage various stakeholders. The objective of this study is to identify best practices related to Dubai’s smart city and smart tourism. In so doing, Dubai’s mission and vision along with key dimensions and pillars are identified in relation to the advancements in the literature while highlighting key resources and challenges. A Smart Tourism Dynamic Responsive System (STDRS) framework is proposed while suggesting how Dubai may able to enhance users’ involvement and their overall experience.", "title": "" }, { "docid": "neg:1840262_12", "text": "Neural language models (NLMs) have been able to improve machine translation (MT) thanks to their ability to generalize well to long contexts. Despite recent successes of deep neural networks in speech and vision, the general practice in MT is to incorporate NLMs with only one or two hidden layers and there have not been clear results on whether having more layers helps. In this paper, we demonstrate that deep NLMs with three or four layers outperform those with fewer layers in terms of both the perplexity and the translation quality. We combine various techniques to successfully train deep NLMs that jointly condition on both the source and target contexts. When reranking nbest lists of a strong web-forum baseline, our deep models yield an average boost of 0.5 TER / 0.5 BLEU points compared to using a shallow NLM. Additionally, we adapt our models to a new sms-chat domain and obtain a similar gain of 1.0 TER / 0.5 BLEU points.", "title": "" }, { "docid": "neg:1840262_13", "text": "There have been several attempts to create scalable and hardware independent software architectures for Unmanned Aerial Vehicles (UAV). 
In this work, we propose an onboard architecture for UAVs where hardware abstraction, data storage and communication between modules are efficiently maintained. All processing and software development is done on the UAV while state and mission status of the UAV is monitored from a ground station. The architecture also allows rapid development of mission-specific third party applications on the vehicle with the help of the core module.", "title": "" }, { "docid": "neg:1840262_14", "text": "This research provided the first empirical investigation of how approach and avoidance motives for sacrifice in intimate relationships are associated with personal well-being and relationship quality. In Study 1, the nature of everyday sacrifices made by dating partners was examined, and a measure of approach and avoidance motives for sacrifice was developed. In Study 2, which was a 2-week daily experience study of college students in dating relationships, specific predictions from the theoretical model were tested and both longitudinal and dyadic components were included. Whereas approach motives for sacrifice were positively associated with personal well-being and relationship quality, avoidance motives for sacrifice were negatively associated with personal well-being and relationship quality. Sacrificing for avoidance motives was particularly detrimental to the maintenance of relationships over time. Perceptions of a partner's motives for sacrifice were also associated with well-being and relationship quality. Implications for the conceptualization of relationship maintenance processes along these 2 dimensions are discussed.", "title": "" }, { "docid": "neg:1840262_15", "text": "Automated vehicles are complex systems with a high degree of interdependencies between its components. This complexity sets increasing demands for the underlying software framework. This paper firstly analyzes the requirements for software frameworks. Afterwards an overview on existing software frameworks, that have been used for automated driving projects, is provided with an in-depth introduction into an emerging open-source software framework, the Robot Operating System (ROS). After discussing the main features, advantages and disadvantages of ROS, the communication overhead of ROS is analyzed quantitatively in various configurations showing its applicability for systems with a high data load.", "title": "" }, { "docid": "neg:1840262_16", "text": "This work aims at constructing a semiotic framework for an expanded evolutionary synthesis grounded on Peirce's universal categories and the six space/time/function relations [Taborsky, E., 2004. The nature of the sign as a WFF--a well-formed formula, SEED J. (Semiosis Evol. Energy Dev.) 4 (4), 5-14] that integrate the Lamarckian (internal/external) and Darwinian (individual/population) cuts. According to these guide lines, it is proposed an attempt to formalize developmental systems theory by using the notion of evolving developing agents (EDA) that provides an internalist model of a general transformative tendency driven by organism's need to cope with environmental uncertainty. Development and evolution are conceived as non-programmed open-ended processes of information increase where EDA reach a functional compromise between: (a) increments of phenotype's uniqueness (stability and specificity) and (b) anticipation to environmental changes. 
Accordingly, changes in mutual information content between the phenotype/environment drag subsequent changes in mutual information content between genotype/phenotype and genotype/environment at two interwoven scales: individual life cycle (ontogeny) and species time (phylogeny), respectively. Developmental terminal additions along with increment minimization of developmental steps must be positively selected.", "title": "" }, { "docid": "neg:1840262_17", "text": "Urban growth is a worldwide phenomenon but the rate of urbanization is very fast in developing country like Egypt. It is mainly driven by unorganized expansion, increased immigration, rapidly increasing population. In this context, land use and land cover change are considered one of the central components in current strategies for managing natural resources and monitoring environmental changes. In Egypt, urban growth has brought serious losses of agricultural land and water bodies. Urban growth is responsible for a variety of urban environmental issues like decreased air quality, increased runoff and subsequent flooding, increased local temperature, deterioration of water quality, etc. Egypt possessed a number of fast growing cities. Mansoura and Talkha cities in Daqahlia governorate are expanding rapidly with varying growth rates and patterns. In this context, geospatial technologies and remote sensing methodology provide essential tools which can be applied in the analysis of land use change detection. This paper is an attempt to assess the land use change detection by using GIS in Mansoura and Talkha from 1985 to 2010. Change detection analysis shows that built-up area has been increased from 28 to 255 km by more than 30% and agricultural land reduced by 33%. Future prediction is done by using the Markov chain analysis. Information on urban growth, land use and land cover change study is very useful to local government and urban planners for the betterment of future plans of sustainable development of the city. 2015 The Gulf Organisation for Research and Development. Production and hosting by Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "neg:1840262_18", "text": "In order to establish low-cost and strongly-immersive desktop virtual experiment system, a solution based on Kinect and Unity3D engine technology was herein proposed, with a view to applying Kinect gesture recognition and triggering more spontaneous human-computer interactions in three-dimensional virtual environment. A kind of algorithm tailored to the detection of concave-convex points of fingers is put forward to identify various gestures and interaction semantics. In the context of Unity3D, Finite-State Machine (FSM) programming was applied in intelligent management for experimental logic tasks. A “Virtual Experiment System for Electrician Training” was designed and put into practice by these methods. The applications of “Lighting Circuit” module prove that these methods can be satisfyingly helpful to complete virtual experimental tasks and improve user experience. Compared with traditional WIMP interaction, Kinect somatosensory interaction is combined with Unity3D so that three-dimensional virtual system with strong immersion can be established.", "title": "" } ]
1840263
Mash: fast genome and metagenome distance estimation using MinHash
[ { "docid": "pos:1840263_0", "text": "The mathematical concept of document resemblance captures well the informal notion of syntactic similarity. The resemblance can be estimated using a fixed size “sketch” for each document. For a large collection of documents (say hundreds of millions) the size of this sketch is of the order of a few hundred bytes per document. However, for efficient large scale web indexing it is not necessary to determine the actual resemblance value: it suffices to determine whether newly encountered documents are duplicates or near-duplicates of documents already indexed. In other words, it suffices to determine whether the resemblance is above a certain threshold. In this talk we show how this determination can be made using a ”sample” of less than 50 bytes per document. The basic approach for computing resemblance has two aspects: first, resemblance is expressed as a set (of strings) intersection problem, and second, the relative size of intersections is evaluated by a process of random sampling that can be done independently for each document. The process of estimating the relative size of intersection of sets and the threshold test discussed above can be applied to arbitrary sets, and thus might be of independent interest. The algorithm for filtering near-duplicate documents discussed here has been successfully implemented and has been used for the last three years in the context of the AltaVista search engine.", "title": "" }, { "docid": "pos:1840263_1", "text": "Comparative genomic analyses of primates offer considerable potential to define and understand the processes that mold, shape, and transform the human genome. However, primate taxonomy is both complex and controversial, with marginal unifying consensus of the evolutionary hierarchy of extant primate species. Here we provide new genomic sequence (~8 Mb) from 186 primates representing 61 (~90%) of the described genera, and we include outgroup species from Dermoptera, Scandentia, and Lagomorpha. The resultant phylogeny is exceptionally robust and illuminates events in primate evolution from ancient to recent, clarifying numerous taxonomic controversies and providing new data on human evolution. Ongoing speciation, reticulate evolution, ancient relic lineages, unequal rates of evolution, and disparate distributions of insertions/deletions among the reconstructed primate lineages are uncovered. Our resolution of the primate phylogeny provides an essential evolutionary framework with far-reaching applications including: human selection and adaptation, global emergence of zoonotic diseases, mammalian comparative genomics, primate taxonomy, and conservation of endangered species.", "title": "" }, { "docid": "pos:1840263_2", "text": "Kraken is an ultrafast and highly accurate program for assigning taxonomic labels to metagenomic DNA sequences. Previous programs designed for this task have been relatively slow and computationally expensive, forcing researchers to use faster abundance estimation programs, which only classify small subsets of metagenomic data. Using exact alignment of k-mers, Kraken achieves classification accuracy comparable to the fastest BLAST program. In its fastest mode, Kraken classifies 100 base pair reads at a rate of over 4.1 million reads per minute, 909 times faster than Megablast and 11 times faster than the abundance estimation program MetaPhlAn. Kraken is available at http://ccb.jhu.edu/software/kraken/ .", "title": "" } ]
[ { "docid": "neg:1840263_0", "text": "This survey discusses how recent developments in multimodal processing facilitate conceptual grounding of language. We categorize the information flow in multimodal processing with respect to cognitive models of human information processing and analyze different methods for combining multimodal representations. Based on this methodological inventory, we discuss the benefit of multimodal grounding for a variety of language processing tasks and the challenges that arise. We particularly focus on multimodal grounding of verbs which play a crucial role for the compositional power of language. Title and Abstract in German Multimodale konzeptuelle Verankerung für die automatische Sprachverarbeitung Dieser Überblick erörtert, wie aktuelle Entwicklungen in der automatischen Verarbeitung multimodaler Inhalte die konzeptuelle Verankerung sprachlicher Inhalte erleichtern können. Die automatischen Methoden zur Verarbeitung multimodaler Inhalte werden zunächst hinsichtlich der zugrundeliegenden kognitiven Modelle menschlicher Informationsverarbeitung kategorisiert. Daraus ergeben sich verschiedene Methoden um Repräsentationen unterschiedlicher Modalitäten miteinander zu kombinieren. Ausgehend von diesen methodischen Grundlagen wird diskutiert, wie verschiedene Forschungsprobleme in der automatischen Sprachverarbeitung von multimodaler Verankerung profitieren können und welche Herausforderungen sich dabei ergeben. Ein besonderer Schwerpunkt wird dabei auf die multimodale konzeptuelle Verankerung von Verben gelegt, da diese eine wichtige kompositorische Funktion erfüllen.", "title": "" }, { "docid": "neg:1840263_1", "text": "This article provides a tutorial overview of cognitive architectures that can form a theoretical foundation for designing multimedia instruction. Cognitive architectures include a description of memory stores, memory codes, and cognitive operations. Architectures that are relevant to multimedia learning include Paivio’s dual coding theory, Baddeley’s working memory model, Engelkamp’s multimodal theory, Sweller’s cognitive load theory, Mayer’s multimedia learning theory, and Nathan’s ANIMATE theory. The discussion emphasizes the interplay between traditional research studies and instructional applications of this research for increasing recall, reducing interference, minimizing cognitive load, and enhancing understanding. Tentative conclusions are that (a) there is general agreement among the different architectures, which differ in focus; (b) learners’ integration of multiple codes is underspecified in the models; (c) animated instruction is not required when mental simulations are sufficient; (d) actions must be meaningful to be successful; and (e) multimodal instruction is superior to targeting modality-specific individual differences.", "title": "" }, { "docid": "neg:1840263_2", "text": "We describe a graphical representation of probabilistic relationships-an alternative to the Bayesian network-called a dependency network. Like a Bayesian network, a dependency network has a graph and a probability component. The graph component is a (cyclic) directed graph such that a node's parents render that node independent of all other nodes in the network. The probability component consists of the probability of a node given its parents for each node (as in a Bayesian network). 
We identify several basic properties of this representation, and describe its use in collaborative filtering (the task of predicting preferences) and the visualization of predictive relationships.", "title": "" }, { "docid": "neg:1840263_3", "text": "We present a novel framework that enables efficient probabilistic inference in large-scale scientific models by allowing the execution of existing domain-specific simulators as probabilistic programs, resulting in highly interpretable posterior inference. Our framework is general purpose and scalable, and is based on a crossplatform probabilistic execution protocol through which an inference engine can control simulators in a language-agnostic way. We demonstrate the technique in particle physics, on a scientifically accurate simulation of the τ (tau) lepton decay, which is a key ingredient in establishing the properties of the Higgs boson. Highenergy physics has a rich set of simulators based on quantum field theory and the interaction of particles in matter. We show how to use probabilistic programming to perform Bayesian inference in these existing simulator codebases directly, in particular conditioning on observable outputs from a simulated particle detector to directly produce an interpretable posterior distribution over decay pathways. Inference efficiency is achieved via inference compilation where a deep recurrent neural network is trained to parameterize proposal distributions and control the stochastic simulator in a sequential importance sampling scheme, at a fraction of the computational cost of Markov chain Monte Carlo sampling.", "title": "" }, { "docid": "neg:1840263_4", "text": "As one of the most important mid-level features of music, chord contains rich information of harmonic structure that is useful for music information retrieval. In this paper, we present a chord recognition system based on the N-gram model. The system is time-efficient, and its accuracy is comparable to existing systems. We further propose a new method to construct chord features for music emotion classification and evaluate its performance on commercial song recordings. Experimental results demonstrate the advantage of using chord features for music classification and retrieval.", "title": "" }, { "docid": "neg:1840263_5", "text": "We define the object detection from imagery problem as estimating a very large but extremely sparse bounding box dependent probability distribution. Subsequently we identify a sparse distribution estimation scheme, Directed Sparse Sampling, and employ it in a single end-to-end CNN based detection model. This methodology extends and formalizes previous state-of-the-art detection models with an additional emphasis on high evaluation rates and reduced manual engineering. We introduce two novelties, a corner based region-of-interest estimator and a deconvolution based CNN model. The resulting model is scene adaptive, does not require manually defined reference bounding boxes and produces highly competitive results on MSCOCO, Pascal VOC 2007 and Pascal VOC 2012 with real-time evaluation rates. Further analysis suggests our model performs particularly well when finegrained object localization is desirable. We argue that this advantage stems from the significantly larger set of available regions-of-interest relative to other methods. 
Source-code is available from: https://github.com/lachlants/denet", "title": "" }, { "docid": "neg:1840263_6", "text": "We describe a Digital Advertising System Simulation (DASS) for modeling advertising and its impact on user behavior. DASS is both flexible and general, and can be applied to research on a wide range of topics, such as digital attribution, ad fatigue, campaign optimization, and marketing mix modeling. This paper introduces the basic DASS simulation framework and illustrates its application to digital attribution. We show that common position-based attribution models fail to capture the true causal effects of advertising across several simple scenarios. These results lay a groundwork for the evaluation of more complex attribution models, and the development of improved models.", "title": "" }, { "docid": "neg:1840263_7", "text": "This paper describes two Global Positioning System (GPS) based attitude determination algorithms which contain steps of integer ambiguity resolution and attitude computation. The first algorithm extends the ambiguity function method to account for the unique requirement of attitude determination. The second algorithm explores the artificial neural network approach to find the attitude. A test platform is set up for verifying these algorithms.", "title": "" }, { "docid": "neg:1840263_8", "text": "The article introduces a framework for users' design quality judgments based on Adaptive Decision Making theory. The framework describes judgment on quality attributes (usability, content/functionality, aesthetics, customisation and engagement) with dependencies on decision making arising from the user's background, task and context. The framework is tested and refined by three experimental studies. The first two assessed judgment of quality attributes of websites with similar content but radically different designs for aesthetics and engagement. Halo effects were demonstrated whereby attribution of good quality on one attribute positively influenced judgment on another, even in the face of objective evidence to the contrary (e.g., usability errors). Users' judgment was also shown to be susceptible to framing effects of the task and their background. These appear to change the importance order of the quality attributes; hence, quality assessment of a design appears to be very context dependent. The third study assessed the influence of customisation by experiments on mobile services applications, and demonstrated that evaluation of customisation depends on the users' needs and motivation. The results are discussed in the context of the literature on aesthetic judgment, user experience and trade-offs between usability and hedonic/ludic design qualities.", "title": "" }, { "docid": "neg:1840263_9", "text": "It is shown how Conceptual Graphs and Formal Concept Analysis may be combined to obtain a formalization of Elementary Logic which is useful for knowledge representation and processing. For this, a translation of conceptual graphs to formal contexts and concept lattices is described through an example. Using a suitable mathematization of conceptual graphs, basics of a uniied mathematical theory for Elementary Logic are proposed.", "title": "" }, { "docid": "neg:1840263_10", "text": "what design is from a theoretical point of view, which is a role of the descriptive model. However, descriptive models are not necessarily helpful in directly deriving either the architecture of intelligent CAD or the knowledge representation for intelligent CAD. 
For this purpose, we need a computable design process model that should coincide, at least to some extent, with a cognitive model that explains actual design activities. One of the major problems in developing so-called intelligent computer-aided design (CAD) systems (ten Hagen and Tomiyama 1987) is the representation of design knowledge, which is a two-part process: the representation of design objects and the representation of design processes. We believe that intelligent CAD systems will be fully realized only when these two types of representation are integrated. Progress has been made in the representation of design objects, as can be seen, for example, in geometric modeling; however, almost no significant results have been seen in the representation of design processes, which implies that we need a design theory to formalize them. According to Finger and Dixon (1989), design process models can be categorized into a descriptive model that explains how design is done, a cognitive model that explains the designer’s behavior, a prescriptive model that shows how design must be done, and a computable model that expresses a method by which a computer can accomplish a task. A design theory for intelligent CAD is not useful when it is merely descriptive or cognitive; it must also be computable. We need a general model of design Articles", "title": "" }, { "docid": "neg:1840263_11", "text": "Histone modifications and chromatin-associated protein complexes are crucially involved in the control of gene expression, supervising cell fate decisions and differentiation. Many promoters in embryonic stem (ES) cells harbor a distinctive histone modification signature that combines the activating histone H3 Lys 4 trimethylation (H3K4me3) mark and the repressive H3K27me3 mark. These bivalent domains are considered to poise expression of developmental genes, allowing timely activation while maintaining repression in the absence of differentiation signals. Recent advances shed light on the establishment and function of bivalent domains; however, their role in development remains controversial, not least because suitable genetic models to probe their function in developing organisms are missing. Here, we explore avenues to and from bivalency and propose that bivalent domains and associated chromatin-modifying complexes safeguard proper and robust differentiation.", "title": "" }, { "docid": "neg:1840263_12", "text": "Integrons can insert and excise antibiotic resistance genes on plasmids in bacteria by site-specific recombination. Class 1 integrons code for an integrase, IntI1 (337 amino acids in length), and are generally borne on elements derived from Tn5090, such as that found in the central part of Tn21. A second class of integron is found on transposon Tn7 and its relatives. We have completed the sequence of the Tn7 integrase gene, intI2, which contains an internal stop codon. This codon was found to be conserved among intI2 genes on three other Tn7-like transposons harboring different cassettes. The predicted peptide sequence (IntI2*) is 325 amino acids long and is 46% identical to IntI1. In order to detect recombination activity, the internal stop codon at position 179 in the parental allele was changed to a triplet coding for glutamic acid. 
The sequences flanking the cassette arrays in the class 1 and 2 integrons are not closely related, but a common pool of mobile cassettes is used by the different integron classes; two of the three antibiotic resistance cassettes on Tn7 and its close relatives are also found in various class 1 integrons. We also observed a fourth excisable cassette downstream of those described previously in Tn7. The fourth cassette encodes a 165-amino-acid protein of unknown function with 6.5 contiguous repeats of a sequence coding for 7 amino acids. IntI2*179E promoted site-specific excision of each of the cassettes in Tn7 at different frequencies. The integrases from Tn21 and Tn7 showed limited cross-specificity in that IntI1 could excise all cassettes from both Tn21 and Tn7. However, we did not observe a corresponding excision of the aadA1 cassette from Tn21 by IntI2*179E.", "title": "" }, { "docid": "neg:1840263_13", "text": "We examine the implications of shape on the process of finding dense correspondence and half-occlusions for a stereo pair of images. The desired property of the disparity map is that it should be a piecewise continuous function which is consistent with the images and which has the minimum number of discontinuities. To zeroth order, piecewise continuity becomes piecewise constancy. Using this approximation, we first discuss an approach for dealing with such a fronto-parallel shapeless world, and the problems involved therein. We then introduce horizontal and vertical slant to create a first order approximation to piecewise continuity. In particular, we emphasize the following geometric fact: a horizontally slanted surface (i.e., having depth variation in the direction of the separation of the two cameras) will appear horizontally stretched in one image as compared to the other image. Thus, while corresponding two images, N pixels on a scanline in one image may correspond to a different number of pixels M in the other image. This leads to three important modifications to existing stereo algorithms: (a) due to unequal sampling, existing intensity matching metrics must be modified, (b) unequal numbers of pixels in the two images must be allowed to correspond to each other, and (c) the uniqueness constraint, which is often used for detecting occlusions, must be changed to an interval uniqueness constraint. We also discuss the asymmetry between vertical and horizontal slant, and the central role of non-horizontal edges in the context of vertical slant. Using experiments, we discuss cases where existing algorithms fail, and how the incorporation of these new constraints provides correct results.", "title": "" }, { "docid": "neg:1840263_14", "text": "Author Co-citation Analysis (ACA) has long been used as an effective method for identifying the intellectual structure of a research domain, but it relies on simple co-citation counting, which does not take the citation content into consideration. The present study proposes a new method for measuring the similarity between co-cited authors by considering author's citation content. We collected the full-text journal articles in the information science domain and extracted the citing sentences to calculate their similarity distances. 
We compared our method with traditional ACA and found out that our approach, while displaying a similar intellectual structure for the information science domain as the other baseline methods, also provides more details about the sub-disciplines in the domain than with traditional ACA.", "title": "" }, { "docid": "neg:1840263_15", "text": "Glucose-responsive delivery of insulin mimicking the function of pancreatic β-cells to achieve meticulous control of blood glucose (BG) would revolutionize diabetes care. Here the authors report the development of a new glucose-responsive insulin delivery system based on the potential interaction between the glucose derivative-modified insulin (Glc-Insulin) and glucose transporters on erythrocytes (or red blood cells, RBCs) membrane. After being conjugated with the glucosamine, insulin can efficiently bind to RBC membranes. The binding is reversible in the setting of hyperglycemia, resulting in fast release of insulin and subsequent drop of BG level in vivo. The delivery vehicle can be further simplified utilizing injectable polymeric nanocarriers coated with RBC membrane and loaded with Glc-Insulin. The described work is the first demonstration of utilizing RBC membrane to achieve smart insulin delivery with fast responsiveness.", "title": "" }, { "docid": "neg:1840263_16", "text": "The security of embedded devices often relies on the secrecy of proprietary cryptographic algorithms. These algorithms and their weaknesses are frequently disclosed through reverse-engineering software, but it is commonly thought to be too expensive to reconstruct designs from a hardware implementation alone. This paper challenges that belief by presenting an approach to reverse-engineering a cipher from a silicon implementation. Using this mostly automated approach, we reveal a cipher from an RFID tag that is not known to have a software or micro-code implementation. We reconstruct the cipher from the widely used Mifare Classic RFID tag by using a combination of image analysis of circuits and protocol analysis. Our analysis reveals that the security of the tag is even below the level that its 48-bit key length suggests due to a number of design flaws. Weak random numbers and a weakness in the authentication protocol allow for pre-computed rainbow tables to be used to find any key in a matter of seconds. Our approach of deducing functionality from circuit images is mostly automated, hence it is also feasible for large chips. The assumption that algorithms can be kept secret should therefore to be avoided for any type of silicon chip. Il faut qu’il n’exige pas le secret, et qu’il puisse sans inconvénient tomber entre les mains de l’ennemi. ([A cipher] must not depend on secrecy, and it must not matter if it falls into enemy hands.) August Kerckhoffs, La Cryptographie Militaire, January 1883 [13]", "title": "" }, { "docid": "neg:1840263_17", "text": "A composite cavity-backed folded sectorial bowtie antenna (FSBA) is proposed and investigated in this paper, which is differentially fed by an SMA connector through a balun, i.e. a transition from a microstrip line to a parallel stripline. The composite cavity as a general case, consisting of a conical part and a cylindrical rim, can be tuned freely from a cylindrical to a cup-shaped one. Parametric studies are performed to optimize the antenna performance. 
Experimental results reveal that it can achieve an impedance bandwidth of 143% for SWR ≤ 2, a broadside gain of 8-15.3 dBi, and stable radiation pattern over the whole operating band. The total electrical dimensions are 0.66λm in diameter and 0.16λm in height, where λm is the free-space wavelength at lower edge of the operating frequency band. The problem about the distorted patterns in the upper frequency band for wideband cavity-backed antennas is solved in our work.", "title": "" }, { "docid": "neg:1840263_18", "text": "Intentional frequency perturbation by recently researched active islanding detection techniques for inverter based distributed generation (DG) define new threshold settings for the frequency relays. This innovation has enabled the modern frequency relays to operate inside the non-detection zone (NDZ) of the conventional frequency relays. However, the effect of such perturbation on the performance of the rate of change of frequency (ROCOF) relays has not been researched so far. This paper evaluates the performance of ROCOF relays under such perturbations for an inverter interfaced DG and proposes an algorithm along with the new threshold settings to enable it work under the NDZ. The proposed algorithm is able to differentiate between an islanding and a non-islanding event. The operating principle of relay is based on low frequency current injection through grid side voltage source converter (VSC) control of doubly fed induction generator (DFIG) and therefore, the relay is defined as “active ROCOF relay”. Simulations are done in MATLAB.", "title": "" }, { "docid": "neg:1840263_19", "text": "Fraudulent activities (e.g., suspicious credit card transaction, financial reporting fraud, and money laundering) are critical concerns to various entities including bank, insurance companies, and public service organizations. Typically, these activities lead to detrimental effects on the victims such as a financial loss. Over the years, fraud analysis techniques underwent a rigorous development. However, lately, the advent of Big data led to vigorous advancement of these techniques since Big Data resulted in extensive opportunities to combat financial frauds. Given that the massive amount of data that investigators need to sift through, massive volumes of data integrated from multiple heterogeneous sources (e.g., social media, blogs) to find fraudulent patterns is emerging as a feasible approach.", "title": "" } ]
1840264
A belief-desire-intention model for blog users' negative emotional norm compliance: Decision-making in crises
[ { "docid": "pos:1840264_0", "text": "Social Sharing of Emotion (SSE) occurs when one person shares an emotional experience with another and is considered potentially beneficial. Though social sharing has been shown prevalent in interpersonal communication, research on its occurrence and communication structure in online social networks is lacking. Based on a content analysis of blog posts (n = 540) in a blog social network site (Live Journal), we assess the occurrence of social sharing in blog posts, characterize different types of online SSE, and present a theoretical model of online SSE. A large proportion of initiation expressions were found to conform to full SSE, with negative emotion posts outnumbering bivalent and positive posts. Full emotional SSE posts were found to prevail, compared to partial feelings or situation posts. Furthermore, affective feedback predominated to cognitive and provided emotional support, empathy and admiration. The study found evidence that the process of social sharing occurs in Live Journal, replicating some features of face to face SSE. Instead of a superficial view of online social sharing, our results support a prosocial and beneficial character to online SSE. 2015 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "neg:1840264_0", "text": "Project portfolio management in relation to innovation has increasingly gained the attention of practitioners and academics during the last decade. While significant progress has been made in the pursuit of a process approach to achieve an effective project portfolio management, limited attention has been paid to the issue of how to integrate sustainability into innovation portfolio management decision making. The literature is lacking insights on how to manage the innovation project portfolio throughout the strategic analysis phase to the monitoring of the portfolio performance in relation to sustainability during the development phase of projects. This paper presents a 5step framework for integrating sustainability in the innovation project portfolio management process in the field of product development. The framework can be applied for the management of a portfolio of three project categories that involve breakthrough projects, platform projects and derivative projects. It is based on the assessment of various methods of project evaluation and selection, and a case analysis in the automotive industry. It enables the integration of the three dimensions of sustainability into the innovation project portfolio management process within firms. The three dimensions of sustainability involve ecological sustainability, social sustainability and economic sustainability. Another benefit is enhancing the ability of firms to achieve an effective balance of investment between the three dimensions of sustainability, taking the competitive approach of a firm toward the marketplace into account. 2014 Published by Elsevier B.V. * Corresponding author. Tel.: +31 6 12990878. E-mail addresses: jacques.brook@ig-nl.com (J.W. Brook), fabrizio.pagnanelli@vodafone.it (F. Pagnanelli). G Models ENGTEC-1407; No. of Pages 17 Please cite this article in press as: Brook, J.W., Pagnanelli, F., Integrating sustainability into innovation project portfolio management – A strategic perspective. J. Eng. Technol. Manage. (2014), http://dx.doi.org/10.1016/j.jengtecman.2013.11.004", "title": "" }, { "docid": "neg:1840264_1", "text": "We present a technique for automatically assigning a neuroanatomical label to each voxel in an MRI volume based on probabilistic information automatically estimated from a manually labeled training set. In contrast to existing segmentation procedures that only label a small number of tissue classes, the current method assigns one of 37 labels to each voxel, including left and right caudate, putamen, pallidum, thalamus, lateral ventricles, hippocampus, and amygdala. The classification technique employs a registration procedure that is robust to anatomical variability, including the ventricular enlargement typically associated with neurological diseases and aging. The technique is shown to be comparable in accuracy to manual labeling, and of sufficient sensitivity to robustly detect changes in the volume of noncortical structures that presage the onset of probable Alzheimer's disease.", "title": "" }, { "docid": "neg:1840264_2", "text": "1.1 UWB antennas in the field of high pulsed power For the last few years, the generation of high-power electromagnetic waves has been one of the major applications of high pulsed power (HPP). It has aroused great interest in the scientific community since it is at the origin of several technological advances. Several kinds of high power radiation sources have been created. 
There currently appears to be a strong inclination towards compact and autonomous sources of high power microwaves (HPM) (Cadilhon et al., 2010; Pécastaing et al., 2009). The systems discussed here always consist of an electrical high pulsed power generator combined with an antenna. The HPP generator consists of a primary energy source, a power-amplification system and a pulse forming stage. It sends the energy to a suitable antenna. When this radiating element has good electromagnetic characteristics over a wide band of frequency and high dielectric strength, it is possible to generate high power electromagnetic waves in the form of pulses. The frequency band of the wave that is radiated can cover a very broad spectrum of over one decade in frequency. In this case, the technique is of undoubted interest for a wide variety of civil and military applications. Such applications can include, for example, ultra-wideband (UWB) pulse radars to detect buried mines or to rescue buried people, the production of nuclear electromagnetic pulse (NEMP) simulators for electromagnetic compatibility and vulnerability tests on electronic and IT equipment, and UWB communications systems and electromagnetic jamming, the principle of which consists of focusing high-power electromagnetic waves on an identified target to compromise the target’s mission by disrupting or destroying its electronic components. Over the years, the evolution of the R&D program for the development of HPM sources has evidenced the technological difficulties intrinsic to each elementary unit and to each of the physical parameters considered. Depending on the wave form chosen, there is in fact a very wide range of possibilities for the generation of microwave power. The only real question is", "title": "" }, { "docid": "neg:1840264_3", "text": "Parking is costly and limited in almost every major city in the world. Innovative parking systems for meeting near-term parking demand are needed. This paper proposes a novel, secure, and intelligent parking system (SmartParking) based on secured wireless network and sensor communication. From the point of users' view, SmartParking is a secure and intelligent parking service. The parking reservation is safe and privacy preserved. The parking navigation is convenient and efficient. The whole parking process will be a non-stop service. From the point of management's view, SmartParking is an intelligent parking system. The parking process can be modeled as birth-death stochastic process and the prediction of revenues can be made. Based on the prediction, new business promotion can be made, for example, on-sale prices and new parking fees. In SmartParking, new promotions can be published through wireless network. We address hardware/software architecture, implementations, and analytical models and results. 
The evaluation of this proposed system proves its efficiency.", "title": "" }, { "docid": "neg:1840264_4", "text": "STUDY DESIGN\nPragmatic, multicentered randomized controlled trial, with 12-month follow-up.\n\n\nOBJECTIVE\nTo evaluate the effect of adding specific spinal stabilization exercises to conventional physiotherapy for patients with recurrent low back pain (LBP) in the United Kingdom.\n\n\nSUMMARY OF BACKGROUND DATA\nSpinal stabilization exercises are a popular form of physiotherapy management for LBP, and previous small-scale studies on specific LBP subgroups have identified improvement in outcomes as a result.\n\n\nMETHODS\nA total of 97 patients (18-60 years old) with recurrent LBP were recruited. Stratified randomization was undertaken into 2 groups: \"conventional,\" physiotherapy consisting of general active exercise and manual therapy; and conventional physiotherapy plus specific spinal stabilization exercises. Stratifying variables used were laterality of symptoms, duration of symptoms, and Roland Morris Disability Questionnaire score at baseline. Both groups received The Back Book, by Roland et al. Back-specific functional disability (Roland Morris Disability Questionnaire) at 12 months was the primary outcome. Pain, quality of life, and psychologic measures were also collected at 6 and 12 months. Analysis was by intention to treat.\n\n\nRESULTS\nA total of 68 patients (70%) provided 12-month follow-up data. Both groups showed improved physical functioning, reduced pain intensity, and an improvement in the physical component of quality of life. Mean change in physical functioning, measured by the Roland Morris Disability Questionnaire, was -5.1 (95% confidence interval -6.3 to -3.9) for the specific spinal stabilization exercises group and -5.4 (95% confidence interval -6.5 to -4.2) for the conventional physiotherapy group. No statistically significant differences between the 2 groups were shown for any of the outcomes measured, at any time.\n\n\nCONCLUSIONS\nPatients with LBP had improvement with both treatment packages to a similar degree. There was no additional benefit of adding specific spinal stabilization exercises to a conventional physiotherapy package for patients with recurrent LBP.", "title": "" }, { "docid": "neg:1840264_5", "text": "Collaborative writing is on the increase. In order to write well together, authors often need to be aware of who has done what recently. We offer a new tool, DocuViz, that displays the entire revision history of Google Docs, showing more than the one-step-at-a-time view now shown in revision history and tracking changes in Word. We introduce the tool and present cases in which the tool has the potential to be useful: To authors themselves to see recent \"seismic activity,\" indicating where in particular a co-author might want to pay attention, to instructors to see who has contributed what and which changes were made to comments from them, and to researchers interested in the new patterns of collaboration made possible by simultaneous editing capabilities.", "title": "" }, { "docid": "neg:1840264_6", "text": "The success of software development depends on the proper estimation of the effort required to develop the software. Project managers require a reliable approach for software effort estimation. It is especially important during the early stages of the software development life cycle. Accurate software effort estimation is a major concern in software industries. 
Stochastic Gradient Boosting (SGB) is one of the machine learning techniques that helps to obtain improved estimates. SGB is used for improving the accuracy of models built on decision trees. In this paper, the main goal is to estimate the effort required to develop various software projects using the class point approach. Then, optimization of the effort parameters is achieved using the SGB technique to obtain better prediction accuracy. Furthermore, performance comparisons of the models obtained using the SGB technique with the Multi Layer Perceptron and the Radial Basis Function Network are presented in order to highlight the performance achieved by each method.", "title": "" }, { "docid": "neg:1840264_6", "text": "The Acropolis of Athens is one of the most prestigious ancient monuments in the world, attracting many visitors daily, and therefore its structural integrity is of paramount importance. During the last decade an accelerographic array has been installed at the Archaeological Site, in order to monitor the seismic response of the Acropolis Hill and the dynamic behaviour of the monuments (including the Circuit Wall), while several optical fibre sensors have been attached at a middle-vertical section of the Wall. In this study, indicative real time recordings of strain and acceleration on the Wall and the Hill with the use of optical fibre sensors and accelerographs, respectively, are presented and discussed. The records aim to investigate the static and dynamic behaviour – distress of the Wall and the Acropolis Hill, taking also into account the prevailing geological conditions. The optical fibre technology, the location of the sensors, as well as the installation methodology applied is also presented. Emphasis is given to the application of real time instrumental monitoring which can be used as a valuable tool to predict potential structural", "title": "" }, { "docid": "neg:1840264_7", "text": "Recurrent neural networks (RNNs) are typically considered as relatively simple architectures, which come along with complicated learning algorithms. This paper has a different view: We start from the fact that RNNs can model any high dimensional, nonlinear dynamical system. Rather than focusing on learning algorithms, we concentrate on the design of network architectures. Unfolding in time is a well-known example of this modeling philosophy. Here a temporal algorithm is transferred into an architectural framework such that the learning can be performed by an extension of standard error backpropagation. We introduce 12 tricks that not only provide deeper insights into the functioning of RNNs but also improve the identification of the underlying dynamical system from data.", "title": "" }, { "docid": "neg:1840264_8", "text": "This paper proposes a novel tracker which is controlled by sequentially pursuing actions learned by deep reinforcement learning. In contrast to the existing trackers using deep networks, the proposed tracker is designed to achieve light computation as well as satisfactory tracking accuracy in both location and scale. The deep network to control actions is pre-trained using various training sequences and fine-tuned during tracking for online adaptation to target and background changes. The pre-training is done by utilizing deep reinforcement learning as well as supervised learning. The use of reinforcement learning enables even partially labeled data to be successfully utilized for semi-supervised learning. 

Through evaluation on the OTB dataset, the proposed tracker is validated to achieve a competitive performance that is three times faster than state-of-the-art, deep network-based trackers. The fast version of the proposed method, which operates in real-time on GPU, outperforms the state-of-the-art real-time trackers.", "title": "" }, { "docid": "neg:1840264_9", "text": "Problem Description: It should be well known that processors are outstripping memory performance: specifically that memory latencies are not improving as fast as processor cycle time or IPC or memory bandwidth. Thought experiment: imagine that a cache miss takes 10000 cycles to execute. For such a processor instruction level parallelism is useless, because most of the time is spent waiting for memory. Branch prediction is also less effective, since most branches can be determined with data already in registers or in the cache; branch prediction only helps for branches which depend on outstanding cache misses. At the same time, pressures for reduced power consumption mount. Given such trends, some computer architects in industry (although not Intel EPIC) are talking seriously about retreating from out-of-order superscalar processor architecture, and instead building simpler, faster, dumber, 1-wide in-order processors with high degrees of speculation. Sometimes this is proposed in combination with multiprocessing and multithreading: tolerate long memory latencies by switching to other processes or threads. I propose something different: build narrow fast machines but use intelligent logic inside the CPU to increase the number of outstanding cache misses that can be generated from a single program. By MLP I mean simply the number of outstanding cache misses that can be generated (by a single thread, task, or program) and executed in an overlapped manner. It does not matter what sort of execution engine generates the multiple outstanding cache misses. An out-of-order superscalar ILP CPU may generate multiple outstanding cache misses, but 1-wide processors can be just as effective. Change the metrics: total execution time remains the overall goal, but instead of reporting IPC as an approximation to this, we must report MLP. Limit studies should be in terms of total number of non-overlapped cache misses on critical path. Now do the research: Many present-day hot topics in computer architecture help ILP, but do not help MLP. As mentioned above, predicting branch directions for branches that can be determined from data already in the cache or in registers does not help MLP for extremely long latencies. Similarly, prefetching of data cache misses for array processing codes does not help MLP – it just moves it around. Instead, investigate microarchitectures that help MLP: (0) Trivial case – explicit multithreading, like SMT. (1) Slightly less trivial case – implicitly multithread single programs, either by compiler software on an MT machine, or by a hybrid, such as …", "title": "" }, { "docid": "neg:1840264_10", "text": "For the evaluation of grasp quality, different measures have been proposed that are based on wrench spaces. Almost all of them have drawbacks that derive from the non-uniformity of the wrench space, composed of force and torque dimensions. Moreover, many of these approaches are computationally expensive. 

We address the problem of choosing a proper task wrench space to overcome the problems of the non-uniform wrench space and show how to integrate it in a well-known, high precision and extremely fast computable grasp quality measure.", "title": "" }, { "docid": "neg:1840264_12", "text": "We introduce an approach to biasing language models towards known contexts without requiring separate language models or explicit contextually-dependent conditioning contexts. We do so by presenting an alternative ASR objective, where we predict the acoustics and words given the contextual cue, such as the geographic location of the speaker. A simple factoring of the model results in an additional biasing term, which effectively indicates how correlated a hypothesis is with the contextual cue (e.g., given the hypothesized transcript, how likely is the user’s known location). We demonstrate that this factorization allows us to train relatively small contextual models which are effective in speech recognition. An experimental analysis shows a perplexity reduction of up to 35% and a relative reduction in word error rate of 1.6% on a targeted voice search dataset when using the user’s coarse location as a contextual cue.", "title": "" }, { "docid": "neg:1840264_13", "text": "Real time control of five-axis machine tools requires smooth generation of feed, acceleration and jerk in CNC systems without violating the physical limits of the drives. This paper presents a feed scheduling algorithm for CNC systems to minimize the machining time for five-axis contour machining of sculptured surfaces. The variation of the feed along the five-axis tool-path is expressed in a cubic B-spline form. The velocity, acceleration and jerk limits of the five axes are considered in finding the most optimal feed along the toolpath in order to ensure smooth and linear operation of the servo drives with minimal tracking error. The time optimal feed motion is obtained by iteratively modulating the feed control points of the B-spline to maximize the feed along the tool-path without violating the programmed feed and the drives’ physical limits. Long tool-paths are handled efficiently by applying a moving window technique. The improvement in the productivity and linear operation of the five drives is demonstrated with five-axis simulations and experiments on a CNC machine tool. r 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "neg:1840264_14", "text": "Compressed sensing (CS) utilizes the sparsity of magnetic resonance (MR) images to enable accurate reconstruction from undersampled k-space data. Recent CS methods have employed analytical sparsifying transforms such as wavelets, curvelets, and finite differences. In this paper, we propose a novel framework for adaptively learning the sparsifying transform (dictionary), and reconstructing the image simultaneously from highly undersampled k-space data. The sparsity in this framework is enforced on overlapping image patches emphasizing local structure. Moreover, the dictionary is adapted to the particular image instance thereby favoring better sparsities and consequently much higher undersampling rates. The proposed alternating reconstruction algorithm learns the sparsifying dictionary, and uses it to remove aliasing and noise in one step, and subsequently restores and fills-in the k-space data in the other step. Numerical experiments are conducted on MR images and on real MR data of several anatomies with a variety of sampling schemes. 
The results demonstrate dramatic improvements on the order of 4-18 dB in reconstruction error and doubling of the acceptable undersampling factor using the proposed adaptive dictionary as compared to previous CS methods. These improvements persist over a wide range of practical data signal-to-noise ratios, without any parameter tuning.", "title": "" }, { "docid": "neg:1840264_15", "text": "In traffic environment, conventional FMCW radar with triangular transmit waveform may bring out many false targets in multi-target situations and result in a high false alarm rate. An improved FMCW waveform and multi-target detection algorithm for vehicular applications is presented. The designed waveform in each small cycle is composed of two-segment: LFM section and constant frequency section. They have the same duration, yet in two adjacent small cycles the two LFM slopes are opposite sign and different size. Then the two adjacent LFM bandwidths are unequal. Within a determinate frequency range, the constant frequencies are modulated by a unique PN code sequence for different automotive radar in a big period. Corresponding to the improved waveform, which combines the advantages of both FSK and FMCW formats, a judgment algorithm is used in the continuous small cycle to further eliminate the false targets. The combination of unambiguous ranges and relative velocities can confirm and cancel most false targets in two adjacent small cycles.", "title": "" }, { "docid": "neg:1840264_16", "text": "BACKGROUND AND PURPOSE\nThe primary objective of this study is to establish the validity and reliability of a perceived medication knowledge and confidence survey instrument (Okere-Renier Survey).\n\n\nMETHODS\nTwo-stage psychometric analyses were conducted to assess reliability (Cronbach's alpha > .70) of the associated knowledge scale. To evaluate the construct validity, exploratory and confirmatory factor analyses were performed.\n\n\nRESULTS\nExploratory factor analysis (EFA) revealed three subscale measures and confirmatory factor analysis (CFA) indicated an acceptable fit to the data (goodness-of-fit index [GFI = 0.962], adjusted goodness-of-fit index [AGFI = 0.919], root mean square residual [RMR = 0.065], root mean square error of approximation [RMSEA] = 0.073). A high internal consistency with Cronbach's a of .833 and .744 were observed in study Stages 1 and 2, respectively.\n\n\nCONCLUSIONS\nThe Okere-Renier Survey is a reliable instrument for predicting patient-perceived level of medication knowledge and confidence.", "title": "" }, { "docid": "neg:1840264_17", "text": "Conversational systems have come a long way since their inception in the 1960s. After decades of research and development, we have seen progress from Eliza and Parry in the 1960s and 1970s, to task-completion systems as in the Defense Advanced Research Projects Agency (DARPA) communicator program in the 2000s, to intelligent personal assistants such as Siri, in the 2010s, to today’s social chatbots like XiaoIce. Social chatbots’ appeal lies not only in their ability to respond to users’ diverse requests, but also in being able to establish an emotional connection with users. The latter is done by satisfying users’ need for communication, affection, as well as social belonging. To further the advancement and adoption of social chatbots, their design must focus on user engagement and take both intellectual quotient (IQ) and emotional quotient (EQ) into account. 
Users should want to engage with a social chatbot; as such, we define the success metric for social chatbots as conversation-turns per session (CPS). Using XiaoIce as an illustrative example, we discuss key technologies in building social chatbots from core chat to visual awareness to skills. We also show how XiaoIce can dynamically recognize emotion and engage the user throughout long conversations with appropriate interpersonal responses. As we become the first generation of humans ever living with artificial intelligence (AI), we have a responsibility to design social chatbots to be both useful and empathetic, so they will become ubiquitous and help society as a whole.", "title": "" }, { "docid": "neg:1840264_18", "text": "Visual object tracking is a challenging computer vision problem with numerous real-world applications. This paper investigates the impact of convolutional features for the visual tracking problem. We propose to use activations from the convolutional layer of a CNN in discriminative correlation filter based tracking frameworks. These activations have several advantages compared to the standard deep features (fully connected layers). Firstly, they mitigate the need for task-specific fine-tuning. Secondly, they contain structural information crucial for the tracking problem. Lastly, these activations have low dimensionality. We perform comprehensive experiments on three benchmark datasets: OTB, ALOV300++ and the recently introduced VOT2015. Surprisingly, different to image classification, our results suggest that activations from the first layer provide superior tracking performance compared to the deeper layers. Our results further show that the convolutional features provide improved results compared to standard hand-crafted features. Finally, results comparable to state-of-the-art trackers are obtained on all three benchmark datasets.", "title": "" }, { "docid": "neg:1840264_19", "text": "As the heart of an aircraft, the aircraft engine's condition directly affects the safety, reliability, and operation of the aircraft. Prognostics and health management for aircraft engines can provide advance warning of failure and estimate the remaining useful life. However, aircraft engine systems are complex, with both intangible and uncertain factors; it is difficult to model the complex degradation process, and no single prognostic approach can effectively solve this critical and complicated problem. Thus, fusion prognostics is conducted to obtain more accurate prognostics results. In this paper, a prognostics and health management-oriented integrated fusion prognostic framework is developed to improve the system state forecasting accuracy. This framework strategically fuses the monitoring sensor data and integrates the strengths of the data-driven prognostics approach and the experience-based approach while reducing their respective limitations. As an application example, this developed fusion prognostics framework is employed to predict the remaining useful life of an aircraft gas turbine engine based on sensor data. The results demonstrate that the proposed fusion prognostics framework is an effective prognostics tool, which can provide a more accurate and robust remaining useful life estimation than any single prognostics method.", "title": "" } ]
1840265
A Framework for Investigating the Impact of Information Systems Capability on Strategic Information Systems Planning Outcomes
[ { "docid": "pos:1840265_0", "text": "1 This article was reviewed and accepted by all the senior editors, including the editor-in-chief. Articles published in future issues will be accepted by just a single senior editor, based on reviews by members of the Editorial Board. 2 Sincere thanks go to Anna Dekker and Denyse O’Leary for their assistance with this research. Funding was generously provided by the Advanced Practices Council of the Society for Information Management and by the Social Sciences and Humanities Research Council of Canada. An earlier version of this manuscript was presented at the Academy of Management Conference in Toronto, Canada, in August 2000. 3 In this article, the terms information systems (IS) and information technology (IT) are used interchangeably. 4 Regardless of whether IS services are provided internally (in a centralized, decentralized, or federal manner) or are outsourced, we assume the boundaries of the IS function can be identified. Thus, the fit between the unit(s) providing IS services and the rest of the organization can be examined. and books have been written on the subject, firms continue to demonstrate limited alignment.", "title": "" } ]
[ { "docid": "neg:1840265_0", "text": "We investigate regrets associated with users' posts on a popular social networking site. Our findings are based on a series of interviews, user diaries, and online surveys involving 569 American Facebook users. Their regrets revolved around sensitive topics, content with strong sentiment, lies, and secrets. Our research reveals several possible causes of why users make posts that they later regret: (1) they want to be perceived in favorable ways, (2) they do not think about their reason for posting or the consequences of their posts, (3) they misjudge the culture and norms within their social circles, (4) they are in a \"hot\" state of high emotion when posting, or under the influence of drugs or alcohol, (5) their postings are seen by an unintended audience, (6) they do not foresee how their posts could be perceived by people within their intended audience, and (7) they misunderstand or misuse the Facebook platform. Some reported incidents had serious repercussions, such as breaking up relationships or job losses. We discuss methodological considerations in studying negative experiences associated with social networking posts, as well as ways of helping users of social networking sites avoid such regrets.", "title": "" }, { "docid": "neg:1840265_1", "text": "We propose a vision-based method that localizes a ground vehicle using publicly available satellite imagery as the only prior knowledge of the environment. Our approach takes as input a sequence of ground-level images acquired by the vehicle as it navigates, and outputs an estimate of the vehicle's pose relative to a georeferenced satellite image. We overcome the significant viewpoint and appearance variations between the images through a neural multi-view model that learns location-discriminative embeddings in which ground-level images are matched with their corresponding satellite view of the scene. We use this learned function as an observation model in a filtering framework to maintain a distribution over the vehicle's pose. We evaluate our method on different benchmark datasets and demonstrate its ability localize ground-level images in environments novel relative to training, despite the challenges of significant viewpoint and appearance variations.", "title": "" }, { "docid": "neg:1840265_2", "text": "Maximum power point tracking (MPPT) is a very important necessity in a system of energy conversion from a renewable energy source. Many research papers have been produced with various schemes over past decades for the MPPT in photovoltaic (PV) system. This research paper inspires its motivation from the fact that the keen study of these existing techniques reveals that there is still quite a need for an absolutely generic and yet very simple MPPT controller which should have all the following traits: total independence from system's parameters, ability to reach the global maxima in minimal possible steps, the correct sense of tracking direction despite the abrupt atmospheric or parametrical changes, and finally having a very cost-effective and energy efficient hardware with the complexity no more than that of a minimal MPPT algorithm like Perturb and Observe (P&O). The MPPT controller presented in this paper is a successful attempt to fulfil all these requirements. It extends the MPPT techniques found in the recent research papers with some innovations in the control algorithm and a simplistic hardware. 
The simulation results confirm that the proposed MPPT controller is very fast, very efficient, very simple and low cost as compared to the contemporary ones.", "title": "" }, { "docid": "neg:1840265_3", "text": "OBJECTIVE\nFirearm violence is a significant public health problem in the United States, and alcohol is frequently involved. This article reviews existing research on the relationships between alcohol misuse; ownership, access to, and use of firearms; and the commission of firearm violence, and discusses the policy implications of these findings.\n\n\nMETHOD\nNarrative review augmented by new tabulations of publicly-available data.\n\n\nRESULTS\nAcute and chronic alcohol misuse is positively associated with firearm ownership, risk behaviors involving firearms, and risk for perpetrating both interpersonal and self-directed firearm violence. In an average month, an estimated 8.9 to 11.7 million firearm owners binge drink. For men, deaths from alcohol-related firearm violence equal those from alcohol-related motor vehicle crashes. Enforceable policies restricting access to firearms for persons who misuse alcohol are uncommon. Policies that restrict access on the basis of other risk factors have been shown to reduce risk for subsequent violence.\n\n\nCONCLUSION\nThe evidence suggests that restricting access to firearms for persons with a documented history of alcohol misuse would be an effective violence prevention measure. Restrictions should rely on unambiguous definitions of alcohol misuse to facilitate enforcement and should be rigorously evaluated.", "title": "" }, { "docid": "neg:1840265_4", "text": "The N×N queens puzzle is the problem of placing N chess queens on an N×N chess board so that no two queens attack each other. It is a classical problem in the artificial intelligence area. A solution requires that no two queens share the same row, column or diagonal. For computer scientists, the problem has practical applications and has become an important issue. In this paper we propose a new method for solving the n-Queens problem using a combination of depth-first search (DFS) and breadth-first search (BFS) techniques. The proposed algorithm works by placing queens on the chess board directly. This is made possible by a regular pattern based on the movement rule of the queen. The results show that the performance and run time of this approach are better than those of backtracking and hill climbing methods.", "title": "" }, { "docid": "neg:1840265_5", "text": "We propose a novel type inference technique for Python programs. Type inference is difficult for Python programs due to their heavy dependence on external APIs and the dynamic language features. We observe that Python source code often contains a lot of type hints such as attribute accesses and variable names. However, such type hints are not reliable. We hence propose to use probabilistic inference to allow the beliefs of individual type hints to be propagated, aggregated, and eventually converge on probabilities of variable types. Our results show that our technique substantially outperforms a state-of-the-art Python type inference engine based on abstract interpretation.", "title": "" }, { "docid": "neg:1840265_6", "text": "This paper presents a motion planning method for mobile manipulators for which the base locomotion is less precise than the manipulator control. 

In such a case, it is advisable to move the base to discrete poses from which the manipulator can be deployed to cover a prescribed trajectory. The proposed method finds base poses that not only cover the trajectory but also meet constraints on a measure of manipulability. We propose a variant of the conventional manipulability measure that is suited to the trajectory control of the end effector of the mobile manipulator along an arbitrary curve in three space. Results with implementation on a mobile manipulator are discussed.", "title": "" }, { "docid": "neg:1840265_7", "text": "A novel approach to view-based eye gaze tracking for human computer interface (HCI) is presented. The proposed method combines different techniques to address the problems of head motion, illumination and usability in the framework of low cost applications. Feature detection and tracking algorithms have been designed to obtain an automatic setup and strengthen the robustness to light conditions. An extensive analysis of neural solutions has been performed to deal with the non-linearity associated with gaze mapping under free-head conditions. No specific hardware, such as infrared illumination or high-resolution cameras, is needed, rather a simple commercial webcam working in visible light spectrum suffices. The system is able to classify the gaze direction of the user over a 15-zone graphical interface, with a success rate of 95% and a global accuracy of around 2 degrees , comparable with the vast majority of existing remote gaze trackers.", "title": "" }, { "docid": "neg:1840265_8", "text": "Structural gene rearrangements resulting in gene fusions are frequent events in solid tumours. The identification of certain activating fusions can aid in the diagnosis and effective treatment of patients with tumours harbouring these alterations. Advances in the techniques used to identify fusions have enabled physicians to detect these alterations in the clinic. Targeted therapies directed at constitutively activated oncogenic tyrosine kinases have proven remarkably effective against cancers with fusions involving ALK, ROS1, or PDGFB, and the efficacy of this approach continues to be explored in malignancies with RET, NTRK1/2/3, FGFR1/2/3, and BRAF/CRAF fusions. Nevertheless, prolonged treatment with such tyrosine-kinase inhibitors (TKIs) leads to the development of acquired resistance to therapy. This resistance can be mediated by mutations that alter drug binding, or by the activation of bypass pathways. Second-generation and third-generation TKIs have been developed to overcome resistance, and have variable levels of activity against tumours harbouring individual mutations that confer resistance to first-generation TKIs. The rational sequential administration of different inhibitors is emerging as a new treatment paradigm for patients with tumours that retain continued dependency on the downstream kinase of interest.", "title": "" }, { "docid": "neg:1840265_9", "text": "The latest version of the ISO 26262 standard from 2016 represents the state of the art for a safety-guided development of safety-critical electric/electronic vehicle systems. These vehicle systems include advanced driver assistance systems and vehicle guidance systems. The development process proposed in the ISO 26262 standard is based upon multiple V-models, and defines activities and work products for each process step. 
In many of these process steps, scenario based approaches can be applied to achieve the defined work products for the development of automated driving functions. To accomplish the work products of different process steps, scenarios have to focus on various aspects like a human understandable notation or a description via state variables. This leads to contradictory requirements regarding the level of detail and way of notation for the representation of scenarios. In this paper, the authors discuss requirements for the representation of scenarios in different process steps defined by the ISO 26262 standard, propose a consistent terminology based on prior publications for the identified levels of abstraction, and demonstrate how scenarios can be systematically evolved along the phases of the development process outlined in the ISO 26262 standard.", "title": "" }, { "docid": "neg:1840265_10", "text": "Classroom Salon is an on-line social collaboration tool that allows instructors to create, manage, and analyze social net- works (called Salons) to enhance student learning. Students in a Salon can cooperatively create, comment on, and modify documents. Classroom Salon provides tools that allow the instructor to monitor the social networks and gauge both student participation and individual effectiveness. This pa- per describes Classroom Salon, provides some use cases that we have developed for introductory computer science classes and presents some preliminary observations of using this tool in several computer science courses at Carnegie Mellon University.", "title": "" }, { "docid": "neg:1840265_11", "text": "Maxout network is a powerful alternate to traditional sigmoid neural networks and is showing success in speech recognition. However, maxout network is prone to overfitting thus regularization methods such as dropout are often needed. In this paper, a stochastic pooling regularization method for max-out networks is proposed to control overfitting. In stochastic pooling, a distribution is produced for each pooling region by the softmax normalization of the piece values. The active piece is selected based on the distribution during training, and an effective probability weighting is conducted during testing. We apply the stochastic pooling maxout (SPM) networks within the DNN-HMM framework and evaluate its effectiveness under a low-resource speech recognition condition. On benchmark test sets, the SPM network yields 4.7-8.6% relative improvements over the baseline maxout network. Further evaluations show the superiority of stochastic pooling over dropout for low-resource speech recognition.", "title": "" }, { "docid": "neg:1840265_12", "text": "Depth estimation in computer vision and robotics is most commonly done via stereo vision (stereopsis), in which images from two cameras are used to triangulate and estimate distances. However, there are also numerous monocular visual cues— such as texture variations and gradients, defocus, color/haze, etc.—that have heretofore been little exploited in such systems. Some of these cues apply even in regions without texture, where stereo would work poorly. In this paper, we apply a Markov Random Field (MRF) learning algorithm to capture some of these monocular cues, and incorporate them into a stereo system. We show that by adding monocular cues to stereo (triangulation) ones, we obtain significantly more accurate depth estimates than is possible using either monocular or stereo cues alone. 
This holds true for a large variety of environments, including both indoor environments and unstructured outdoor environments containing trees/forests, buildings, etc. Our approach is general, and applies to incorporating monocular cues together with any off-the-shelf stereo system.", "title": "" }, { "docid": "neg:1840265_13", "text": "Three-dimensional virtual worlds are an emerging medium currently being used in both traditional classrooms and for distance education. Three-dimensional (3D) virtual worlds are a combination of desk-top interactive Virtual Reality within a chat environment. This analysis provides an overview of Active Worlds Educational Universe and Adobe Atmosphere and the pedagogical affordances and constraints of the inscription tools, discourse tools, experiential tools, and resource tools of each application. The purpose of this review is to discuss the implications of using each application for educational initiatives by exploring how the various design features of each may support and enhance the design of interactive learning environments.", "title": "" }, { "docid": "neg:1840265_14", "text": "An object recognition engine needs to extract discriminative features from data representing an object and accurately classify the object to be of practical use in robotics. Furthermore, the classification of the object must be rapidly performed in the presence of a voluminous stream of data. These conditions call for a distributed and scalable architecture that can utilize a cloud computing infrastructure for performing object recognition. This paper introduces a Cloud-based Object Recognition Engine (CORE) to address these needs. CORE is able to train on large-scale datasets, perform classification of 3D point cloud data, and efficiently transfer data in a robotic network.", "title": "" }, { "docid": "neg:1840265_15", "text": "Proximity effects caused by uneven distribution of current among the insulated wire strands of stator multi-strand windings can contribute significant bundle-level proximity losses in permanent magnet (PM) machines operating at high speeds. Three-dimensional finite element analysis is used to investigate the effects of transposition of the insulated strands in stator winding bundles on the copper losses in high-speed machines. The investigation confirms that the bundle proximity losses must be considered in the design of stator windings for high-speed machines, and the amplitude of these losses decreases monotonically as the level of transposition is increased from untransposed to fully-transposed (360°) wire bundles. Analytical models are introduced to estimate the currents in strands in a slot for a high-speed machine.", "title": "" }, { "docid": "neg:1840265_16", "text": "In this paper, the relationship between the numbers of stator slots, winding polarities, and rotor poles for variable reluctance resolvers is derived and verified, which makes it possible for the same stator and winding to be shared by the rotors with different poles. Based on the established relationship, a simple factor is introduced to evaluate the level of voltage harmonics as an index for choosing appropriate stator slot and rotor pole combinations. With due account for easy manufacturing, alternate windings are proposed without apparent deterioration in voltage harmonics of a resolver. 
In particular, alternate windings with nonoverlapping uniform coils are proved to be possible for output windings in some stator slot and rotor pole combinations, which further simplify the manufacture process. Finite element method is adopted to verify the proposed design, together with experiments on the prototypes.", "title": "" }, { "docid": "neg:1840265_17", "text": "A reconfigurable mechanism for varying the footprint of a four-wheeled omnidirectional vehicle is developed and applied to wheelchairs. The variable footprint mechanism consists of a pair of beams intersecting at a pivotal point in the middle. Two pairs of ball wheels at the diagonal positions of the vehicle chassis are mounted, respectively, on the two beams intersecting in the middle. The angle between the two beams varies actively so that the ratio of the wheel base to the tread may change. Four independent servo motors driving the four ball wheels allow the vehicle to move in an arbitrary direction from an arbitrary configuration as well as to change the angle between the two beams and thereby change the footprint. The objective of controlling the beam angle is threefold. One is to augment static stability by varying the footprint so that the mass centroid of the vehicle may be kept within the footprint at all times. The second is to reduce the width of the vehicle when going through a narrow doorway. The third is to apparently change the gear ratio relating the vehicle speed to individual actuator speeds. First the concept of the varying footprint mechanism is described, and its kinematic behavior is analyzed, followed by the three control algorithms for varying the footprint. A prototype vehicle for an application as a wheelchair platform is designed, built, and tested.", "title": "" }, { "docid": "neg:1840265_18", "text": "Single-unit recording studies in the macaque have carefully documented the modulatory effects of attention on the response properties of visual cortical neurons. Attention produces qualitatively different effects on firing rate, depending on whether a stimulus appears alone or accompanied by distracters. Studies of contrast gain control in anesthetized mammals have found parallel patterns of results when the luminance contrast of a stimulus increases. This finding suggests that attention has co-opted the circuits that mediate contrast gain control and that it operates by increasing the effective contrast of the attended stimulus. Consistent with this idea, microstimulation of the frontal eye fields, one of several areas that control the allocation of spatial attention, induces spatially local increases in sensitivity both at the behavioral level and among neurons in area V4, where endogenously generated attention increases contrast sensitivity. Studies in the slice have begun to explain how modulatory signals might cause such increases in sensitivity.", "title": "" }, { "docid": "neg:1840265_19", "text": "We introduce a new count-based optimistic exploration algorithm for reinforcement learning (RL) that is feasible in environments with highdimensional state-action spaces. The success of RL algorithms in these domains depends crucially on generalisation from limited training experience. Function approximation techniques enable RL agents to generalise in order to estimate the value of unvisited states, but at present few methods enable generalisation regarding uncertainty. 
This has prevented the combination of scalable RL algorithms with efficient exploration strategies that drive the agent to reduce its uncertainty. We present a new method for computing a generalised state visit-count, which allows the agent to estimate the uncertainty associated with any state. Our φ-pseudocount achieves generalisation by exploiting the same feature representation of the state space that is used for value function approximation. States that have less frequently observed features are deemed more uncertain. The φ-ExplorationBonus algorithm rewards the agent for exploring in feature space rather than in the untransformed state space. The method is simpler and less computationally expensive than some previous proposals, and achieves near state-of-the-art results on highdimensional RL benchmarks.", "title": "" } ]
1840266
Deep Learning For Video Saliency Detection
[ { "docid": "pos:1840266_0", "text": "Modeling visual attention-particularly stimulus-driven, saliency-based attention-has been a very active research area over the past 25 years. Many different models of attention are now available which, aside from lending theoretical contributions to other fields, have demonstrated successful applications in computer vision, mobile robotics, and cognitive systems. Here we review, from a computational perspective, the basic concepts of attention implemented in these models. We present a taxonomy of nearly 65 models, which provides a critical comparison of approaches, their capabilities, and shortcomings. In particular, 13 criteria derived from behavioral and computational studies are formulated for qualitative comparison of attention models. Furthermore, we address several challenging issues with models, including biological plausibility of the computations, correlation with eye movement datasets, bottom-up and top-down dissociation, and constructing meaningful performance measures. Finally, we highlight current research trends in attention modeling and provide insights for future.", "title": "" }, { "docid": "pos:1840266_1", "text": "In this paper, we study the challenging problem of tracking the trajectory of a moving object in a video with possibly very complex background. In contrast to most existing trackers which only learn the appearance of the tracked object online, we take a different approach, inspired by recent advances in deep learning architectures, by putting more emphasis on the (unsupervised) feature learning problem. Specifically, by using auxiliary natural images, we train a stacked denoising autoencoder offline to learn generic image features that are more robust against variations. This is then followed by knowledge transfer from offline training to the online tracking process. Online tracking involves a classification neural network which is constructed from the encoder part of the trained autoencoder as a feature extractor and an additional classification layer. Both the feature extractor and the classifier can be further tuned to adapt to appearance changes of the moving object. Comparison with the state-of-the-art trackers on some challenging benchmark video sequences shows that our deep learning tracker is more accurate while maintaining low computational cost with real-time performance when our MATLAB implementation of the tracker is used with a modest graphics processing unit (GPU).", "title": "" } ]
[ { "docid": "neg:1840266_0", "text": "Recognizing sarcasm often requires a deep understanding of multiple sources of information, including the utterance, the conversational context, and real world facts. Most of the current sarcasm detection systems consider only the utterance in isolation. There are some limited attempts toward taking into account the conversational context. In this paper, we propose an interpretable end-to-end model that combines information from both the utterance and the conversational context to detect sarcasm, and demonstrate its effectiveness through empirical evaluations. We also study the behavior of the proposed model to provide explanations for the model’s decisions. Importantly, our model is capable of determining the impact of utterance and conversational context on the model’s decisions. Finally, we provide an ablation study to illustrate the impact of different components of the proposed model.", "title": "" }, { "docid": "neg:1840266_1", "text": "We introduce a stochastic discrete automaton model to simulate freeway traffic. Monte-Carlo simulations of the model show a transition from laminar traffic flow to start-stopwaves with increasing vehicle density, as is observed in real freeway traffic. For special cases analytical results can be obtained.", "title": "" }, { "docid": "neg:1840266_2", "text": "Imagining urban space as being comfortable or fearful is studied as an effect of people’s connections to their residential area communication infrastructure. Geographic Information System (GIS) modeling and spatial-statistical methods are used to process 215 mental maps obtained from respondents to a multilingual survey of seven ethnically marked residential communities of Los Angeles. Spatial-statistical analyses reveal that fear perceptions of Los Angeles urban space are not associated with commonly expected causes of fear, such as high crime victimization likelihood. The main source of discomfort seems to be presence of non-White and non-Asian populations. Respondents more strongly connected to television and interpersonal communication channels are relatively more fearful of these populations than those less strongly connected. Theoretical, methodological, and community-building policy implications are discussed.", "title": "" }, { "docid": "neg:1840266_3", "text": "This paper presents a novel system to estimate body pose configuration from a single depth map. It combines both pose detection and pose refinement. The input depth map is matched with a set of pre-captured motion exemplars to generate a body configuration estimation, as well as semantic labeling of the input point cloud. The initial estimation is then refined by directly fitting the body configuration with the observation (e.g., the input depth). In addition to the new system architecture, our other contributions include modifying a point cloud smoothing technique to deal with very noisy input depth maps, a point cloud alignment and pose search algorithm that is view-independent and efficient. Experiments on a public dataset show that our approach achieves significantly higher accuracy than previous state-of-art methods.", "title": "" }, { "docid": "neg:1840266_4", "text": "Air quality monitoring is extremely important as air pollution has a direct impact on human health. Low-cost gas sensors are used to effectively perceive the environment by mounting them on top of mobile vehicles, for example, using a public transport network. 
Thus, these sensors are part of a mobile network and perform measurements from time to time in each other's vicinity. In this paper, we study three calibration algorithms that exploit co-located sensor measurements to enhance sensor calibration and consequently the quality of the pollution measurements on-the-fly. Forward calibration, based on a traditional approach widely used in the literature, is used as a performance benchmark for two novel algorithms: backward and instant calibration. We validate all three algorithms with real ozone pollution measurements carried out in an urban setting by comparing gas sensor output to high-quality measurements from analytical instruments. We find that both backward and instant calibration reduce the average measurement error by a factor of two compared to forward calibration. Furthermore, we unveil the difficulties that arise if sensor calibration is not based on reliable reference measurements but on the readings of low-cost gas sensors, which is inevitable in a mobile scenario with only a few reliable sensors. We propose a solution and evaluate its effect on the measurement accuracy in experiments and simulation.", "title": "" }, { "docid": "neg:1840266_5", "text": "Improving Neural Networks with Dropout Nitish Srivastava Master of Science Graduate Department of Computer Science University of Toronto 2013 Deep neural nets with a huge number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from a neural network during training. This prevents the units from co-adapting too much. Dropping units creates thinned networks during training. The number of possible thinned networks is exponential in the number of units in the network. At test time all possible thinned networks are combined using an approximate model averaging procedure. Dropout training followed by this approximate model combination significantly reduces overfitting and gives major improvements over other regularization methods. In this work, we describe models that improve the performance of neural networks using dropout, often obtaining state-of-the-art results on benchmark datasets.", "title": "" }, { "docid": "neg:1840266_6", "text": "This paper overviews various switched flux permanent magnet machines and their design and performance features, with particular emphasis on machine topologies with reduced magnet usage or without using magnets, as well as with variable flux capability. In addition, this paper also describes their relationships with doubly-salient permanent magnet machines and flux reversal permanent magnet machines.", "title": "" }, { "docid": "neg:1840266_7", "text": "Fundamental and advanced developments in neuro-fuzzy synergisms for modeling and control are reviewed. The essential part of neuro-fuzzy synergisms comes from a common framework called adaptive networks, which unifies both neural networks and fuzzy models. The fuzzy model under the framework of adaptive networks is called the Adaptive-Network-based Fuzzy Inference System (ANFIS), which possesses certain advantages over neural networks. We introduce the design methods for ANFIS in both modeling and control applications. 

Current problems and future directions for neuro-fuzzy approaches are also addressed.", "title": "" }, { "docid": "neg:1840266_8", "text": "Clinical data describing the phenotypes and treatment of patients represents an underused data source that has much greater research potential than is currently realized. Mining of electronic health records (EHRs) has the potential for establishing new patient-stratification principles and for revealing unknown disease correlations. Integrating EHR data with genetic data will also give a finer understanding of genotype–phenotype relationships. However, a broad range of ethical, legal and technical reasons currently hinder the systematic deposition of these data in EHRs and their mining. Here, we consider the potential for furthering medical research and clinical care using EHR data and the challenges that must be overcome before this is a reality.", "title": "" }, { "docid": "neg:1840266_9", "text": "Distributed Denial-of-Service (DDoS) attacks are increasing in frequency and volume on the Internet, and there is evidence that cyber-criminals are turning to Internet-of-Things (IoT) devices such as cameras and vending machines as easy launchpads for large-scale attacks. This paper quantifies the capability of consumer IoT devices to participate in reflective DDoS attacks. We first show that household devices can be exposed to Internet reflection even if they are secured behind home gateways. We then evaluate eight household devices available on the market today, including lightbulbs, webcams, and printers, and experimentally profile their reflective capability, amplification factor, duration, and intensity rate for TCP, SNMP, and SSDP based attacks. Lastly, we demonstrate reflection attacks in a real-world setting involving three IoT-equipped smart-homes, emphasising the imminent need to address this problem before it becomes widespread.", "title": "" }, { "docid": "neg:1840266_10", "text": "Most of the samples discovered are variations of known malicious programs and thus have similar structures, however, there is no method of malware classification that is completely effective. To address this issue, the approach proposed in this paper represents a malware in terms of a vector, in which each feature consists of the amount of APIs called from a Dynamic Link Library (DLL). To determine if this approach is useful to classify malware variants into the correct families, we employ Euclidean Distance and a Multilayer Perceptron with several learning algorithms. The experimental results are analyzed to determine which method works best with the approach. The experiments were conducted with a database that contains real samples of worms and trojans and show that is possible to classify malware variants using the number of functions imported per library. However, the accuracy varies depending on the method used for the classification.", "title": "" }, { "docid": "neg:1840266_11", "text": "This paper is concerned with the problem of domain adaptation with multiple sources from a causal point of view. In particular, we use causal models to represent the relationship between the features X and class label Y , and consider possible situations where different modules of the causal model change with the domain. In each situation, we investigate what knowledge is appropriate to transfer and find the optimal target-domain hypothesis. This gives an intuitive interpretation of the assumptions underlying certain previous methods and motivates new ones. 
We finally focus on the case where Y is the cause for X with changing PY and PX|Y, that is, PY and PX|Y change independently across domains. Under appropriate assumptions, the availability of multiple source domains allows a natural way to reconstruct the conditional distribution on the target domain; we propose to model PX|Y (the process to generate effect X from cause Y) on the target domain as a linear mixture of those on source domains, and estimate all involved parameters by matching the target-domain feature distribution. Experimental results on both synthetic and real-world data verify our theoretical results. Traditional machine learning relies on the assumption that both training and test data are from the same distribution. In practice, however, training and test data are probably sampled under different conditions, thus violating this assumption, and the problem of domain adaptation (DA) arises. Consider remote sensing image classification as an example. Suppose we already have several data sets on which the class labels are known; they are called source domains here. For a new data set, or a target domain, it is usually difficult to find the ground truth reference labels, and we aim to determine the labels by making use of the information from the source domains. Note that those domains are usually obtained in different areas and time periods, and that the corresponding data distribution varies due to the change in illumination conditions, physical factors related to ground (e.g., different soil moisture or composition), vegetation, and atmospheric conditions. Other well-known instances of this situation include sentiment data analysis (Blitzer, Dredze, and Pereira 2007) and flow cytometry data analysis (Blanchard, Lee, and Scott 2011). DA approaches have many applications in various areas including natural language processing, computer vision, and biology. For surveys on DA, see, e.g., (Jiang 2008; Pan and Yang 2010; Candela et al. 2009). In this paper, we consider the situation with n source domains on which both the features X and label Y are given, i.e., we are given (x^(i), y^(i)) = {(x_k^(i), y_k^(i))}_{k=1}^{m_i}, where i = 1, ..., n, and m_i is the sample size of the ith source domain. Our goal is to find the classifier for the target domain, on which only the features x = (x_k)_{k=1}^{m} are available. Here we are concerned with a difficult scenario where no labeled point is available in the target domain, known as unsupervised domain adaptation. Since PXY changes across domains, we have to find what knowledge in the source domains should be transferred to the target one. Previous work in domain adaptation has usually assumed that PX changes but PY|X remains the same, i.e., the covariate shift situation; see, e.g., (Shimodaira 2000; Huang et al. 2007; Sugiyama et al. 2008; Ben-David, Shalev-Shwartz, and Urner 2012). It is also known as sample selection bias (particularly on the features X) in (Zadrozny 2004). In practice it is very often that both PX and PY|X change simultaneously across domains. For instance, both of them are likely to change over time and location for a satellite image classification system. If the data distribution changes arbitrarily across domains, clearly knowledge from the sources may not help in predicting Y on the target domain (Rosenstein et al. 2005). One has to find what type of information should be transferred from sources to the target. 
One possibility is to assume the change in both PX and PY |X is due to the change in PY , while PX|Y remains the same, as known as prior probability shift (Storkey 2009; Plessis and Sugiyama 2012) or target shift (Zhang et al. 2013). The latter further models the change in PX|Y caused by a location-scale (LS) transformation of the features for each class. The constraint of the LS transformation renders PX|Y on the target domain, denoted by P t X|Y , identifiable; however, it might be too restrictive. Fortunately, the availability of multiple source domains provides more hints as to find P t X|Y , as well as P t Y |X . Several algorithms have been proposed to combine knowledge from multiple source domains. For instance, (Mansour, Mohri, and Rostamizadeh 2008) proposed to form the target hypothesis by combining source hypotheses with a distribution weighted rule. (Gao et al. 2008), (Duan et al. 2009), and (Chattopadhyay et al. 2011) combine the predictions made by the source hypotheses, with the weights determined in different ways. An intuitive interpretation of the assumptions underlying those algorithms would facilitate choosing or developing DA methods for the problem at hand. To the best of our knowledge, however, it is still missing in the literature. One of our contributions in this paper is to provide such an interpretation. This paper studies the multi-source DA problem from a causal point of view where we consider the underlying data generating process behind the observed domains. We are particularly interested in what types of information stay the same, what types of information change, and how they change across domains. This enables us to construct the optimal hypothesis for the target domain in various situations. To this end, we use causal models to represent the relationship between X and Y , because they provide a compact description of the properties of the change in the data distribution.1 They, for instance, help characterize transportability of experimental findings (Pearl and Bareinboim 2011) or recoverability from selection bias (Bareinboim, Tian, and Pearl 2014). As another contribution, we further focus on a typical DA scenario where both PY and PX|Y (or the causal mechanism to generate effect X from cause Y ) change across domains, but their changes are independent from each other, as implied by the causal model Y → X . We assume that the source domains contains rich information such that for each class, P t X|Y can be approximated by a linear mixture of PX|Y on source domains. Together with other mild conditions on PX|Y , we then show that P t X|Y , as well as P t Y , is identifiable (or can be uniquely recovered). We present a computationally efficient method to estimate the involved parameters based on kernel mean distribution embedding (Smola et al. 2007; Gretton et al. 2007), followed by several approaches to constructing the target classifier using those parameters. One might wonder how to find the causal information underlying the data to facilitate domain adaptation. We note that in practice, background causal knowledge is usually available, helping formulating how to transfer the knowledge from source domains to the target. Even if this is not the case, multiple source domains with different data distributions may allow one to identify the causal structure, since the causal knowledge can be seen from the change in data distributions; see e.g., (Tian and Pearl 2001). 
1 Possible DA Situations and Their Solutions DA can be considered as a learning problem in nonstationary environments (Sugiyama and Kawanabe 2012). It is helpful to find how the data distribution changes; it provides the clues as to find the learning machine for the target domain. The causal model also describes how the components of the joint distribution are related to each other, which, for instance, gives a causal explanation of the behavior of semi-supervised learning (Schölkopf et al. 2012). Table 1: Notation used in this paper. X , Y random variables X , Y domains", "title": "" }, { "docid": "neg:1840266_12", "text": "Machine learning addresses the question of how to build computers that improve automatically through experience. It is one of today’s most rapidly growing technical fields, lying at the intersection of computer science and statistics, and at the core of artificial intelligence and data science. Recent progress in machine learning has been driven both by the development of new learning algorithms and theory and by the ongoing explosion in the availability of online data and low-cost computation. The adoption of data-intensive machine-learning methods can be found throughout science, technology and commerce, leading to more evidence-based decision-making across many walks of life, including health care, manufacturing, education, financial modeling, policing, and marketing.", "title": "" }, { "docid": "neg:1840266_13", "text": "Psychedelic drugs such as LSD and psilocybin are often claimed to be capable of inducing life-changing experiences described as mystical or transcendental, especially if high doses are taken. The present study examined possible enduring effects of such experiences by comparing users of psychedelic drugs (n = 88), users of nonpsychedelic illegal drugs (e.g., marijuana, amphetamines) (n = 29) and non illicit drug-using social drinkers (n = 66) on questionnaire measures of values, beliefs and emotional empathy. Samples were obtained from Israel (n = 110) and Australia (n = 73) in a cross-cultural comparison to see if values associated with psychedelic drug use transcended culture of origin. Psychedelic users scored significantly higher on mystical beliefs (e.g., oneness with God and the universe) and life values of spirituality and concern for others than the other groups, and lower on the value of financial prosperity, irrespective of culture of origin. Users of nonpsychedelic illegal drugs scored significantly lower on a measure of coping ability than both psychedelic users and non illicit drug users. Both groups of illegal drug users scored significantly higher on empathy than non illicit drug users. Results are discussed in the context of earlier findings from Pahnke (1966) and Doblin (1991) of the transformative effect of psychedelic experiences, although the possibility remains that present findings reflect predrug characteristics of those who chose to take psychedelic drugs rather than effects of the drugs themselves.", "title": "" }, { "docid": "neg:1840266_14", "text": "A novel topology for a soft-switching buck dc– dc converter with a coupled inductor is proposed. The soft-switching buck converter has advantages over the traditional hardswitching converters. The most significant advantage is that it offers a lower switching loss. This converter operates under a zero-current switching condition at turn on and a zero-voltage switching condition at turn off. 
It presents the circuit configuration with a least components for realizing soft switching. Because of soft switching, the proposed converter can attain a high efficiency under heavy load conditions. Likewise, a high efficiency is also attained under light load conditions, which is significantly different from other soft switching buck converters", "title": "" }, { "docid": "neg:1840266_15", "text": "The purpose of this study is to characterize and understand the long-term behavior of the output from megavoltage radiotherapy linear accelerators. Output trends of nine beams from three linear accelerators over a period of more than three years are reported and analyzed. Output, taken during daily warm-up, forms the basis of this study. The output is measured using devices having ion chambers. These are not calibrated by accredited dosimetry laboratory, but are baseline-compared against monthly output which is measured using calibrated ion chambers. We consider the output from the daily check devices as it is, and sometimes normalized it by the actual output measured during the monthly calibration of the linacs. The data show noisy quasi-periodic behavior. The output variation, if normalized by monthly measured \"real' output, is bounded between ± 3%. Beams of different energies from the same linac are correlated with a correlation coefficient as high as 0.97, for one particular linac, and as low as 0.44 for another. These maximum and minimum correlations drop to 0.78 and 0.25 when daily output is normalized by the monthly measurements. These results suggest that the origin of these correlations is both the linacs and the daily output check devices. Beams from different linacs, independent of their energies, have lower correlation coefficient, with a maximum of about 0.50 and a minimum of almost zero. The maximum correlation drops to almost zero if the output is normalized by the monthly measured output. Some scatter plots of pairs of beam output from the same linac show band-like structures. These structures are blurred when the output is normalized by the monthly calibrated output. Fourier decomposition of the quasi-periodic output is consistent with a 1/f power law. The output variation appears to come from a distorted normal distribution with a mean of slightly greater than unity. The quasi-periodic behavior is manifested in the seasonally averaged output, showing annual variability with negative variations in the winter and positive in the summer. This trend is weakened when the daily output is normalized by the monthly calibrated output, indicating that the variation of the periodic component may be intrinsic to both the linacs and the daily measurement devices. Actual linac output was measured monthly. It needs to be adjusted once every three to six months for our tolerance and action levels. If these adjustments are artificially removed, then there is an increase in output of about 2%-4% per year.", "title": "" }, { "docid": "neg:1840266_16", "text": "Citrus fruits have potential health-promoting properties and their essential oils have long been used in several applications. Due to biological effects described to some citrus species in this study our objectives were to analyze and compare the phytochemical composition and evaluate the anti-inflammatory effect of essential oils (EO) obtained from four different Citrus species. Mice were treated with EO obtained from C. limon, C. latifolia, C. aurantifolia or C. limonia (10 to 100 mg/kg, p.o.) 
and their anti-inflammatory effects were evaluated in chemical induced inflammation (formalin-induced licking response) and carrageenan-induced inflammation in the subcutaneous air pouch model. A possible antinociceptive effect was evaluated in the hot plate model. Phytochemical analyses indicated the presence of geranial, limonene, γ-terpinene and others. EOs from C. limon, C. aurantifolia and C. limonia exhibited anti-inflammatory effects by reducing cell migration, cytokine production and protein extravasation induced by carrageenan. These effects were also obtained with similar amounts of pure limonene. It was also observed that C. aurantifolia induced myelotoxicity in mice. Anti-inflammatory effect of C. limon and C. limonia is probably due to their large quantities of limonene, while the myelotoxicity observed with C. aurantifolia is most likely due to the high concentration of citral. Our results indicate that these EOs from C. limon, C. aurantifolia and C. limonia have a significant anti-inflammatory effect; however, care should be taken with C. aurantifolia.", "title": "" }, { "docid": "neg:1840266_17", "text": "This paper reports, to our knowledge, the first spherical induction motor (SIM) operating with closed loop control. The motor can produce up to 4 Nm of torque along arbitrary axes with continuous speeds up to 300 rpm. The motor's rotor is a two-layer copper-over-iron spherical shell. The stator has four independent inductors that generate thrust forces on the rotor surface. The motor is also equipped with four optical mouse sensors that measure surface velocity to estimate the rotor's angular velocity, which is used for vector control of the inductors and control of angular velocity and orientation. Design considerations including torque distribution for the inductors, angular velocity sensing, angular velocity control, and orientation control are presented. Experimental results show accurate tracking of velocity and orientation commands.", "title": "" }, { "docid": "neg:1840266_18", "text": "We report on an automated runtime anomaly detection method at the application layer of multi-node computer systems. Although several network management systems are available in the market, none of them have sufficient capabilities to detect faults in multi-tier Web-based systems with redundancy. We model a Web-based system as a weighted graph, where each node represents a \"service\" and each edge represents a dependency between services. Since the edge weights vary greatly over time, the problem we address is that of anomaly detection from a time sequence of graphs.In our method, we first extract a feature vector from the adjacency matrix that represents the activities of all of the services. The heart of our method is to use the principal eigenvector of the eigenclusters of the graph. Then we derive a probability distribution for an anomaly measure defined for a time-series of directional data derived from the graph sequence. Given a critical probability, the threshold value is adaptively updated using a novel online algorithm.We demonstrate that a fault in a Web application can be automatically detected and the faulty services are identified without using detailed knowledge of the behavior of the system.", "title": "" }, { "docid": "neg:1840266_19", "text": "We present a novel sketch-based system for the interactive modeling of a variety of free-form 3D objects using just a few strokes. 
Our technique is inspired by the traditional illustration strategy for depicting 3D forms where the basic geometric forms of the subjects are identified, sketched and progressively refined using few key strokes. We introduce two parametric surfaces, rotational and cross sectional blending, that are inspired by this illustration technique. We also describe orthogonal deformation and cross sectional oversketching as editing tools to complement our modeling techniques. Examples with models ranging from cartoon style to botanical illustration demonstrate the capabilities of our system.", "title": "" } ]
1840267
A Practical Wireless Attack on the Connected Car and Security Protocol for In-Vehicle CAN
[ { "docid": "pos:1840267_0", "text": "Modern intelligent vehicles have electronic control units containing firmware that enables various functions in the vehicle. New firmware versions are constantly developed to remove bugs and improve functionality. Automobile manufacturers have traditionally performed firmware updates over cables but in the near future they are aiming at conducting firmware updates over the air, which would allow faster updates and improved safety for the driver. In this paper, we present a protocol for secure firmware updates over the air. The protocol provides data integrity, data authentication, data confidentiality, and freshness. In our protocol, a hash chain is created of the firmware, and the first packet is signed by a trusted source, thus authenticating the whole chain. Moreover, the packets are encrypted using symmetric keys. We discuss the practical considerations that exist for implementing our protocol and show that the protocol is computationally efficient, has low memory overhead, and is suitable for wireless communication. Therefore, it is well suited to the limited hardware resources in the wireless vehicle environment.", "title": "" } ]
[ { "docid": "neg:1840267_0", "text": "In this paper, a novel method for lung nodule detection, segmentation and recognition using computed tomography (CT) images is presented. Our contribution consists of several steps. First, the lung area is segmented by active contour modeling followed by some masking techniques to transfer non-isolated nodules into isolated ones. Then, nodules are detected by the support vector machine (SVM) classifier using efficient 2D stochastic and 3D anatomical features. Contours of detected nodules are then extracted by active contour modeling. In this step all solid and cavitary nodules are accurately segmented. Finally, lung tissues are classified into four classes: namely lung wall, parenchyma, bronchioles and nodules. This classification helps us to distinguish a nodule connected to the lung wall and/or bronchioles (attached nodule) from the one covered by parenchyma (solitary nodule). At the end, performance of our proposed method is examined and compared with other efficient methods through experiments using clinical CT images and two groups of public datasets from Lung Image Database Consortium (LIDC) and ANODE09. Solid, non-solid and cavitary nodules are detected with an overall detection rate of 89%; the number of false positive is 7.3/scan and the location of all detected nodules are recognized correctly.", "title": "" }, { "docid": "neg:1840267_1", "text": "The MOND limit is shown to follow from a requirement of space-time scale invariance of the equations of motion for nonrelativistic, purely gravitational systems; i.e., invariance of the equations of motion under (t, r) → (λt, λr) in the limit a 0 → ∞. It is suggested that this should replace the definition of the MOND limit based on the asymptotic behavior of a Newtonian-MOND interpolating function. In this way, the salient, deep-MOND results–asymptotically flat rotation curves, the mass-rotational-speed relation (baryonic Tully-Fisher relation), the Faber-Jackson relation, etc.–follow from a symmetry principle. For example, asymptotic flatness of rotation curves reflects the fact that radii change under scaling, while velocities do not. I then comment on the interpretation of the deep-MOND limit as one of \" zero mass \" : Rest masses, whose presence obstructs scaling symmetry, become negligible compared to the \" phantom \" , dynamical masses–those that some would attribute to dark matter. Unlike the former masses, the latter transform in a way that is consistent with the symmetry. Finally, I discuss the putative MOND-cosmology connection, in particular the possibility that MOND-especially the deep-MOND limit– is related to the asymptotic de Sitter geometry of our universe. I point out, in this connection, the possible relevance of a (classical) de Sitter-conformal-field-theory (dS/CFT) correspondence.", "title": "" }, { "docid": "neg:1840267_2", "text": "The ability of robotic systems to autonomously understand and/or navigate in uncertain environments is critically dependent on fairly accurate strategies, which are not always optimally achieved due to effectiveness, computational cost, and parameter settings. In this paper, we propose a novel and simple adaptive strategy to increase the efficiency and drastically reduce the computational effort in particle filters (PFs). 
The purpose of the adaptive approach (dispersion-based adaptive particle filter - DAPF) is to provide higher number of particles during the initial searching state (when the localization presents greater uncertainty) and fewer particles during the subsequent state (when the localization exhibits less uncertainty). With the aim of studying the dynamical PF behavior regarding others and putting the proposed algorithm into practice, we designed a methodology based on different target applications and a Kinect sensor. The various experiments conducted for both color tracking and mobile robot localization problems served to demonstrate that the DAPF algorithm can be further generalized. As a result, the DAPF approach significantly improved the computational performance over two well-known filtering strategies: 1) the classical PF with fixed particle set sizes, and 2) the adaptive technique named Kullback-Leiber distance.", "title": "" }, { "docid": "neg:1840267_3", "text": "Touch is our primary non-verbal communication channel for conveying intimate emotions and as such essential for our physical and emotional wellbeing. In our digital age, human social interaction is often mediated. However, even though there is increasing evidence that mediated touch affords affective communication, current communication systems (such as videoconferencing) still do not support communication through the sense of touch. As a result, mediated communication does not provide the intense affective experience of co-located communication. The need for ICT mediated or generated touch as an intuitive way of social communication is even further emphasized by the growing interest in the use of touch-enabled agents and robots for healthcare, teaching, and telepresence applications. Here, we review the important role of social touch in our daily life and the available evidence that affective touch can be mediated reliably between humans and between humans and digital agents. We base our observations on evidence from psychology, computer science, sociology, and neuroscience with focus on the first two. Our review shows that mediated affective touch can modulate physiological responses, increase trust and affection, help to establish bonds between humans and avatars or robots, and initiate pro-social behavior. We argue that ICT mediated or generated social touch can (a) intensify the perceived social presence of remote communication partners and (b) enable computer systems to more effectively convey affective information. However, this research field on the crossroads of ICT and psychology is still embryonic and we identify several topics that can help to mature the field in the following areas: establishing an overarching theoretical framework, employing better researchmethodologies, developing basic social touch building blocks, and solving specific ICT challenges.", "title": "" }, { "docid": "neg:1840267_4", "text": "This paper describes and compares two straightforward approaches for dependency parsing with partial annotations (PA). The first approach is based on a forest-based training objective for two CRF parsers, i.e., a biaffine neural network graph-based parser (Biaffine) and a traditional log-linear graph-based parser (LLGPar). The second approach is based on the idea of constrained decoding for three parsers, i.e., a traditional linear graphbased parser (LGPar), a globally normalized neural network transition-based parser (GN3Par) and a traditional linear transition-based parser (LTPar). 
For the test phase, constrained decoding is also used for completing partial trees. We conduct experiments on Penn Treebank under three different settings for simulating PA, i.e., random, most uncertain, and divergent outputs from the five parsers. The results show that LLGPar is most effective in directly learning from PA, and other parsers can achieve best performance when PAs are completed into full trees by LLGPar.", "title": "" }, { "docid": "neg:1840267_5", "text": "In this paper we introduce the task of fact checking, i.e. the assessment of the truthfulness of a claim. The task is commonly performed manually by journalists verifying the claims made by public figures. Furthermore, ordinary citizens need to assess the truthfulness of the increasing volume of statements they consume. Thus, developing fact checking systems is likely to be of use to various members of society. We first define the task and detail the construction of a publicly available dataset using statements fact-checked by journalists available online. Then, we discuss baseline approaches for the task and the challenges that need to be addressed. Finally, we discuss how fact checking relates to mainstream natural language processing tasks and can stimulate further research.", "title": "" }, { "docid": "neg:1840267_6", "text": "In this tutorial paper, we present a general architecture for digital clock and data recovery (CDR) for high-speed binary links. The architecture is based on replacing the analog loop filter and voltage-controlled oscillator (VCO) in a typical analog phase-locked loop (PLL)-based CDR with digital components. We provide a linearized analysis of the bang-bang phase detector and CDR loop including the effects of decimation and self-noise. Additionally, we provide measured results from an implementation of the digital CDR system which are directly comparable to the linearized analysis, plus measurements of the limit cycle behavior which arises in these loops when incoming jitter is small. Finally, the relative advantages of analog and digital implementations of the CDR for high-speed binary links is considered", "title": "" }, { "docid": "neg:1840267_7", "text": "The generally accepted assumption by most multimedia researchers is that learning is inhibited when on-screen text and narration containing the same information is presented simultaneously, rather than on-screen text or narration alone. This is known as the verbal redundancy effect. Are there situations where the reverse is true? This research was designed to investigate the reverse redundancy effect for non-native English speakers learning English reading comprehension, where two instructional modes were used the redundant mode and the modality mode. In the redundant mode, static pictures and audio narration were presented with synchronized redundant on-screen text. In the modality mode, only static pictures and audio were presented. In both modes, learners were allowed to control the pacing of the lessons. Participants were 209 Yemeni learners in their first year of tertiary education. Examination of text comprehension scores indicated that those learners who were exposed to the redundancy mode performed significantly better than learners in the modality mode. They were also significantly more motivated than their counterparts in the modality mode. This finding has added an important modification to the redundancy effect. 
That is the reverse redundancy effect is true for multimedia learning of English as a foreign language for students where textual information was foreign to them. In such situations, the redundant synchronized on-screen text did not impede learning; rather it reduced the cognitive load and thereby enhanced learning.", "title": "" }, { "docid": "neg:1840267_8", "text": "Cloud computing can reduce mainframe management costs, so more and more users choose to build their own cloud hosting environment. In cloud computing, all the commands through the network connection, therefore, information security is particularly important. In this paper, we will explore the types of intrusion detection systems, and integration of these types, provided an effective and output reports, so system administrators can understand the attacks and damage quickly. With the popularity of cloud computing, intrusion detection system log files are also increasing rapidly, the effect is limited and inefficient by using the conventional analysis system. In this paper, we use Hadoop's MapReduce algorithm analysis of intrusion detection System log files, the experimental results also confirmed that the calculation speed can be increased by about 89%. For the system administrator, IDS Log Cloud Analysis System (called ICAS) can provide fast and high reliability of the system.", "title": "" }, { "docid": "neg:1840267_9", "text": "In image deblurring, a fundamental problem is that the blur kernel suppresses a number of spatial frequencies that are difficult to recover reliably. In this paper, we explore the potential of a class-specific image prior for recovering spatial frequencies attenuated by the blurring process. Specifically, we devise a prior based on the class-specific subspace of image intensity responses to band-pass filters. We learn that the aggregation of these subspaces across all frequency bands serves as a good class-specific prior for the restoration of frequencies that cannot be recovered with generic image priors. In an extensive validation, our method, equipped with the above prior, yields greater image quality than many state-of-the-art methods by up to 5 dB in terms of image PSNR, across various image categories including portraits, cars, cats, pedestrians and household objects.", "title": "" }, { "docid": "neg:1840267_10", "text": "This is a critical design paper offering a possible scenario of use intended to provoke reflection about values and politics of design in persuasive computing. We describe the design of a system - Fit4Life - that encourages individuals to address the larger goal of reducing obesity in society by promoting individual healthy behaviors. Using the Persuasive Systems Design Model [26], this paper outlines the Fit4Life persuasion context, the technology, its use of persuasive messages, and an experimental design to test the system's efficacy. We also contribute a novel discussion of the ethical and sociocultural considerations involved in our design, an issue that has remained largely unaddressed in the existing persuasive technologies literature [29].", "title": "" }, { "docid": "neg:1840267_11", "text": "The origin and continuation of mankind is based on water. Water is one of the most abundant resources on earth, covering three-fourths of the planet’s surface. However, about 97% of the earth’s water is salt water in the oceans, and a tiny 3% is fresh water. 
This small percentage of the earth’s water—which supplies most of human and animal needs—exists in ground water, lakes and rivers. The only nearly inexhaustible sources of water are the oceans, which, however, are of high salinity. It would be feasible to address the water-shortage problem with seawater desalination; however, the separation of salts from seawater requires large amounts of energy which, when produced from fossil fuels, can cause harm to the environment. Therefore, there is a need to employ environmentally-friendly energy sources in order to desalinate seawater. After a historical introduction into desalination, this paper covers a large variety of systems used to convert seawater into fresh water suitable for human use. It also covers a variety of systems, which can be used to harness renewable energy sources; these include solar collectors, photovoltaics, solar ponds and geothermal energy. Both direct and indirect collection systems are included. The representative example of direct collection systems is the solar still. Indirect collection systems employ two subsystems; one for the collection of renewable energy and one for desalination. For this purpose, standard renewable energy and desalination systems are most often employed. Only industrially-tested desalination systems are included in this paper and they comprise the phase change processes, which include the multistage flash, multiple effect boiling and vapour compression and membrane processes, which include reverse osmosis and electrodialysis. The paper also includes a review of various systems that use renewable energy sources for desalination. Finally, some general guidelines are given for selection of desalination and renewable energy systems and the parameters that need to be considered.", "title": "" }, { "docid": "neg:1840267_12", "text": "The analysis of time series data is of interest to many application domains. But this analysis is challenging due to many reasons such as missing data in the series, unstructured nature of the data and errors in the data collection procedure, measuring equipment, etc. The problem of missing data while matching two time series is dealt with either by predicting a value for the missing data using the already collected data, or by completely ignoring the missing values. In this paper, we present an approach where we make use of the characteristics of the Mahalanobis Distance to inherently accommodate the missing values while finding the best match between two time series. Using this approach, we have designed two algorithms which can find the best match for a given query series in a candidate series, without imputing the missing values in the candidate. The initial algorithm finds the best nonwarped match between the candidate and the query time series, while the second algorithm is an extension of the initial algorithm to find the best match in the case of warped data using a Dynamic Time Warping (DTW) like algorithm. Thus, with experimental results we go on to conclude that the proposed warping algorithm is a good method for matching between two time series with warping and missing data.", "title": "" }, { "docid": "neg:1840267_13", "text": "BACKGROUND Asthma is the most common chronic pulmonary disease during pregnancy. 
Several previous reports have documented reversible electrocardiographic changes during severe acute asthma attacks, including tachycardia, P pulmonale, right bundle branch block, right axis deviation, and ST segment and T wave abnormalities. CASE REPORT We present the case of a pregnant patient with asthma exacerbation in which acute bronchospasm caused S1Q3T3 abnormality on an electrocardiogram (ECG). The complete workup of ECG findings of S1Q3T3 was negative and correlated with bronchospasm. The S1Q3T3 electrocardiographic abnormality can be seen in acute bronchospasm in pregnant women. The other causes like pulmonary embolism, pneumothorax, acute lung disease, cor pulmonale, and left posterior fascicular block were excluded. CONCLUSIONS Asthma exacerbations are of considerable concern during pregnancy due to their adverse effect on the fetus, and optimization of asthma treatment during pregnancy is vital for achieving good outcomes. Prompt recognition of electrocardiographic abnormality and early treatment can prevent adverse perinatal outcomes.", "title": "" }, { "docid": "neg:1840267_14", "text": "We study the design and optimization of polyhedral patterns, which are patterns of planar polygonal faces on freeform surfaces. Working with polyhedral patterns is desirable in architectural geometry and industrial design. However, the classical tiling patterns on the plane must take on various shapes in order to faithfully and feasibly approximate curved surfaces. We define and analyze the deformations these tiles must undertake to account for curvature, and discover the symmetries that remain invariant under such deformations. We propose a novel method to regularize polyhedral patterns while maintaining these symmetries into a plethora of aesthetic and feasible patterns.", "title": "" }, { "docid": "neg:1840267_15", "text": "This paper overviews NTCIR-13 Actionable Knowledge Graph (AKG) task. The task focuses on finding possible actions related to input entities and the relevant properties of such actions. AKG is composed of two subtasks: Action Mining (AM) and Actionable Knowledge Graph Generation (AKGG). Both subtasks are focused on English language. 9 runs have been submitted by 4 teams for the task. In this paper we describe both the subtasks, datasets, evaluation methods and the results of meta analyses.", "title": "" }, { "docid": "neg:1840267_16", "text": "INTRODUCTION In Britain today, children by the age of 10 years have regular access to an average of five different screens at home. In addition to the main family television, for example, many very young children have their own bedroom TV along with portable handheld computer game consoles (eg, Nintendo, Playstation, Xbox), smartphone with games, internet and video, a family computer and a laptop and/or a tablet computer (eg, iPad). Children routinely engage in two or more forms of screen viewing at the same time, such as TV and laptop. Viewing is starting earlier in life. Nearly one in three American infants has a TV in their bedroom, and almost half of all infants watch TV or DVDs for nearly 2 h/day. Across the industrialised world, watching screen media is the main pastime of children. Over the course of childhood, children spend more time watching TV than they spend in school. When including computer games, internet and DVDs, by the age of seven years, a child born today will have spent one full year of 24 h days watching screen media. 
By the age of 18 years, the average European child will have spent 3 years of 24 h days watching screen media; at this rate, by the age of 80 years, they will have spent 17.6 years glued to media screens. Yet, irrespective of the content or educational value of what is being viewed, the sheer amount of average daily screen time (ST) during discretionary hours after school is increasingly being considered an independent risk factor for disease, and is recognised as such by other governments and medical bodies but not, however, in Britain or in most of the EU. To date, views of the British and European medical establishments on increasingly high levels of child ST remain conspicuous by their absence. This paper will highlight the dramatic increase in the time children today spend watching screen media. It will provide a brief overview of some specific health and well-being concerns of current viewing levels, explain why screen viewing is distinct from other forms of sedentary behaviour, and point to the potential public health benefits of a reduction in ST. It is proposed that Britain and Europe’s medical establishments now offer guidance on the average number of hours per day children spend viewing screen media, and the age at which they start.", "title": "" }, { "docid": "neg:1840267_17", "text": "The purpose of the paper is to investigate the design of rectangular patch antenna arrays fed by microstrip and coaxial lines at 28 GHz for future 5G applications. Our objective is to design a four element antenna array with a bandwidth higher than 1 GHz and a maximum radiation gain. The performances of the rectangular 4∗1 and 2∗2 patch antenna arrays designed on Rogers RT/Duroid 5880 substrate were optimized and the simulation results reveal that the performance of 4∗1 antenna array fed by microstrip line is better than 2∗2 antenna array fed by coaxial cable. We obtained for the topology of 4∗1 rectangular patch array antenna a bandwidth of 2.15 GHz and 1.3 GHz respectively with almost similar gains of the order of 13.3 dBi.", "title": "" }, { "docid": "neg:1840267_18", "text": "Although ordinary least-squares (OLS) regression is one of the most familiar statistical tools, far less has been written, especially in the pedagogical literature, on regression through the origin (RTO). Indeed, the subject is surprisingly controversial. The present note highlights situations in which RTO is appropriate, discusses the implementation and evaluation of such models and compares RTO functions among three popular statistical packages. Some examples gleaned from past Teaching Statistics articles are used as illustrations. For expository convenience, OLS and RTO refer here to linear regressions obtained by least-squares methods with and without a constant term, respectively.", "title": "" }, { "docid": "neg:1840267_19", "text": "RICHARD M. FELDER and JONI SPURLIN North Carolina State University, Raleigh, North Carolina 27695-7905, USA. E-mail: rmfelder@mindspring.com The Index of Learning Styles (ILS) is an instrument designed to assess preferences on the four dimensions of the Felder-Silverman learning style model. The Web-based version of the ILS is taken hundreds of thousands of times per year and has been used in a number of published studies, some of which include data reflecting on the reliability and validity of the instrument. 
This paper seeks to provide the first comprehensive examination of the ILS, including answers to several questions: (1) What are the dimensions and underlying assumptions of the model upon which the ILS is based? (2) How should the ILS be used and what misuses should be avoided? (3) What research studies have been conducted using the ILS and what conclusions regarding its reliability and validity may be inferred from the data?", "title": "" } ]
1840268
Perception, Planning, and Execution for Mobile Manipulation in Unstructured Environments
[ { "docid": "pos:1840268_0", "text": "We present a complete software architecture for reliable grasping of household objects. Our work combines aspects such as scene interpretation from 3D range data, grasp planning, motion planning, and grasp failure identification and recovery using tactile sensors. We build upon, and add several new contributions to the significant prior work in these areas. A salient feature of our work is the tight coupling between perception (both visual and tactile) and manipulation, aiming to address the uncertainty due to sensor and execution errors. This integration effort has revealed new challenges, some of which can be addressed through system and software engineering, and some of which present opportunities for future research. Our approach is aimed at typical indoor environments, and is validated by long running experiments where the PR2 robotic platform was able to consistently grasp a large variety of known and unknown objects. The set of tools and algorithms for object grasping presented here have been integrated into the open-source Robot Operating System (ROS).", "title": "" } ]
[ { "docid": "neg:1840268_0", "text": "Generative adversarial nets (GANs) have been successfully applied to the artificial generation of image data. In terms of text data, much has been done on the artificial generation of natural language from a single corpus. We consider multiple text corpora as the input data, for which there can be two applications of GANs: (1) the creation of consistent cross-corpus word embeddings given different word embeddings per corpus; (2) the generation of robust bag-of-words document embeddings for each corpora. We demonstrate our GAN models on real-world text data sets from different corpora, and show that embeddings from both models lead to improvements in supervised learning problems.", "title": "" }, { "docid": "neg:1840268_1", "text": "Synthetic aperture radar automatic target recognition (SAR-ATR) has made great progress in recent years. Most of the established recognition methods are supervised, which have strong dependence on image labels. However, obtaining the labels of radar images is expensive and time-consuming. In this paper, we present a semi-supervised learning method that is based on the standard deep convolutional generative adversarial networks (DCGANs). We double the discriminator that is used in DCGANs and utilize the two discriminators for joint training. In this process, we introduce a noisy data learning theory to reduce the negative impact of the incorrectly labeled samples on the performance of the networks. We replace the last layer of the classic discriminators with the standard softmax function to output a vector of class probabilities so that we can recognize multiple objects. We subsequently modify the loss function in order to adapt to the revised network structure. In our model, the two discriminators share the same generator, and we take the average value of them when computing the loss function of the generator, which can improve the training stability of DCGANs to some extent. We also utilize images of higher quality from the generated images for training in order to improve the performance of the networks. Our method has achieved state-of-the-art results on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset, and we have proved that using the generated images to train the networks can improve the recognition accuracy with a small number of labeled samples.", "title": "" }, { "docid": "neg:1840268_2", "text": "In present investigation, two glucose based smart tumor-targeted drug delivery systems coupled with enzyme-sensitive release strategy are introduced. Magnetic nanoparticles (Fe3O4) were grafted with carboxymethyl chitosan (CS) and β-cyclodextrin (β-CD) as carriers. Prodigiosin (PG) was used as the model anti-tumor drug, targeting aggressive tumor cells. The morphology, properties and composition and grafting process were characterized by transmission electron microscope (TEM), Fourier transform infrared spectroscopy (FT-IR), vibration sample magnetometer (VSM), X-ray diffraction (XRD) analysis. The results revealed that the core crystal size of the nanoparticles synthesized were 14.2±2.1 and 9.8±1.4nm for β-CD and CS-MNPs respectively when measured using TEM; while dynamic light scattering (DLS) gave diameters of 121.1 and 38.2nm. The saturation magnetization (Ms) of bare magnetic nanoparticles is 50.10emucm-3, while modification with β-CD and CS gave values of 37.48 and 65.01emucm-3, respectively. 
The anticancer compound, prodigiosin (PG), was loaded into the NPs with an encapsulation efficiency of approximately 81% for the β-CD-MNPs, and 92% for the CS-MNPs. This translates to a drug loading capacity of 56.17 and 59.17 mg/100 mg MNPs, respectively. Measurement of in vitro release of prodigiosin from the loaded nanocarriers in the presence of the hydrolytic enzymes, alpha-amylase and chitosanase showed that 58.1 and 44.6% of the drug was released after one hour of incubation. Cytotoxicity studies of PG-loaded nanocarriers on two cancer cell lines, MCF-7 and HepG2, and on a non-cancerous control, NIH/3T3 cells, revealed that the drug loaded nanoparticles had greater efficacy on the cancer cell lines. The selective index (SI) for free PG on MCF-7 and HepG2 cells was 1.54 and 4.42 respectively. This parameter was reduced for PG-loaded β-CD-MNPs to 1.27 and 1.85, while the SI for CS-MNPs improved considerably to 7.03 on MCF-7 cells. Complementary studies by fluorescence and confocal microscopy and flow cytometry confirm specific targeting of the nanocarriers to the cancer cells. The results suggest that CS-MNPs have higher potency and are better able to target the prodigiosin toxicity effect on cancerous cells than β-CD-MNPs.", "title": "" }, { "docid": "neg:1840268_3", "text": "In this study we describe a new approach to relate simulator sickness ratings with the main frequency component of the simulator motion mismatch, that is, the computed difference between the time histories of simulator motion and vehicle motion, respectively. During two driving simulator experiments in the TNO moving-base driving simulator, which were performed for other reasons than the purpose of this study, we collected simulator sickness questionnaires from in total 58 subjects. The main frequency component was computed by means of the power spectrum density of the computed mismatch signal. We hypothesized that simulator sickness incidence depends on this frequency component, in a similar way as the incidence of real motion sickness, such as sea sickness, depends on motion frequency. The results show that the simulator sickness ratings differed between both driving simulator experiments. The experiment with its main frequency component of the mismatch signal of 0.08 Hz had significantly higher simulator sickness incidence than the experiment with its main frequency at 0.46 Hz. Since the experimental design differed between both experiments, we cannot exclusively attribute the difference in sickness ratings to the frequency component, but the observation does suggest that quantitative analysis of the mismatch between the motion profiles of the simulator and the vehicle may greatly improve our understanding of the causal mechanism of simulator sickness.", "title": "" }, { "docid": "neg:1840268_4", "text": "This paper reports on a mixed-method study in progress. The qualitative part has been completed and the quantitative part is underway. The findings of the qualitative study -- the theory of Integral Decision-Making (IDM) -- are introduced, and the research method to test IDM is discussed. It is expected that the integration of the qualitative and quantitative studies will provide insight into how data, information, and knowledge capacities can lead to more effective management decisions by incorporating more human inputs in the decision- and policy-making process. 
Implications for theory and practice will be suggested.", "title": "" }, { "docid": "neg:1840268_5", "text": "Emerging low-power radio triggering techniques for wireless motes are a promising approach to prolong the lifetime of Wireless Sensor Networks (WSNs). By allowing nodes to activate their main transceiver only when data need to be transmitted or received, wake-up-enabled solutions virtually eliminate the need for idle listening, thus drastically reducing the energy toll of communication. In this paper we describe the design of a novel wake-up receiver architecture based on an innovative pass-band filter bank with high selectivity capability. The proposed concept, demonstrated by a prototype implementation, combines both frequency-domain and time-domain addressing space to allow selective addressing of nodes. To take advantage of the functionalities of the proposed receiver, as well as of energy-harvesting capabilities modern sensor nodes are equipped with, we present a novel wake-up-enabled harvesting-aware communication stack that supports both interest dissemination and converge casting primitives. This stack builds on the ability of the proposed WuR to support dynamic address assignment, which is exploited to optimize system performance. Comparison against traditional WSN protocols shows that the proposed concept allows to optimize performance tradeoffs with respect to existing low-power communication stacks.", "title": "" }, { "docid": "neg:1840268_6", "text": "In this paper, a bidirectional converter with a uniform controller for Vehicle to grid (V2G) application is designed. The bidirectional converter consists of two stages one is ac-dc converter and second is dc-dc converter. For ac-dc converter bipolar modulation is used. Two separate controller systems are designed for converters which follow active and reactive power commands from grid. Uniform controller provides reactive power support to the grid. The charger operates in two quadrants I and IV. There are three modes of operation viz. charging only operation, charging-capacitive operation and charging-inductive operation. During operation under these three operating modes vehicle's battery is not affected. The whole system is tested using MATLAB/SIMULINK.", "title": "" }, { "docid": "neg:1840268_7", "text": "The adoption of the General Data Protection Regulation (GDPR) is a major concern for data controllers of the public and private sector, as they are obliged to conform to the new principles and requirements managing personal data. In this paper, we propose that the data controllers adopt the concept of the Privacy Level Agreement. We present a metamodel for PLAs to support privacy management, based on analysis of privacy threats, vulnerabilities and trust relationships in their Information Systems, whilst complying with laws and regulations, and we illustrate the relevance of the metamodel with the GDPR.", "title": "" }, { "docid": "neg:1840268_8", "text": "Projective analysis is an important solution in three-dimensional (3D) shape retrieval, since human visual perceptions of 3D shapes rely on various 2D observations from different viewpoints. Although multiple informative and discriminative views are utilized, most projection-based retrieval systems suffer from heavy computational cost, and thus cannot satisfy the basic requirement of scalability for search engines. 
In the past three years, shape retrieval contest (SHREC) pays much attention to the scalability of 3D shape retrieval algorithms, and organizes several large scale tracks accordingly [1]–[3]. However, the experimental results indicate that conventional algorithms cannot be directly applied to large datasets. In this paper, we present a real-time 3D shape search engine based on the projective images of 3D shapes. The real-time property of our search engine results from the following aspects: 1) efficient projection and view feature extraction using GPU acceleration; 2) the first inverted file, called F-IF, is utilized to speed up the procedure of multiview matching; and 3) the second inverted file, which captures a local distribution of 3D shapes in the feature manifold, is adopted for efficient context-based reranking. As a result, for each query the retrieval task can be finished within one second despite the necessary cost of IO overhead. We name the proposed 3D shape search engine, which combines GPU acceleration and inverted file (twice), as GIFT. Besides its high efficiency, GIFT also outperforms state-of-the-art methods significantly in retrieval accuracy on various shape benchmarks (ModelNet40 dataset, ModelNet10 dataset, PSB dataset, McGill dataset) and competitions (SHREC14LSGTB, ShapeNet Core55, WM-SHREC07).", "title": "" }, { "docid": "neg:1840268_9", "text": "This article provides a brief overview of several classes of fiber reinforced cement based composites and suggests future directions in FRC development. Special focus is placed on micromechanics based design methodology of strain-hardening cement based composites. As example, a particular engineered cementitious composite newly developed at the ACE-MRL at the University of Michigan is described in detail with regard to its design, material composition, processing, and mechanical properties. Three potential applications which utilize the unique properties of such composites are cited in this paper, and future research needs are identified. * To appear in Fiber Reinforced Concrete: Present and the Future, Eds: N. Banthia, A. Bentur, and A. Mufti, Canadian Society of Civil Engineers, 1997.", "title": "" }, { "docid": "neg:1840268_10", "text": "Precision-Recall analysis abounds in applications of binary classification where true negatives do not add value and hence should not affect assessment of the classifier’s performance. Perhaps inspired by the many advantages of receiver operating characteristic (ROC) curves and the area under such curves for accuracy-based performance assessment, many researchers have taken to report Precision-Recall (PR) curves and associated areas as performance metric. We demonstrate in this paper that this practice is fraught with difficulties, mainly because of incoherent scale assumptions – e.g., the area under a PR curve takes the arithmetic mean of precision values whereas the Fβ score applies the harmonic mean. We show how to fix this by plotting PR curves in a different coordinate system, and demonstrate that the new Precision-Recall-Gain curves inherit all key advantages of ROC curves. In particular, the area under Precision-Recall-Gain curves conveys an expected F1 score on a harmonic scale, and the convex hull of a Precision-Recall-Gain curve allows us to calibrate the classifier’s scores so as to determine, for each operating point on the convex hull, the interval of β values for which the point optimises Fβ. 
We demonstrate experimentally that the area under traditional PR curves can easily favour models with lower expected F1 score than others, and so the use of Precision-Recall-Gain curves will result in better model selection.", "title": "" }, { "docid": "neg:1840268_10", "text": "We have developed an interactive pop-up book called Electronic Popables to explore paper-based computing. Our book integrates traditional pop-up mechanisms with thin, flexible, paper-based electronics and the result is an artifact that looks and functions much like an ordinary pop-up, but has added elements of dynamic interactivity. This paper introduces the book and, through it, a library of paper-based sensors and a suite of paper-electronics construction techniques. We also reflect on the unique and under-explored opportunities that arise from combining material experimentation, artistic design, and engineering.", "title": "" }, { "docid": "neg:1840268_11", "text": "The availability in machine-readable form of descriptions of the structure of documents, as well as of the document discourse (e.g. the scientific discourse within scholarly articles), is crucial for facilitating semantic publishing and the overall comprehension of documents by both users and machines. In this paper we introduce DoCO, the Document Components Ontology, an OWL 2 DL ontology that provides a general-purpose structured vocabulary of document elements to describe both structural and rhetorical document components in RDF. In addition to the formal description of the ontology, this paper showcases its utility in practice in a variety of our own applications and other activities of the Semantic Publishing community that rely on DoCO to annotate and retrieve document components of scholarly articles.", "title": "" }, { "docid": "neg:1840268_12", "text": "As object-oriented class libraries evolve, classes are occasionally deprecated in favor of others with roughly the same functionality. In Java's standard libraries, for example, class Hashtable has been superseded by HashMap, and Iterator is now preferred over Enumeration. Migrating client applications to use the new idioms is often desirable, but making the required changes to declarations and allocation sites can be quite labor-intensive. Moreover, migration becomes complicated---and sometimes impossible---if an application interacts with external components, if a legacy class is not completely equivalent to its replacement, or if multiple interdependent classes must be migrated simultaneously. We present an approach in which mappings between legacy classes and their replacements are specified by the programmer. Then, an analysis based on type constraints determines where declarations and allocation sites can be updated. The method was implemented in Eclipse, and evaluated on a number of Java applications. On average, our tool could migrate more than 90% of the references to legacy classes.", "title": "" }, { "docid": "neg:1840268_13", "text": "Neural Networks are very successful in acquiring hidden knowledge in datasets. Their most important weakness is that the knowledge they acquire is represented in a form not understandable to humans. The understandability problem of Neural Networks can be solved by extracting Decision Rules or Decision Trees from the trained network. There are several Decision Rule extraction methods and Mark Craven's TREPAN which extracts MofN type Decision Trees from trained networks.
We introduced new splitting techniques for extracting classical Decision Trees from trained Neural Networks. We showed that the new method (DecText) is effective in extracting high fidelity trees from trained networks. We also introduced a new discretization technique to make DecText able to handle continuous features and a new pruning technique for finding the simplest tree with the highest fidelity.", "title": "" }, { "docid": "neg:1840268_14", "text": "The kinematics of contact describe the motion of a point of contact over the surfaces of two contacting objects in response to a relative motion of these objects. Using concepts from differential geometry, I derive a set of equations, called the contact equations, that embody this relationship. I employ the contact equations to design the following applications to be executed by an end-effector with tactile sensing capability: (1) determining the curvature form of an unknown object at a point of contact; and (2) following the surface of an unknown object. The contact equations also serve as a basis for an investigation of the kinematics of grasp. I derive the relationship between the relative motion of two fingers grasping an object and the motion of the points of contact over the object surface. Based on this analysis, we explore the following applications: (1) rolling a sphere between two arbitrarily shaped fingers; (2) fine grip adjustment (i.e., having two fingers that grasp an unknown object locally optimize their grip for maximum stability).", "title": "" }, { "docid": "neg:1840268_15", "text": "In this paper, the shielded coil structure using the ferrites and the metallic shielding is proposed. It is compared with the unshielded coil structure (i.e. a pair of circular loop coils only) to demonstrate the differences in the magnetic field distributions and system performance. The simulation results using the 3D Finite Element Analysis (FEA) tool show that it can considerably suppress the leakage magnetic field from a 100W-class wireless power transfer (WPT) system with enhanced system performance.", "title": "" }, { "docid": "neg:1840268_16", "text": "We review the literature on pathological narcissism and narcissistic personality disorder (NPD) and describe a significant criterion problem related to four inconsistencies in phenotypic descriptions and taxonomic models across clinical theory, research, and practice; psychiatric diagnosis; and social/personality psychology. This impedes scientific synthesis, weakens narcissism's nomological net, and contributes to a discrepancy between low prevalence rates of NPD and higher rates of practitioner-diagnosed pathological narcissism, along with an enormous clinical literature on narcissistic disturbances. Criterion issues must be resolved, including clarification of the nature of normal and pathological narcissism, incorporation of the two broad phenotypic themes of narcissistic grandiosity and narcissistic vulnerability into revised diagnostic criteria and assessment instruments, elimination of references to overt and covert narcissism that reify these modes of expression as distinct narcissistic types, and determination of the appropriate structure for pathological narcissism.
Implications for the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders and the science of personality disorders are presented.", "title": "" }, { "docid": "neg:1840268_17", "text": "Based on land use and land cover (LULC) datasets in the late 1970s, the early 1990s, 2004 and 2012, we analyzed characteristics of LULC change in the headwaters of the Yangtze River and Yellow River over the past 30 years contrastively, using the transition matrix and LULC change index. The results showed that, in 2012, the LULC in the headwaters of the Yellow River were different compared to those of the headwaters of the Yangtze River, with more grassland, wetland, and marshland. In the past 30 years, the grassland, wetland, and marshland increasing at the expense of sand, gobi, and bare land and desert were the main LULC change types in the headwaters of the Yangtze River, with the macro-ecological situation experiencing a process of degeneration, slight melioration, and continuous melioration, in that order. In the headwaters of the Yellow River, severe reduction of grassland coverage, shrinkage of wetland and marshland and the consequential expansion of sand, gobi and bare land were noticed. The macro-ecological situation experienced a process of degeneration, obvious degeneration, and slight melioration, in that order, and the overall change in magnitude was more dramatic than that in the headwaters of the Yangtze River. These different LULC change courses were jointly driven by climate change, grassland-grazing pressure, and the implementation of ecological construction projects.", "title": "" }, { "docid": "neg:1840268_18", "text": "Forecasting of time series that have seasonal and other variations remains an important problem for forecasters. This paper presents a neural network (NN) approach to forecasting quarterly time series. With a large data set of 756 quarterly time series from the M3 forecasting competition, we conduct a comprehensive investigation of the effectiveness of several data preprocessing and modeling approaches. We consider two data preprocessing methods and 48 NN models with different possible combinations of lagged observations, seasonal dummy variables, trigonometric variables, and time index as inputs to the NN. Both parametric and nonparametric statistical analyses are performed to identify the best models under different circumstances and categorize similar models. Results indicate that simpler models, in general, outperform more complex models. In addition, data preprocessing especially with deseasonalization and detrending is very helpful in improving NN performance. Practical guidelines are also provided.", "title": "" } ]
1840269
ARGUS: An Automated Multi-Agent Visitor Identification System
[ { "docid": "pos:1840269_0", "text": "This paper surveys locally weighted learning, a form of lazy learning and memory-based learning, and focuses on locally weighted linear regression. The survey discusses distance functions, smoothing parameters, weighting functions, local model structures, regularization of the estimates and bias, assessing predictions, handling noisy data and outliers, improving the quality of predictions by tuning fit parameters, interference between old and new data, implementing locally weighted learning efficiently, and applications of locally weighted learning. A companion paper surveys how locally weighted learning can be used in robot learning and control.", "title": "" } ]
[ { "docid": "neg:1840269_0", "text": "Open cross-section, thin-walled, cold-formed steel columns have at least three competing buckling modes: local, dis and Euler~i.e., flexural or flexural-torsional ! buckling. Closed-form prediction of the buckling stress in the local mode, includ interaction of the connected elements, and the distortional mode, including consideration of the elastic and geometric stiffne web/flange juncture, are provided and shown to agree well with numerical methods. Numerical analyses and experiments postbuckling capacity in the distortional mode is lower than in the local mode. Current North American design specificati cold-formed steel columns ignore local buckling interaction and do not provide an explicit check for distortional buckling. E experiments on cold-formed channel, zed, and rack columns indicate inconsistency and systematic error in current design me provide validation for alternative methods. A new method is proposed for design that explicitly incorporates local, distortional an buckling, does not require calculations of effective width and/or effective properties, gives reliable predictions devoid of systema and provides a means to introduce rational analysis for elastic buckling prediction into the design of thin-walled columns. DOI: 10.1061/ ~ASCE!0733-9445~2002!128:3~289! CE Database keywords: Thin-wall structures; Columns; Buckling; Cold-formed steel.", "title": "" }, { "docid": "neg:1840269_1", "text": "In this paper we propose a new semi-supervised GAN architecture (ss-InfoGAN) for image synthesis that leverages information from few labels (as little as 0.22%, max. 10% of the dataset) to learn semantically meaningful and controllable data representations where latent variables correspond to label categories. The architecture builds on Information Maximizing Generative Adversarial Networks (InfoGAN) and is shown to learn both continuous and categorical codes and achieves higher quality of synthetic samples compared to fully unsupervised settings. Furthermore, we show that using small amounts of labeled data speeds-up training convergence. The architecture maintains the ability to disentangle latent variables for which no labels are available. Finally, we contribute an information-theoretic reasoning on how introducing semi-supervision increases mutual information between synthetic and real data.", "title": "" }, { "docid": "neg:1840269_2", "text": "Barrier coverage is a critical issue in wireless sensor networks for security applications (e.g., border protection) where directional sensors (e.g., cameras) are becoming more popular than omni-directional scalar sensors (e.g., microphones). However, barrier coverage cannot be guaranteed after initial random deployment of sensors, especially for directional sensors with limited sensing angles. In this paper, we study how to efficiently use mobile sensors to achieve \\(k\\) -barrier coverage. In particular, two problems are studied under two scenarios. First, when only the stationary sensors have been deployed, what is the minimum number of mobile sensors required to form \\(k\\) -barrier coverage? Second, when both the stationary and mobile sensors have been pre-deployed, what is the maximum number of barriers that could be formed? 
To solve these problems, we introduce a novel concept of weighted barrier graph (WBG) and prove that determining the minimum number of mobile sensors required to form \\(k\\) -barrier coverage is related with finding \\(k\\) vertex-disjoint paths with the minimum total length on the WBG. With this observation, we propose an optimal solution and a greedy solution for each of the two problems. Both analytical and experimental studies demonstrate the effectiveness of the proposed algorithms.", "title": "" }, { "docid": "neg:1840269_3", "text": "This paper presents an S-Transform based probabilistic neural network (PNN) classifier for recognition of power quality (PQ) disturbances. The proposed method requires less number of features as compared to wavelet based approach for the identification of PQ events. The features extracted through the S-Transform are trained by a PNN for automatic classification of the PQ events. Since the proposed methodology can reduce the features of the disturbance signal to a great extent without losing its original property, less memory space and learning PNN time are required for classification. Eleven types of disturbances are considered for the classification problem. The simulation results reveal that the combination of S-Transform and PNN can effectively detect and classify different PQ events. The classification performance of PNN is compared with a feedforward multilayer (FFML) neural network (NN) and learning vector quantization (LVQ) NN. It is found that the classification performance of PNN is better than both FFML and LVQ.", "title": "" }, { "docid": "neg:1840269_4", "text": "Most exact algorithms for solving partially observable Markov decision processes (POMDPs) are based on a form of dynamic programming in which a piecewise-linear and convex representation of the value function is updated at every iteration to more accurately approximate the true value function. However, the process is computationally expensive, thus limiting the practical application of POMDPs in planning. To address this current limitation, we present a parallel distributed algorithm based on the Restricted Region method proposed by Cassandra, Littman and Zhang [1]. We compare performance of the parallel algorithm against a serial implementation Restricted Region.", "title": "" }, { "docid": "neg:1840269_5", "text": "Cloud computing's pay-per-use model greatly reduces upfront cost and also enables on-demand scalability as service demand grows or shrinks. Hybrid clouds are an attractive option in terms of cost benefit, however, without proper elastic resource management, computational resources could be over-provisioned or under-provisioned, resulting in wasting money or failing to satisfy service demand. In this paper, to accomplish accurate performance prediction and cost-optimal resource management for hybrid clouds, we introduce Workload-tailored Elastic Compute Units (WECU) as a measure of computing resources analogous to Amazon EC2's ECUs, but customized for a specific workload. We present a dynamic programming-based scheduling algorithm to select a combination of private and public resources which satisfy a desired throughput. Using a loosely-coupled benchmark, we confirmed WECUs have 24 (J% better runtime prediction ability than ECUs on average. 
Moreover, simulation results with a real workload distribution of web service requests show that our WECU-based algorithm reduces costs by 8-31% compared to a fixed provisioning approach.", "title": "" }, { "docid": "neg:1840269_6", "text": "In recent years, data mining researchers have developed efficient association rule algorithms for retail market basket analysis. Still, retailers often complain about how to adopt association rules to optimize concrete retail marketing-mix decisions. It is in this context that, in a previous paper, the authors have introduced a product selection model called PROFSET. This model selects the most interesting products from a product assortment based on their cross-selling potential given some retailer defined constraints. However this model suffered from an important deficiency: it could not deal effectively with supermarket data, and no provisions were taken to include retail category management principles. Therefore, in this paper, the authors present an important generalization of the existing model in order to make it suitable for supermarket data as well, and to enable retailers to add category restrictions to the model. Experiments on real world data obtained from a Belgian supermarket chain produce very promising results and demonstrate the effectiveness of the generalized PROFSET model.", "title": "" }, { "docid": "neg:1840269_7", "text": "The class scheduling problem can be modeled by a graph where the vertices and edges represent the courses and the common students, respectively. The problem is to assign the courses a given number of time slots (colors), where each time slot can be used for a given number of class rooms. The Vertex Coloring (VC) algorithm is a polynomial time algorithm which produces a conflict free solution using the least number of colors [9]. However, the VC solution may not be implementable because it uses a number of time slots that exceed the available ones with unbalanced use of class rooms. We propose a heuristic approach VC* to (1) promote uniform distribution of courses over the colors and to (2) balance course load for each time slot over the available class rooms. The performance function represents the percentage of students in all courses that could not be mapped to time slots or to class rooms. A randomized simulation of registration of four departments with up to 1200 students is used to evaluate the performance of proposed heuristic.", "title": "" }, { "docid": "neg:1840269_8", "text": "Continuous vector representations of words and objects appear to carry surprisingly rich semantic content. In this paper, we advance both the conceptual and theoretical understanding of word embeddings in three ways. First, we ground embeddings in semantic spaces studied in cognitivepsychometric literature and introduce new evaluation tasks. Second, in contrast to prior work, we take metric recovery as the key object of study, unify existing algorithms as consistent metric recovery methods based on co-occurrence counts from simple Markov random walks, and propose a new recovery algorithm. Third, we generalize metric recovery to graphs and manifolds, relating co-occurence counts on random walks in graphs and random processes on manifolds to the underlying metric to be recovered, thereby reconciling manifold estimation and embedding algorithms. 
We compare embedding algorithms across a range of tasks, from nonlinear dimensionality reduction to three semantic language tasks, including analogies, sequence completion, and classification.", "title": "" }, { "docid": "neg:1840269_9", "text": "Electronic fashion (eFashion) garments use technology to augment the human body with wearable interaction. In developing ideas, eFashion designers need to prototype the role and behavior of the interactive garment in context; however, current wearable prototyping toolkits require semi-permanent construction with physical materials that cannot easily be altered. We present Bod-IDE, an augmented reality 'mirror' that allows eFashion designers to create virtual interactive garment prototypes. Designers can quickly build, refine, and test on-the-body interactions without the need to connect or program electronics. By envisioning interaction with the body in mind, eFashion designers can focus more on reimagining the relationship between bodies, clothing, and technology.", "title": "" }, { "docid": "neg:1840269_10", "text": "R ecommender systems have become important tools in ecommerce. They combine one user’s ratings of products or services with ratings from other users to answer queries such as “Would I like X?” with predictions and suggestions. Users thus receive anonymous recommendations from people with similar tastes. While this process seems innocuous, it aggregates user preferences in ways analogous to statistical database queries, which can be exploited to identify information about a particular user. This is especially true for users with eclectic tastes who rate products across different types or domains in the systems. These straddlers highlight the conflict between personalization and privacy in recommender systems. While straddlers enable serendipitous recommendations, information about their existence could be used in conjunction with other data sources to uncover identities and reveal personal details. We use a graph-theoretic model to study the benefit from and risk to straddlers.", "title": "" }, { "docid": "neg:1840269_11", "text": "During the last years, in particular due to the Digital Humanities, empirical processes, data capturing or data analysis got more and more popular as part of humanities research. In this paper, we want to show that even the complete scientific method of natural science can be applied in the humanities. By applying the scientific method to the humanities, certain kinds of problems can be solved in a confirmable and replicable manner. In particular, we will argue that patterns may be perceived as the analogon to formulas in natural science. This may provide a new way of representing solution-oriented knowledge in the humanities. Keywords-pattern; pattern languages; digital humanities;", "title": "" }, { "docid": "neg:1840269_12", "text": "This paper describes a CMOS-based time-of-flight depth sensor and presents some experimental data while addressing various issues arising from its use. Our system is a single-chip solution based on a special CMOS pixel structure that can extract phase information from the received light pulses. The sensor chip integrates a 64x64 pixel array with a high-speed clock generator and ADC. A unique advantage of the chip is that it can be manufactured with an ordinary CMOS process. 
Compared with other types of depth sensors reported in the literature, our solution offers significant advantages, including superior accuracy, high frame rate, cost effectiveness and a drastic reduction in processing required to construct the depth maps. We explain the factors that determine the resolution of our system, discuss various problems that a time-of-flight depth sensor might face, and propose practical solutions.", "title": "" }, { "docid": "neg:1840269_13", "text": "Research and industry increasingly make use of large amounts of data to guide decision-making. To do this, however, data needs to be analyzed in typically nontrivial refinement processes, which require technical expertise about methods and algorithms, experience with how a precise analysis should proceed, and knowledge about an exploding number of analytic approaches. To alleviate these problems, a plethora of different systems have been proposed that “intelligently” help users to analyze their data.\n This article provides a first survey to almost 30 years of research on intelligent discovery assistants (IDAs). It explicates the types of help IDAs can provide to users and the kinds of (background) knowledge they leverage to provide this help. Furthermore, it provides an overview of the systems developed over the past years, identifies their most important features, and sketches an ideal future IDA as well as the challenges on the road ahead.", "title": "" }, { "docid": "neg:1840269_14", "text": "The purpose of this research was to explore the experiences of left-handed adults. Four semistructured interviews were conducted with left-handed adults (2 men and 2 women) about their experiences. After transcribing the data, interpretative phenomenological analysis (IPA), which is a qualitative approach, was utilized to analyze the data. The analysis highlighted some major themes which were organized to make a model of life experiences of left-handers. The highlighted themes included Left-handers‟ Development: Interplay of Heredity Basis and Environmental Influences, Suppression of Left-hand, Support and Consideration, Feeling It Is Okay, Left-handers as Being Particular, Physical and Psychological Health Challenges, Agonized Life, Struggle for Maintaining Identity and Transforming Attitude, Attitudinal Barriers to Equality and Acceptance. Implications of the research for parents, teachers, and psychologists are discussed.", "title": "" }, { "docid": "neg:1840269_15", "text": "A key goal of smart grid initiatives is significantly increasing the fraction of grid energy contributed by renewables. One challenge with integrating renewables into the grid is that their power generation is intermittent and uncontrollable. Thus, predicting future renewable generation is important, since the grid must dispatch generators to satisfy demand as generation varies. While manually developing sophisticated prediction models may be feasible for large-scale solar farms, developing them for distributed generation at millions of homes throughout the grid is a challenging problem. To address the problem, in this paper, we explore automatically creating site-specific prediction models for solar power generation from National Weather Service (NWS) weather forecasts using machine learning techniques. We compare multiple regression techniques for generating prediction models, including linear least squares and support vector machines using multiple kernel functions. 
We evaluate the accuracy of each model using historical NWS forecasts and solar intensity readings from a weather station deployment for nearly a year. Our results show that SVM-based prediction models built using seven distinct weather forecast metrics are 27% more accurate for our site than existing forecast-based models.", "title": "" }, { "docid": "neg:1840269_16", "text": "Accurate keypoint localization of human pose needs diversified features: the high level for contextual dependencies and the low level for detailed refinement of joints. However, the importance of the two factors varies from case to case, but how to efficiently use the features is still an open problem. Existing methods have limitations in preserving low level features, adaptively adjusting the importance of different levels of features, and modeling the human perception process. This paper presents three novel techniques step by step to efficiently utilize different levels of features for human pose estimation. Firstly, an inception of inception (IOI) block is designed to emphasize the low level features. Secondly, an attention mechanism is proposed to adjust the importance of individual levels according to the context. Thirdly, a cascaded network is proposed to sequentially localize the joints to enforce message passing from joints of stand-alone parts like head and torso to remote joints like wrist or ankle. Experimental results demonstrate that the proposed method achieves the state-of-the-art performance on both MPII and", "title": "" }, { "docid": "neg:1840269_17", "text": "In this paper, we propose a novel NoC architecture, called darkNoC, where multiple layers of architecturally identical, but physically different routers are integrated, leveraging the extra transistors available due to dark silicon. Each layer is separately optimized for a particular voltage-frequency range by the adroit use of multi-Vt circuit optimization. At a given time, only one of the network layers is illuminated while all the other network layers are dark. We provide architectural support for seamless integration of multiple network layers, and a fast inter-layer switching mechanism without dropping in-network packets. Our experiments on a 4 × 4 mesh with multi-programmed real application workloads show that darkNoC improves energy-delay product by up to 56% compared to a traditional single layer NoC with state-of-the-art DVFS. This illustrates darkNoC can be used as an energy-efficient communication fabric in future dark silicon chips.", "title": "" }, { "docid": "neg:1840269_18", "text": "54-year-old male with a history of necrolytic migratory erythema (NME) (Figs. 1 and 2) and glossitis. Routine blood tests were normal except for glucose: 145 mg/dL. Baseline plasma glucagon levels were 1200 pg/mL. Serum zinc was 97 mcg/dL. An abdominal CT scan showed a large mass involving the body and tail of the pancreas. A distal pancreatectomy with splenectomy was performed and no evidence of metastatic disease was observed. The skin rash cleared within a week after the operation and the patient remains free of disease at 38 months following surgery. COMMENTS Glucagonoma syndrome is a paraneoplastic phenomenon comprising a pancreatic glucagon-secreting insular tumor, necrolytic migratory erythema (NME), diabetes, weight loss, anemia, stomatitis, thrombo-embolism, dyspepsia, and neuropsychiatric disturbances. The occurrence of one or more of these symptoms associated with a proven pancreatic neoplasm fits this diagnosis (1).
Other skin and mucosal changes such as atrophic glossitis, cheilitis, and inflammation of the oral mucosa may be found (2). Hyperglycemia may be included (multiple endocrine neoplasia syndrome, i.e., Zollinger-Ellison syndrome) or the disease may result from a glucagon-secreting tumor alone. These tumors are of slow growth and present with non-specific symptoms in early stages. At least 50% are metastatic at the time of diagnosis, and therefore have a poor prognosis. Fig. 2: NME, necrolytic migratory erythema.", "title": "" }, { "docid": "neg:1840269_19", "text": "We study session key distribution in the three-party setting of Needham and Schroeder. (This is the trust model assumed by the popular Kerberos authentication system.) Such protocols are basic building blocks for contemporary distributed systems, yet the underlying problem has, up until now, lacked a definition or provably-good solution. One consequence is that incorrect protocols have proliferated. This paper provides the first treatment of this problem in the complexity-theoretic framework of modern cryptography. We present a definition, protocol, and a proof that the protocol satisfies the definition, assuming the (minimal) assumption of a pseudorandom function. When this assumption is appropriately instantiated, our protocols are simple and efficient. Abstract appearing in Proceedings of the 27th ACM Symposium on the Theory of Computing, May 1995.", "title": "" } ]
1840270
Classifying and visualizing motion capture sequences using deep neural networks
[ { "docid": "pos:1840270_0", "text": "We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.", "title": "" } ]
[ { "docid": "neg:1840270_0", "text": "We provide a systematic review of the adaptive comanagement (ACM) literature to (i) investigate how the concept of governance is considered and (ii) examine what insights ACM offers with reference to six key concerns in environmental governance literature: accountability and legitimacy; actors and roles; fit, interplay, and scale; adaptiveness, flexibility, and learning; evaluation and monitoring; and, knowledge. Findings from the systematic review uncover a complicated relationship with evidence of conceptual closeness as well as relational ambiguities. The findings also reveal several specific contributions from the ACM literature to each of the six key environmental governance concerns, including applied strategies for sharing power and responsibility and value of systems approaches in understanding problems of fit. More broadly, the research suggests a dissolving or fuzzy boundary between ACM and governance, with implications for understanding emerging approaches to navigate social-ecological system change. Future research opportunities may be found at the confluence of ACM and environmental governance scholarship, such as identifying ways to build adaptive capacity and encouraging the development of more flexible governance arrangements.", "title": "" }, { "docid": "neg:1840270_1", "text": "A number of leading cognitive architectures that are inspired by the human brain, at various levels of granularity, are reviewed and compared, with special attention paid to the way their internal structures and dynamics map onto neural processes. Four categories of Biologically Inspired Cognitive Architectures (BICAs) are considered, with multiple examples of each category briefly reviewed, and selected examples discussed in more depth: primarily symbolic architectures (e.g. ACT-R), emergentist architectures (e.g. DeSTIN), developmental robotics architectures (e.g. IM-CLEVER), and our central focus, hybrid architectures (e.g. LIDA, CLARION, 4D/RCS, DUAL, MicroPsi, and OpenCog). Given the state of the art in BICA, it is not yet possible to tell whether emulating the brain on the architectural level is going to be enough to allow rough emulation of brain function; and given the state of the art in neuroscience, it is not yet possible to connect BICAs with large-scale brain simulations in a thoroughgoing way. However, it is nonetheless possible to draw reasonably close function connections between various components of various BICAs and various brain regions and dynamics, and as both BICAs and brain simulations mature, these connections should become richer and may extend further into the domain of internal dynamics as well as overall behavior. & 2010 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "neg:1840270_2", "text": "The merging of optimization and simulation technologies has seen a rapid growth in recent years. A Google search on \"Simulation Optimization\" returns more than six thousand pages where this phrase appears. The content of these pages ranges from articles, conference presentations and books to software, sponsored work and consultancy. This is an area that has sparked as much interest in the academic world as in practical settings. In this paper, we first summarize some of the most relevant approaches that have been developed for the purpose of optimizing simulated systems. 
We then concentrate on the metaheuristic black-box approach that leads the field of practical applications and provide some relevant details of how this approach has been implemented and used in commercial software. Finally, we present an example of simulation optimization in the context of a simulation model developed to predict performance and measure risk in a real world project selection problem.", "title": "" }, { "docid": "neg:1840270_3", "text": "PET image reconstruction is challenging due to the ill-poseness of the inverse problem and limited number of detected photons. Recently, the deep neural networks have been widely and successfully used in computer vision tasks and attracted growing interests in medical imaging. In this paper, we trained a deep residual convolutional neural network to improve PET image quality by using the existing inter-patient information. An innovative feature of the proposed method is that we embed the neural network in the iterative reconstruction framework for image representation, rather than using it as a post-processing tool. We formulate the objective function as a constrained optimization problem and solve it using the alternating direction method of multipliers algorithm. Both simulation data and hybrid real data are used to evaluate the proposed method. Quantification results show that our proposed iterative neural network method can outperform the neural network denoising and conventional penalized maximum likelihood methods.", "title": "" }, { "docid": "neg:1840270_4", "text": "The concept of knowledge management (KM) as a powerful competitive weapon has been strongly emphasized in the strategic management literature, yet the sustainability of the competitive advantage provided by KM capability is not well-explained. To fill this gap, this paper develops the concept of KM as an organizational capability and empirically examines the association between KM capabilities and competitive advantage. In order to provide a better presentation of significant relationships, through resource-based view of the firm explicitly recognizes important of KM resources and capabilities. Firm specific KM resources are classified as social KM resources, and technical KM resources. Surveys collected from 177 firms were analyzed and tested. The results confirmed the impact of social KM resource on competitive advantage. Technical KM resource is negatively related with competitive advantage, and KM capability is significantly related with competitive advantage. q 2004 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "neg:1840270_5", "text": "Text detection and recognition in a natural environment are key components of many applications, ranging from business card digitization to shop indexation in a street. This competition aims at assessing the ability of state-of-the-art methods to detect Multi-Lingual Text (MLT) in scene images, such as in contents gathered from the Internet media and in modern cities where multiple cultures live and communicate together. This competition is an extension of the Robust Reading Competition (RRC) which has been held since 2003 both in ICDAR and in an online context. The proposed competition is presented as a new challenge of the RRC. The dataset built for this challenge largely extends the previous RRC editions in many aspects: the multi-lingual text, the size of the dataset, the multi-oriented text, the wide variety of scenes. The dataset is comprised of 18,000 images which contain text belonging to 9 languages. 
The challenge is comprised of three tasks related to text detection and script classification. We have received a total of 16 participations from the research and industrial communities. This paper presents the dataset, the tasks and the findings of this RRC-MLT challenge.", "title": "" }, { "docid": "neg:1840270_6", "text": "Fast incipient machine fault diagnosis is becoming one of the key requirements for economical and optimal process operation management. Artificial neural networks have been used to detect machine faults for a number of years and shown to be highly successful in this application area. This paper presents a novel test technique for machine fault detection and classification in electro-mechanical machinery from vibration measurements using one-class support vector machines (SVMs). In order to evaluate one-class SVMs, this paper examines the performance of the proposed method by comparing it with that of multilayer perception, one of the artificial neural network techniques, based on real benchmarking data. q 2005 Published by Elsevier Ltd.", "title": "" }, { "docid": "neg:1840270_7", "text": "This paper proposes a methodology for the creation of specialized data sets for Textual Entailment, made of monothematic Text-Hypothesis pairs (i.e. pairs in which only one linguistic phenomenon relevant to the entailment relation is highlighted and isolated). The annotation procedure assumes that humans have knowledge about the linguistic phenomena relevant to inference, and a classification of such phenomena both into fine grained and macro categories is suggested. We experimented with the proposed methodology over a sample of pairs taken from the RTE-5 data set, and investigated critical issues arising when entailment, contradiction or unknown pairs are considered. The result is a new resource, which can be profitably used both to advance the comprehension of the linguistic phenomena relevant to entailment judgments and to make a first step towards the creation of large-scale specialized data sets.", "title": "" }, { "docid": "neg:1840270_8", "text": "For the first half century of animal virology, the major problem was lack of a simple method for quantitating infectious virus particles; the only method available at that time was some form or other of the serial-dilution end-point method in animals, all of which were both slow and expensive. Cloned cultured animal cells, which began to be available around 1950, provided Dulbecco with a new approach. He adapted the technique developed by Emory Ellis and Max Delbrück for assaying bacteriophage, that is, seeding serial dilutions of a given virus population onto a confluent lawn of host cells, to the measurement of Western equine encephalitis virus, and demonstrated that it also formed easily countable plaques in monolayers of chick embryo fibroblasts. The impact of this finding was enormous; animal virologists had been waiting for such a technique for decades. It was immediately found to be widely applicable to many types of cells and most viruses, gained quick acceptance, and is widely regarded as marking the beginning of molecular animal virology. Renato Dulbecco was awarded the Nobel Prize in 1975. W. K. JOKLIK", "title": "" }, { "docid": "neg:1840270_9", "text": "The use of metal casing is attractive to achieve robustness of modern slim tablet devices. The metal casing includes the metal back cover and the metal frame around the edges thereof. 
For such metal-casing tablet devices, the frame antenna that uses a part of the metal frame as an antenna's radiator is promising to achieve wide bandwidths for mobile communications. In this paper, the frame antenna based on the simple half-loop antenna structure to cover the long-term evolution 746-960 and 1710-2690 MHz bands is presented. The half-loop structure for the frame antenna is easy for manufacturing and increases the robustness of the metal casing. The dual-wideband operation of the half-loop frame antenna is obtained by using an elevated feed network supported by a thin feed substrate. The measured antenna efficiencies are, respectively, 45%-69% and 60%-83% in the low and high bands. By selecting different feed circuits, the antenna's low band can also be shifted from 746-960 MHz to lower frequencies such as 698-840 MHz, with the antenna's high-band coverage very slightly varied. The working principle of the antenna with the elevated feed network is discussed. The antenna is also fabricated and tested, and experimental results are presented.", "title": "" }, { "docid": "neg:1840270_10", "text": "Within the information overload on the web and the diversity of the user interests, it is increasingly difficult for search engines to satisfy the user information needs. Personalized search tackles this problem by considering the user profile during the search. This paper describes a personalized search approach involving a semantic graph-based user profile issued from ontology. User profile refers to the user interest in a specific search session defined as a sequence of related queries. It is built using a score propagation that activates a set of semantically related concepts and maintained in the same search session using a graph-based merging scheme. We also define a session boundary recognition mechanism based on tracking changes in the dominant concepts held by the user profile relatively to a new submitted query using the Kendall rank correlation measure. Then, personalization is achieved by re-ranking the search results of related queries using the user profile. Our experimental evaluation is carried out using the HARD 2003 TREC collection and shows that our approach is effective.", "title": "" }, { "docid": "neg:1840270_11", "text": "Recently several different deep learning architectures have been proposed that take a string of characters as the raw input signal and automatically derive features for text classification. Few studies are available that compare the effectiveness of these approaches for character based text classification with each other. In this paper we perform such an empirical comparison for the important cybersecurity problem of DGA detection: classifying domain names as either benign vs. produced by malware (i.e., by a Domain Generation Algorithm). Training and evaluating on a dataset with 2M domain names shows that there is surprisingly little difference between various convolutional neural network (CNN) and recurrent neural network (RNN) based architectures in terms of accuracy, prompting a preference for the simpler architectures, since they are faster to train and to score, and less prone to overfitting.", "title": "" }, { "docid": "neg:1840270_12", "text": "5G will have to cope with a high degree of heterogeneity in terms of services and requirements. Among these latter, the flexible and efficient use of non-contiguous unused spectrum for different network deployment scenarios is considered a key challenge for 5G systems. 
To maximize spectrum efficiency, the 5G air interface technology will also need to be flexible and capable of mapping various services to the best suitable combinations of frequency and radio resources. In this work, we propose a comparison of several 5G waveform candidates (OFDM, UFMC, FBMC and GFDM) under a common framework. We assess spectral efficiency, power spectral density, peak-to-average power ratio and robustness to asynchronous multi-user uplink transmission. Moreover, we evaluate and compare the complexity of the different waveforms. In addition to the complexity analysis, in this work, we also demonstrate the suitability of FBMC for specific 5G use cases via two experimental implementations. The benefits of these new waveforms for the foreseen 5G use cases are clearly highlighted on representative criteria and experiments.", "title": "" }, { "docid": "neg:1840270_13", "text": "The success of Android phones makes them a prominent target for malicious software, in particular since the Android permission system turned out to be inadequate to protect the user against security and privacy threats. This work presents AppGuard, a powerful and flexible system for the enforcement of user-customizable security policies on untrusted Android applications. AppGuard does not require any changes to a smartphone’s firmware or root access. Our system offers complete mediation of security-relevant methods based on callee-site inline reference monitoring. We demonstrate the general applicability of AppGuard by several case studies, e.g., removing permissions from overly curious apps as well as defending against several recent real-world attacks on Android phones. Our technique exhibits very little space and runtime overhead. AppGuard is publicly available, has been invited to the Samsung Apps market, and has had more than 500,000 downloads so far.", "title": "" }, { "docid": "neg:1840270_14", "text": "Aim: The purpose of this paper is to present findings of an integrative literature review related to employees’ motivational practices in organizations. Method: A broad search of computerized databases focusing on articles published in English during 1999– 2010 was completed. Extensive screening sought to determine current literature themes and empirical research evidence completed in employees’ focused specifically on motivation in organization. Results: 40 articles are included in this integrative literature review. The literature focuses on how job characteristics, employee characteristic, management practices and broader environmental factors influence employees’ motivation. Research that links employee’s motivation is both based on qualitative and quantitative studies. Conclusion: This literature reveals widespread support of motivation concepts in organizations. Theoretical and editorial literature confirms motivation concepts are central to employees. Job characteristics, management practices, employee characteristics and broader environmental factors are the key variables influence employees’ motivation in organization.", "title": "" }, { "docid": "neg:1840270_15", "text": "This paper sets out to detect controversial news reports using online discussions as a source of information. We define controversy as a public discussion that divides society and demonstrate that a content and stylometric analysis of these debates yields useful signals for extracting disputed news items. 
Moreover, we argue that a debate-based approach could produce more generic models, since the discussion architectures we exploit to measure controversy occur on many different platforms.", "title": "" }, { "docid": "neg:1840270_16", "text": "A novel one-section bandstop filter (BSF), which possesses the characteristics of compact size, wide bandwidth, and low insertion loss, is proposed and fabricated. This bandstop filter was constructed by using a single quarter-wavelength resonator with one section of anti-coupled lines with short circuits at one end. The attenuation-pole characteristics of this type of bandstop filters are investigated through a TEM transmission-line model. Design procedures are clearly presented. The 3-dB bandwidth of the first stopband and the insertion loss of the first passband of this BSF are from 2.3 GHz to 9.5 GHz and below 0.3 dB, respectively. There is good agreement between the simulated and experimental results.", "title": "" }, { "docid": "neg:1840270_17", "text": "In this paper, two main contributions are presented to manage the power flow between a wind turbine and a solar power system. The first one is to use the fuzzy logic controller as an objective to find the maximum power point tracking, applied to a hybrid wind-solar system, at fixed atmospheric conditions. The second one is to respond to real-time control system constraints and to improve the generating system performance. For this, a hardware implementation of the proposed algorithm is performed using the Xilinx system generator. The experimental results show that the suggested system presents high accuracy and acceptable execution time performances. The proposed model and its control strategy offer a proper tool for optimizing the hybrid power system performance which we can use in smart house applications.", "title": "" }, { "docid": "neg:1840270_18", "text": "Consideration of confounding is fundamental to the design and analysis of studies of causal effects. Yet, apart from confounding in experimental designs, the topic is given little or no discussion in most statistics texts. We here provide an overview of confounding and related concepts based on a counterfactual model for causation. Special attention is given to definitions of confounding, problems in control of confounding, the relation of confounding to exchangeability and collapsibility, and the importance of distinguishing confounding from noncollapsibility.", "title": "" }, { "docid": "neg:1840270_19", "text": "In this paper, we present the first formal study of how mothers of young children (aged three and under) use social networking sites, particularly Facebook and Twitter, including mothers' perceptions of which SNSes are appropriate for sharing information about their children, changes in post style and frequency after birth, and the volume and nature of child-related content shared in these venues. Our findings have implications for improving the utility and usability of SNS tools for mothers of young children, as well as for creating and improving sociotechnical systems related to maternal and child health.", "title": "" } ]
1840271
SPAM: Signal Processing to Analyze Malware
[ { "docid": "pos:1840271_0", "text": "While navigating in an environment, a vision system has to be able to recognize where it is and what the main objects in the scene are. In this paper we present a contextbased vision system for place and object recognition. The goal is to identify familiar locations (e.g., office 610, conference room 941, Main Street), to categorize new environments (office, corridor, street) and to use that information to provide contextual priors for object recognition (e.g., tables are more likely in an office than a street). We present a low-dimensional global image representation that provides relevant information for place recognition and categorization, and show how such contextual information introduces strong priors that simplify object recognition. We have trained the system to recognize over 60 locations (indoors and outdoors) and to suggest the presence and locations of more than 20 different object types. The algorithm has been integrated into a mobile system that provides realtime feedback to the user.", "title": "" }, { "docid": "pos:1840271_1", "text": "In this work, we propose SigMal, a fast and precise malware detection framework based on signal processing techniques. SigMal is designed to operate with systems that process large amounts of binary samples. It has been observed that many samples received by such systems are variants of previously-seen malware, and they retain some similarity at the binary level. Previous systems used this notion of malware similarity to detect new variants of previously-seen malware. SigMal improves the state-of-the-art by leveraging techniques borrowed from signal processing to extract noise-resistant similarity signatures from the samples. SigMal uses an efficient nearest-neighbor search technique, which is scalable to millions of samples. We evaluate SigMal on 1.2 million recent samples, both packed and unpacked, observed over a duration of three months. In addition, we also used a constant dataset of known benign executables. Our results show that SigMal can classify 50% of the recent incoming samples with above 99% precision. We also show that SigMal could have detected, on average, 70 malware samples per day before any antivirus vendor detected them.", "title": "" } ]
[ { "docid": "neg:1840271_0", "text": "Reinforcement learning (RL) can automate a wide variety of robotic skills, but learning each new skill requires considerable real-world data collection and manual representation engineering to design policy classes or features. Using deep reinforcement learning to train general purpose neural network policies alleviates some of the burden of manual representation engineering by using expressive policy classes, but exacerbates the challenge of data collection, since such methods tend to be less efficient than RL with low-dimensional, hand-designed representations. Transfer learning can mitigate this problem by enabling us to transfer information from one skill to another and even from one robot to another. We show that neural network policies can be decomposed into “task-specific” and “robot-specific” modules, where the task-specific modules are shared across robots, and the robot-specific modules are shared across all tasks on that robot. This allows for sharing task information, such as perception, between robots and sharing robot information, such as dynamics and kinematics, between tasks. We exploit this decomposition to train mix-and-match modules that can solve new robot-task combinations that were not seen during training. Using a novel approach to train modular neural networks, we demonstrate the effectiveness of our transfer method for enabling zero-shot generalization with a variety of robots and tasks in simulation for both visual and non-visual tasks.", "title": "" }, { "docid": "neg:1840271_1", "text": "Dense kernel matrices Θ ∈ RN×N obtained from point evaluations of a covariance function G at locations {xi}1≤i≤N arise in statistics, machine learning, and numerical analysis. For covariance functions that are Green’s functions of elliptic boundary value problems and approximately equally spaced sampling points, we show how to identify a subset S ⊂ {1, . . . , N} × {1, . . . , N}, with #S = O(N log(N) log(N/ )), such that the zero fill-in incomplete Cholesky factorisation of Θi,j1(i,j)∈S is an -approximation of Θ. This blockfactorisation can provably be obtained in complexity O(N log(N) log(N/ )) in space and O(N log(N) log(N/ )) in time. The algorithm only needs to know the spatial configuration of the xi and does not require an analytic representation of G. Furthermore, an approximate PCA with optimal rate of convergence in the operator norm can be easily read off from this decomposition. Hence, by using only subsampling and the incomplete Cholesky decomposition, we obtain at nearly linear complexity the compression, inversion and approximate PCA of a large class of covariance matrices. By inverting the order of the Cholesky decomposition we also obtain a solver for elliptic PDE with complexity O(N log(N) log(N/ )) in space and O(N log(N) log(N/ )) in time.", "title": "" }, { "docid": "neg:1840271_2", "text": "The work presented in this paper addresses the problem of the stitching task in laparoscopic surgery using a circular needle and a conventional 4 DOFs needle-holder. This task is particularly difficult for the surgeons because of the kinematic constraints generated by the fulcrum in the abdominal wall of the patient. In order to assist the surgeons during suturing, we propose to compute possible pathes for the needle through the tissues, which limit as much as possible tissues deformations while driving the needle towards the desired target. 
The paper proposes a kinematic analysis and a geometric modeling of the stitching task from which useful information can be obtained to assist the surgeon. The description of the task with appropriate state variables allows us to propose a simple practical method for computing optimal paths. Simulations show that the obtained paths are satisfactory even under difficult configurations, and short computation times make the technique useful for intra-operative planning. The use of the stitching planning is shown for manual as well as for robot-assisted interventions in in vitro experiments. 1 Clinical and technical motivations: Computer assisted interventions have been widely developed during the last decade. Minimally invasive surgery is one of the surgical fields where the computer assistance is particularly useful for intervention planning as well as for gesture guidance because the access to the areas of interest is considerably restricted. In keyhole surgery such as laparoscopy or thoracoscopy, the surgical instruments are introduced into the abdomen through small incisions in the abdominal wall. [Figure 1: Outline of the laparoscopic setup for suturing with desired entry point (I*) and exit point (O*).] The vision of the surgical scene is given to the surgeon via an endoscopic camera also introduced in the abdomen through a similar incision (see figure 1). This kind of surgery has many advantages over conventional open surgery: less pain for the patient, reduced risk of infection, decreased hospital stays and recovery times. However, the surgical tasks are more difficult than in conventional surgery, which often results in longer interventions and tiredness for the surgeon. As a consequence, laparoscopic surgery requires a specific training of the surgeon [12]. The main origins of the difficulties for the surgeon are well identified [30]. Firstly, the fulcrum reduces the possible motions of the instruments and creates kinematic constraints. Secondly, surgeons encounter perceptual limitations: tactile sensing is highly reduced and the visual feedback given by the endoscopic camera (laparoscope) is bidimensional, with a restricted field of view. Hence, it is difficult for the surgeon to estimate depths, relative positions and angles inside the abdomen. Finally, there are perceptual motor difficulties which affect the coordination between what the surgeon sees and the movements he performs. According to laparoscopic surgery specialists, the suturing task is one of the most difficult gestures in laparoscopic surgery [4]. The task is usually performed using two instruments: a needle, often circular, and a needle-holder (see figure 2). Suturing can be", "title": "" }, { "docid": "neg:1840271_3", "text": "We present results of an empirical study of the usefulness of different types of features in selecting extractive summaries of news broadcasts for our Broadcast News Summarization System. We evaluate lexical, prosodic, structural and discourse features as predictors of those news segments which should be included in a summary. 
We show that a summarization system that uses a combination of these feature sets produces the most accurate summaries, and that a combination of acoustic/prosodic and structural features are enough to build a ‘good’ summarizer when speech transcription is not available.", "title": "" }, { "docid": "neg:1840271_4", "text": "The purpose of the present study is to provide useful data that could be applied to various types of periodontal plastic surgery by detailing the topography of the greater palatine artery (GPA), looking in particular at its depth from the palatal masticatory mucosa (PMM) and conducting a morphometric analysis of the palatal vault. Forty-three hemisectioned hard palates from embalmed Korean adult cadavers were used in this study. The morphometry of the palatal vault was analyzed, and then the specimens were decalcified and sectioned. Six parameters were measured using an image-analysis system after performing a standard calibration. In one specimen, the PMM was separated from the hard palate and subjected to a partial Sihler's staining technique, allowing the branching pattern of the GPA to be observed in a new method. The distances between the GPA and the gingival margin, and between the GPA and the cementoenamel junction were greatest at the maxillary second premolar. The shortest vertical distance between the GPA and the PMM decreased gradually as it proceeded anteriorly. The GPA was located deeper in the high-vault group than in the low-vault group. The premolar region should be recommended as the optimal donor site for tissue grafting, and in particular the second premolar region. The maximum size and thickness of tissue that can be harvested from the region were 9.3 mm and 4.0 mm, respectively.", "title": "" }, { "docid": "neg:1840271_5", "text": "The popularity of location-based social networks available on mobile devices means that large, rich datasets that contain a mixture of behavioral (users visiting venues), social (links between users), and spatial (distances between venues) information are available for mobile location recommendation systems. However, these datasets greatly differ from those used in other online recommender systems, where users explicitly rate items: it remains unclear as to how they capture user preferences as well as how they can be leveraged for accurate recommendation. This paper seeks to bridge this gap with a three-fold contribution. First, we examine how venue discovery behavior characterizes the large check-in datasets from two different location-based social services, Foursquare and Go Walla: by using large-scale datasets containing both user check-ins and social ties, our analysis reveals that, across 11 cities, between 60% and 80% of users' visits are in venues that were not visited in the previous 30 days. We then show that, by making constraining assumptions about user mobility, state-of-the-art filtering algorithms, including latent space models, do not produce high quality recommendations. Finally, we propose a new model based on personalized random walks over a user-place graph that, by seamlessly combining social network and venue visit frequency data, obtains between 5 and 18% improvement over other models. Our results pave the way to a new approach for place recommendation in location-based social systems.", "title": "" }, { "docid": "neg:1840271_6", "text": "This paper presents an interactive face retrieval framework for clarifying an image representation envisioned by a user. 
Our system is designed for a situation in which the user wishes to find a person but has only visual memory of the person. We address a critical challenge of image retrieval across the user's inputs. Instead of target-specific information, the user can select several images (or a single image) that are similar to an impression of the target person the user wishes to search for. Based on the user's selection, our proposed system automatically updates a deep convolutional neural network. By interactively repeating these process (human-in-the-loop optimization), the system can reduce the gap between human-based similarities and computer-based similarities and estimate the target image representation. We ran user studies with 10 subjects on a public database and confirmed that the proposed framework is effective for clarifying the image representation envisioned by the user easily and quickly.", "title": "" }, { "docid": "neg:1840271_7", "text": "GPUs and accelerators have become ubiquitous in modern supercomputing systems. Scientific applications from a wide range of fields are being modified to take advantage of their compute power. However, data movement continues to be a critical bottleneck in harnessing the full potential of a GPU. Data in the GPU memory has to be moved into the host memory before it can be sent over the network. MPI libraries like MVAPICH2 have provided solutions to alleviate this bottleneck using techniques like pipelining. GPUDirect RDMA is a feature introduced in CUDA 5.0, that allows third party devices like network adapters to directly access data in GPU device memory, over the PCIe bus. NVIDIA has partnered with Mellanox to make this solution available for InfiniBand clusters. In this paper, we evaluate the first version of GPUDirect RDMA for InfiniBand and propose designs in MVAPICH2 MPI library to efficiently take advantage of this feature. We highlight the limitations posed by current generation architectures in effectively using GPUDirect RDMA and address these issues through novel designs in MVAPICH2. To the best of our knowledge, this is the first work to demonstrate a solution for internode GPU-to-GPU MPI communication using GPUDirect RDMA. Results show that the proposed designs improve the latency of internode GPU-to-GPU communication using MPI Send/MPI Recv by 69% and 32% for 4Byte and 128KByte messages, respectively. The designs boost the uni-directional bandwidth achieved using 4KByte and 64KByte messages by 2x and 35%, respectively. We demonstrate the impact of the proposed designs using two end-applications: LBMGPU and AWP-ODC. They improve the communication times in these applications by up to 35% and 40%, respectively.", "title": "" }, { "docid": "neg:1840271_8", "text": "Comparative genomic analyses of primates offer considerable potential to define and understand the processes that mold, shape, and transform the human genome. However, primate taxonomy is both complex and controversial, with marginal unifying consensus of the evolutionary hierarchy of extant primate species. Here we provide new genomic sequence (~8 Mb) from 186 primates representing 61 (~90%) of the described genera, and we include outgroup species from Dermoptera, Scandentia, and Lagomorpha. The resultant phylogeny is exceptionally robust and illuminates events in primate evolution from ancient to recent, clarifying numerous taxonomic controversies and providing new data on human evolution. 
Ongoing speciation, reticulate evolution, ancient relic lineages, unequal rates of evolution, and disparate distributions of insertions/deletions among the reconstructed primate lineages are uncovered. Our resolution of the primate phylogeny provides an essential evolutionary framework with far-reaching applications including: human selection and adaptation, global emergence of zoonotic diseases, mammalian comparative genomics, primate taxonomy, and conservation of endangered species.", "title": "" }, { "docid": "neg:1840271_9", "text": "Magnetic resonance (MR) is the best way to assess the new anatomy of the pelvis after male to female (MtF) sex reassignment surgery. The aim of the study was to evaluate the radiological appearance of the small pelvis after MtF surgery and to compare it with the normal women's anatomy. Fifteen patients who underwent MtF surgery were subjected to pelvic MR at least 6 months after surgery. The anthropometric parameters of the small pelvis were measured and compared with those of ten healthy women (control group). Our personal technique (creation of the mons Veneris under the pubic skin) was performed in all patients. In patients who underwent MtF surgery, the mean neovaginal depth was slightly superior than in women (P=0.009). The length of the inferior pelvic aperture and of the inlet of pelvis was higher in the control group (P<0.005). The inclination between the axis of the neovagina and the inferior pelvis aperture, the thickness of the mons Veneris and the thickness of the rectovaginal septum were comparable between the two study groups. MR consents a detailed assessment of the new pelvic anatomy after MtF surgery. The anthropometric parameters measured in our patients were comparable with those of women.", "title": "" }, { "docid": "neg:1840271_10", "text": "A circular polarizer is a single layer or multi-layer structure that converts linearly polarized waves into circularly polarized ones and vice versa. In this communication, a simple method based on transmission line circuit theory is proposed to model and design circular polarizers. This technique is more flexible than those previously presented in the way that it permits to design polarizers with the desired spacing between layers, while obtaining surfaces that may be easier to fabricate and less sensitive to fabrication errors. As an illustrating example, a modified version of the meander-line polarizer being twice as thin as its conventional counterpart is designed. Then, both polarizers are fabricated and measured. Results are shown and compared for normal and oblique incidence angles in the planes φ = 0° and φ = 90°.", "title": "" }, { "docid": "neg:1840271_11", "text": "This paper focuses on estimation of the vertical velocity of the vehicle chassis and the relative velocity between chassis and wheel. These velocities are important variables in semi-active suspension control. A model-based estimator is proposed including a Kalman filter and a non-linear model of the damper. Inputs to the estimator are signals from wheel displacement sensors and from accelerometers placed at the chassis. In addition, the control signal is used as input to the estimator. 
The Kalman filter is analyzed in the frequency domain and compared with a conventional filter solution including derivation of the displacement signal and integration of the acceleration signal.", "title": "" }, { "docid": "neg:1840271_12", "text": "The last 2 decades witnessed a surge in empirical studies on the variables associated with achievement in higher education. A number of meta-analyses synthesized these findings. In our systematic literature review, we included 38 meta-analyses investigating 105 correlates of achievement, based on 3,330 effect sizes from almost 2 million students. We provide a list of the 105 variables, ordered by the effect size, and summary statistics for central research topics. The results highlight the close relation between social interaction in courses and achievement. Achievement is also strongly associated with the stimulation of meaningful learning by presenting information in a clear way, relating it to the students, and using conceptually demanding learning tasks. Instruction and communication technology has comparably weak effect sizes, which did not increase over time. Strong moderator effects are found for almost all instructional methods, indicating that how a method is implemented in detail strongly affects achievement. Teachers with high-achieving students invest time and effort in designing the microstructure of their courses, establish clear learning goals, and employ feedback practices. This emphasizes the importance of teacher training in higher education. Students with high achievement are characterized by high self-efficacy, high prior achievement and intelligence, conscientiousness, and the goal-directed use of learning strategies. Barring the paucity of controlled experiments and the lack of meta-analyses on recent educational innovations, the variables associated with achievement in higher education are generally well investigated and well understood. By using these findings, teachers, university administrators, and policymakers can increase the effectivity of higher education. (PsycINFO Database Record", "title": "" }, { "docid": "neg:1840271_13", "text": "In this paper, a complete and rigorous mathematical model for secondary surveillance radar systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle determination dispositive. Distance bias is calculated from the delay of the signal produced by the refractivity index of the atmosphere, and from clock errors, while the altitude bias is calculated taking into account the atmosphere conditions (pressure and temperature). It will be shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy in the bias estimation.", "title": "" }, { "docid": "neg:1840271_14", "text": "The paper presents proposal of practical implementation simple IoT gateway based on Arduino microcontroller, dedicated to use in home IoT environment. Authors are concentrated on research of performance and security aspects of created system. 
By performed load tests and denial of service attack were investigated performance and capacity limits of implemented gateway.", "title": "" }, { "docid": "neg:1840271_15", "text": "Porokeratosis, a disorder of keratinisation, is clinically characterized by the presence of annular plaques with a surrounding keratotic ridge. Clinical variants include linear, disseminated superficial actinic, verrucous/hypertrophic, disseminated eruptive, palmoplantar and porokeratosis of Mibelli (one or two typical plaques with atrophic centre and guttered keratotic rim). All of these subtypes share the histological feature of a cornoid lamella, characterized by a column of 'stacked' parakeratosis with focal absence of the granular layer, and dysmaturation (prematurely keratinised cells in the upper spinous layer). In recent years, a proposed new subtype, follicular porokeratosis (FP_, has been described, in which the cornoid lamella are exclusively located in the follicular ostia. We present four new cases that showed typical histological features of FP.", "title": "" }, { "docid": "neg:1840271_16", "text": "Touchscreen-based mobile devices (TMDs) are one of the most popular and widespread kind of electronic device. Many manufacturers have published its own design principles as a guideline for developers. Each platform has specific constrains and recommendations for software development; specially in terms of user interface. Four sets of design principles from iOS, Windows Phone, Android and Tizen OS has been mapped against a set of usability heuristics for TMDs. The map shows that the TMDs usability heuristics cover almost every design pattern with the addition of two new dimensions: user experience and cognitive load. These new dimensions will be considered when updating the proposal of usability heuristics for TMDs.", "title": "" }, { "docid": "neg:1840271_17", "text": "The present study investigated the relationships between adolescents' online communication and compulsive Internet use, depression, and loneliness. The study had a 2-wave longitudinal design with an interval of 6 months. The sample consisted of 663 students, 318 male and 345 female, ages 12 to 15 years. Questionnaires were administered in a classroom setting. The results showed that instant messenger use and chatting in chat rooms were positively related to compulsive Internet use 6 months later. Moreover, in agreement with the well-known HomeNet study (R. Kraut et al., 1998), instant messenger use was positively associated with depression 6 months later. Finally, loneliness was negatively related to instant messenger use 6 months later.", "title": "" }, { "docid": "neg:1840271_18", "text": "In response to your recent publication comparing subjective effects of D9-tetrahydrocannabinol and herbal cannabis (Wachtel et al. 2002), a number of comments are necessary. The first concerns the suitability of the chosen “marijuana” to assay the issues at hand. NIDA cannabis has been previously characterized in a number of studies (Chait and Pierri 1989; Russo et al. 2002), as a crude lowgrade product (2–4% THC) containing leaves, stems and seeds, often 3 or more years old after processing, with a stale odor lacking in terpenoids. This contrasts with the more customary clinical cannabis employed by patients in Europe and North America, composed solely of unseeded flowering tops with a potency of up to 20% THC. Cannabis-based medicine extracts (CBME) (Whittle et al. 2001), employed in clinical trials in the UK (Notcutt 2002; Robson et al. 
2002), are extracted from flowering tops with abundant glandular trichomes, and retain full terpenoid and flavonoid components. In the study at issue (Wachtel et al. 2002), we are informed that marijuana contained 2.11% THC, 0.30% cannabinol (CBN), and 0.05% cannabidiol (CBD). The concentration of the latter two cannabinoids is virtually inconsequential. Thus, we are not surprised that no differences were seen between NIDA marijuana with essentially only one cannabinoid, and pure, synthetic THC. In comparison, clinical grade cannabis and CBME customarily contain high quantities of CBD, frequently equaling the percentage of THC (Whittle et al. 2001). Carlini et al. (1974) determined that cannabis extracts produced effects "two or four times greater than that expected from their THC content, based on animal and human studies". Similarly, Fairbairn and Pickens (1981) detected the presence of unidentified "powerful synergists" in cannabis extracts, causing 330% greater activity in mice than THC alone. The clinical contribution of CBD and other cannabinoids, terpenoids and flavonoids to clinical cannabis effects has been espoused as an "entourage effect" (Mechoulam and Ben-Shabat 1999), and is reviewed in detail by McPartland and Russo (2001). Briefly summarized, CBD has anti-anxiety effects (Zuardi et al. 1982), anti-psychotic benefits (Zuardi et al. 1995), modulates metabolism of THC by blocking its conversion to the more psychoactive 11-hydroxy-THC (Bornheim and Grillo 1998), prevents glutamate excitotoxicity, serves as a powerful anti-oxidant (Hampson et al. 2000), and has notable anti-inflammatory and immunomodulatory effects (Malfait et al. 2000). Terpenoid cannabis components probably also contribute significantly to clinical effects of cannabis and boil at comparable temperatures to THC (McPartland and Russo 2001). Cannabis essential oil demonstrates serotonin receptor binding (Russo et al. 2000). Its terpenoids include myrcene, a potent analgesic (Rao et al. 1990) and anti-inflammatory (Lorenzetti et al. 1991), beta-caryophyllene, another anti-inflammatory (Basile et al. 1988) and gastric cytoprotective (Tambe et al. 1996), limonene, a potent inhalation antidepressant and immune stimulator (Komori et al. 1995) and anti-carcinogenic (Crowell 1999), and alpha-pinene, an anti-inflammatory (Gil et al. 1989) and bronchodilator (Falk et al. 1990). Are these terpenoid effects significant? A dried sample of drug-strain cannabis buds was measured as displaying an essential oil yield of 0.8% (Ross and ElSohly 1996), or a putative 8 mg per 1000 mg cigarette. Buchbauer et al. (1993) demonstrated that 20–50 mg of essential oil in the ambient air in mouse cages produced measurable changes in behavior, serum levels, and bound to cortical cells. Similarly, Komori et al. (1995) employed a gel of citrus fragrance with limonene to produce a significant antidepressant benefit in humans, obviating the need for continued standard medication in some patients, and also improving CD4/8 immunologic ratios. These data would …", "title": "" } ]
1840272
Mitosis Detection in Breast Cancer Histology Images with Deep Neural Networks
[ { "docid": "pos:1840272_0", "text": "A common practice to gain invariant features in object recognition models is to aggregate multiple low-level features over a small neighborhood. However, the differences between those models makes a comparison of the properties of different aggregation functions hard. Our aim is to gain insight into different functions by directly comparing them on a fixed architecture for several common object recognition tasks. Empirical results show that a maximum pooling operation significantly outperforms subsampling operations. Despite their shift-invariant properties, overlapping pooling windows are no significant improvement over non-overlapping pooling windows. By applying this knowledge, we achieve state-of-the-art error rates of 4.57% on the NORB normalized-uniform dataset and 5.6% on the NORB jittered-cluttered dataset.", "title": "" }, { "docid": "pos:1840272_1", "text": "We address a central problem of neuroanatomy, namely, the automatic segmentation of neuronal structures depicted in stacks of electron microscopy (EM) images. This is necessary to efficiently map 3D brain structure and connectivity. To segment biological neuron membranes, we use a special type of deep artificial neural network as a pixel classifier. The label of each pixel (membrane or nonmembrane) is predicted from raw pixel values in a square window centered on it. The input layer maps each window pixel to a neuron. It is followed by a succession of convolutional and max-pooling layers which preserve 2D information and extract features with increasing levels of abstraction. The output layer produces a calibrated probability for each class. The classifier is trained by plain gradient descent on a 512 × 512 × 30 stack with known ground truth, and tested on a stack of the same size (ground truth unknown to the authors) by the organizers of the ISBI 2012 EM Segmentation Challenge. Even without problem-specific postprocessing, our approach outperforms competing techniques by a large margin in all three considered metrics, i.e. rand error, warping error and pixel error. For pixel error, our approach is the only one outperforming a second human observer.", "title": "" } ]
[ { "docid": "neg:1840272_0", "text": "Do the languages that we speak affect how we experience the world? This question was taken up in a linguistic survey and two non-linguistic psychophysical experiments conducted in native speakers of English, Indonesian, Greek, and Spanish. All four of these languages use spatial metaphors to talk about time, but the particular metaphoric mappings between time and space vary across languages. A linguistic corpus study revealed that English and Indonesian tend to map duration onto linear distance (e.g., a long time), whereas Greek and Spanish preferentially map duration onto quantity (e.g., much time). Two psychophysical time estimation experiments were conducted to determine whether this cross-linguistic difference has implications for speakers’ temporal thinking. Performance on the psychophysical tasks reflected the relative frequencies of the ‘time as distance’ and ‘time as quantity’ metaphors in English, Indonesian, Greek, and Spanish. This was true despite the fact that the tasks used entirely nonlinguistic stimuli and responses. Results suggest that: (1.) The spatial metaphors in our native language may profoundly influence the way we mentally represent time. (2.) Language can shape even primitive, low-level mental processes such as estimating brief durations – an ability we share with babies and non-human animals.", "title": "" }, { "docid": "neg:1840272_1", "text": "Text clustering is an important area of interest in the field of Text summarization, sentiment analysis etc. There have been a lot of algorithms experimented during the past years, which have a wide range of performances. One of the most popular method used is k-means, where an initial assumption is made about k, which is the number of clusters to be generated. Now a new method is introduced where the number of clusters is found using a modified spectral bisection and then the output is given to a genetic algorithm where the final solution is obtained. Keywords— Cluster, Spectral Bisection, Genetic Algorithm, kmeans.", "title": "" }, { "docid": "neg:1840272_2", "text": "Story Point is a relative measure heavily used for agile estimation of size. The team decides how big a point is, and based on that size, determines how many points each work item is. In many organizations, the use of story points for similar features can vary from team to another, and successfully, based on the teams' sizes, skill set and relative use of this tool. But in a CMMI organization, this technique demands a degree of consistency across teams for a more streamlined approach to solution delivery. This generates a challenge for CMMI organizations to adopt Agile in software estimation and planning. In this paper, a process and methodology that guarantees relativity in software sizing while using agile story points is introduced. The proposed process and methodology are applied in a CMMI company level three on different projects. By that, the story point is used on the level of the organization, not the project. Then, the performance of sizing process is measured to show a significant improvement in sizing accuracy after adopting the agile story point in CMMI organizations. To complete the estimation cycle, an improvement in effort estimation dependent on story point is also introduced, and its performance effect is measured.", "title": "" }, { "docid": "neg:1840272_3", "text": "Localization is one of the most important capabilities for autonomous mobile agents. 
Markov Localization (ML), applied to dense range images, has proven to be an effective technique. But its computational and storage requirements put a large burden on robot systems, and make it difficult to update the map dynamically. In this paper we introduce a new technique, based on correlation of a sensor scan with the map, that is several orders of magnitude more efficient than ML. CBML (correlation-based ML) permits video-rate localization using dense range scans, dynamic map updates, and a more precise error model than ML. In this paper we present the basic method of CBML, and validate its efficiency and correctness in a series of experiments on an implemented mobile robot base.", "title": "" }, { "docid": "neg:1840272_4", "text": "Entity search is an emerging IR and NLP task that involves the retrieval of entities of a specific type in response to a query. We address the \"similar researcher search\" or the \"researcher recommendation\" problem, an instance of \"similar entity search\" for the academic domain. In response to a 'researcher name' query, the goal of a researcher recommender system is to output the list of researchers that have similar expertise as that of the queried researcher. We propose models for computing similarity between researchers based on expertise profiles extracted from their publications and academic homepages. We provide results of our models for the recommendation task on two publicly-available datasets. To the best of our knowledge, we are the first to address content-based researcher recommendation in an academic setting and demonstrate it for Computer Science via our system, ScholarSearch.", "title": "" }, { "docid": "neg:1840272_5", "text": "The author reviewed cultural competence models and cultural competence assessment instruments developed and published by nurse researchers since 1982. Both models and instruments were examined in terms of their components, theoretical backgrounds, empirical validation, and psychometric evaluation. Most models were not empirically tested; only a few models developed model-based instruments. About half of the instruments were tested with varying levels of psychometric properties. Other related issues were discussed, including the definition of cultural competence and its significance in model and instrument development, limitations of existing models and instruments, impact of cultural competence on health disparities, and further work in cultural competence research and practice.", "title": "" }, { "docid": "neg:1840272_6", "text": "The agriculture sector is the backbone of an economy which provides the basic ingredients to mankind and raw materials for industrialization. With the increasing number of the population over the world, the demand for agricultural products has also increased. In order to increase the production rate, irrigation technique should be more efficient. The irrigation techniques used till date are not at a satisfactory level, especially in a developing country like Bangladesh. This paper has proposed a line follower robot for irrigation-based application which may be considered as a cost-effective solution by minimizing water loss as well as an efficient system for irrigation purposes. This proposed system does not require an operator to accomplish its task. This gardening robot is completely portable and is equipped with a microcontroller, an on-board water reservoir, and an attached water pump. The area to be watered by the robot can be any field with plants, placed in a predefined path. 
It is capable of comparing movable objects and stationary plants to minimize water loss and finally watering them autonomously without any human intervention. The designed robot was tested and it performed nicely.", "title": "" }, { "docid": "neg:1840272_7", "text": "In recent years the data mining is data analyzing techniques that used to analyze crime data previously stored from various sources to find patterns and trends in crimes. In additional, it can be applied to increase efficiency in solving the crimes faster and also can be applied to automatically notify the crimes. However, there are many data mining techniques. In order to increase efficiency of crime detection, it is necessary to select the data mining techniques suitably. This paper reviews the literatures on various data mining applications, especially applications that applied to solve the crimes. Survey also throws light on research gaps and challenges of crime data mining. In additional to that, this paper provides insight about the data mining for finding the patterns and trends in crime to be used appropriately and to be a help for beginners in the research of crime data mining.", "title": "" }, { "docid": "neg:1840272_8", "text": "There is much current debate about the existence of mirror neurons in humans. To identify mirror neurons in the inferior frontal gyrus (IFG) of humans, we used a repetition suppression paradigm while measuring neural activity with functional magnetic resonance imaging. Subjects either executed or observed a series of actions. Here we show that in the IFG, responses were suppressed both when an executed action was followed by the same rather than a different observed action and when an observed action was followed by the same rather than a different executed action. This pattern of responses is consistent with that predicted by mirror neurons and is evidence of mirror neurons in the human IFG.", "title": "" }, { "docid": "neg:1840272_9", "text": "The Dendritic Cell Algorithm (DCA) is inspired by the function of the dendritic cells of the human immune system. In nature, dendritic cells are the intrusion detection agents of the human body, policing the tissue and organs for potential invaders in the form of pathogens. In this research, and abstract model of DC behaviour is developed and subsequently used to form an algorithm, the DCA. The abstraction process was facilitated through close collaboration with laboratorybased immunologists, who performed bespoke experiments, the results of which are used as an integral part of this algorithm. The DCA is a population based algorithm, with each agent in the system represented as an ‘artificial DC’. Each DC has the ability to combine multiple data streams and can add context to data suspected as anomalous. In this chapter the abstraction process and details of the resultant algorithm are given. The algorithm is applied to numerous intrusion detection problems in computer security including the detection of port scans and botnets, where it has produced impressive results with relatively low rates of false positives.", "title": "" }, { "docid": "neg:1840272_10", "text": "The term “Business Model”started to gain momentum in the early rise of the new economy and it is currently used both in business practice and scientific research. Under a general point of view BMs are considered as a contact point among technology, organization and strategy used to describe how an organization gets value from technology and uses it as a source of competitive advantage. 
Recent contributions suggest to use ontologies to define a shareable conceptualization of BM. The aim of this study is to investigate the role of BM Ontologies as a conceptual tool for the cooperation of subjects interested in achieving a common goal and operating in complex and innovative environments. This is the case for example of those contexts characterized by the deployment of e-services from multiple service providers in cross border environments. Through an extensive literature review on BM we selected the most suitable conceptual tool and studied its application to the LD-CAST project during a participatory action research activity in order to analyse the BM design process of a new organisation based on the cooperation of service providers (the Chambers of Commerce from Italy, Romania, Poland and Bulgaria) with different needs, legal constraints and cultural background.", "title": "" }, { "docid": "neg:1840272_11", "text": "Delay-tolerant networks (DTNs) rely on the mobility of nodes and their contacts to make up with the lack of continuous connectivity and, thus, enable message delivery from source to destination in a “store-carry-forward” fashion. Since message delivery consumes resource such as storage and power, some nodes may choose not to forward or carry others' messages while relying on others to deliver their locally generated messages. These kinds of selfish behaviors may hinder effective communications over DTNs. In this paper, we present an efficient incentive-compatible (IC) routing protocol (ICRP) with multiple copies for two-hop DTNs based on the algorithmic game theory. It takes both the encounter probability and transmission cost into consideration to deal with the misbehaviors of selfish nodes. Moreover, we employ the optimal sequential stopping rule and Vickrey-Clarke-Groves (VCG) auction as a strategy to select optimal relay nodes to ensure that nodes that honestly report their encounter probability and transmission cost can maximize their rewards. We attempt to find the optimal stopping time threshold adaptively based on realistic probability model and propose an algorithm to calculate the threshold. Based on this threshold, we propose a new method to select relay nodes for multicopy transmissions. To ensure that the selected relay nodes can receive their rewards securely, we develop a signature scheme based on a bilinear map to prevent the malicious nodes from tampering. Through simulations, we demonstrate that ICRP can effectively stimulate nodes to forward/carry messages and achieve higher packet delivery ratio with lower transmission cost.", "title": "" }, { "docid": "neg:1840272_12", "text": "We present efficient algorithms for the problem of contextual bandits with i.i.d. covariates, an arbitrary sequence of rewards, and an arbitrary class of policies. Our algorithm BISTRO requires d calls to the empirical risk minimization (ERM) oracle per round, where d is the number of actions. The method uses unlabeled data to make the problem computationally simple. When the ERM problem itself is computationally hard, we extend the approach by employing multiplicative approximation algorithms for the ERM. The integrality gap of the relaxation only enters in the regret bound rather than the benchmark. 
Finally, we show that the adversarial version of the contextual bandit problem is learnable (and efficient) whenever the full-information supervised online learning problem has a non-trivial regret guarantee (and efficient).", "title": "" }, { "docid": "neg:1840272_13", "text": "A peer-to-peer network, enabling different parties to jointly store and run computations on data while keeping the data completely private. Enigma’s computational model is based on a highly optimized version of secure multi-party computation, guaranteed by a verifiable secret-sharing scheme. For storage, we use a modified distributed hashtable for holding secret-shared data. An external blockchain is utilized as the controller of the network, manages access control, identities and serves as a tamper-proof log of events. Security deposits and fees incentivize operation, correctness and fairness of the system. Similar to Bitcoin, Enigma removes the need for a trusted third party, enabling autonomous control of personal data. For the first time, users are able to share their data with cryptographic guarantees regarding their privacy.", "title": "" }, { "docid": "neg:1840272_14", "text": "Motivated by requirements of Web 2.0 applications, a plethora of non-relational databases raised in recent years. Since it is very difficult to choose a suitable database for a specific use case, this paper evaluates the underlying techniques of NoSQL databases considering their applicability for certain requirements. These systems are compared by their data models, query possibilities, concurrency controls, partitioning and replication opportunities.", "title": "" }, { "docid": "neg:1840272_15", "text": "Benefited from cloud storage services, users can save their cost of buying expensive storage and application servers, as well as deploying and maintaining applications. Meanwhile they lost the physical control of their data. So effective methods are needed to verify the correctness of the data stored at cloud servers, which are the research issues the Provable Data Possession (PDP) faced. The most important features in PDP are: 1) supporting for public, unlimited numbers of times of verification; 2) supporting for dynamic data update; 3) efficiency of storage space and computing. In mobile cloud computing, mobile end-users also need the PDP service. However, the computing workloads and storage burden of client in existing PDP schemes are too heavy to be directly used by the resource-constrained mobile devices. To solve this problem, with the integration of the trusted computing technology, this paper proposes a novel public PDP scheme, in which the trusted third-party agent (TPA) takes over most of the calculations from the mobile end-users. By using bilinear signature and Merkle hash tree (MHT), the scheme aggregates the verification tokens of the data file into one small signature to reduce communication and storage burden. MHT is also helpful to support dynamic data update. In our framework, the mobile terminal devices only need to generate some secret keys and random numbers with the help of trusted platform model (TPM) chips, and the needed computing workload and storage space is fit for mobile devices. 
Our scheme realizes provable secure storage service for resource-constrained mobile devices in mobile cloud computing.", "title": "" }, { "docid": "neg:1840272_16", "text": "The commercial roll-type corona-electrostatic separators, which are currently employed for the recovery of metals and plastics from mm-size granular mixtures, are inappropriate for the processing of finely-grinded wastes. The aim of the present work is to demonstrate that a belt-type corona-electrostatic separator could be an appropriate solution for the selective sorting of conductive and non-conductive products contained in micronized wastes. The experiments are carried out on a laboratory-scale multi-functional electrostatic separator designed by the authors. The corona discharge is generated between a wire-type dual electrode and the surface of the metal belt conveyor. The distance between the wire and the belt and the applied voltage are adjusted to values that permit particles charging without having an electric wind that puts them into motion on the surface of the belt. The separation is performed in the electric field generated between a high-voltage roll-type electrode (diameter 30 mm) and the grounded belt electrode. The study is conducted according to experimental design methodology, to enable the evaluation of the effects of the various factors that affect the efficiency of the separation: position of the roll-type electrode and applied high-voltage. The conclusions of this study will serve at the optimum design of an industrial belt-type corona-electrostatic separator for the recycling of metals and plastics from waste electric and electronic equipment.", "title": "" }, { "docid": "neg:1840272_17", "text": "We describe an algorithm for converting linear support vector machines and any other arbitrary hyperplane-based linear classifiers into a set of non-overlapping rules that, unlike the original classifier, can be easily interpreted by humans. Each iteration of the rule extraction algorithm is formulated as a constrained optimization problem that is computationally inexpensive to solve. We discuss various properties of the algorithm and provide proof of convergence for two different optimization criteria We demonstrate the performance and the speed of the algorithm on linear classifiers learned from real-world datasets, including a medical dataset on detection of lung cancer from medical images. The ability to convert SVM's and other \"black-box\" classifiers into a set of human-understandable rules, is critical not only for physician acceptance, but also to reducing the regulatory barrier for medical-decision support systems based on such classifiers.", "title": "" }, { "docid": "neg:1840272_18", "text": "On 4 August 2016, DARPA conducted the final event of the Cyber Grand Challenge (CGC). The challenge in CGC was to build an autonomous system capable of playing in a capture-the-flag hacking competition. The final event pitted the systems from seven finalists against each other, with each system attempting to defend its own network services while proving vulnerabilities in other systems’ defended services. Xandra, our automated cyber reasoning system, took second place overall in the final event. Xandra placed first in security (preventing exploits), second in availability (keeping services operational and efficient), and fourth in evaluation (proving vulnerabilities in competitor services). Xandra also drew the least power of any of the competitor systems. 
In this article, we describe the high-level strategies applied by Xandra, their realization in Xandra’s architecture, the synergistic interplay between offense and defense, and finally, lessons learned via post-mortem analysis of the final event.", "title": "" }, { "docid": "neg:1840272_19", "text": "High-level synthesis (HLS) is an increasingly popular approach in electronic design automation (EDA) that raises the abstraction level for designing digital circuits. With the increasing complexity of embedded systems, these tools are particularly relevant in embedded systems design. In this paper, we present our evaluation of a broad selection of recent HLS tools in terms of capabilities, usability and quality of results. Even though HLS tools are still lacking some maturity, they are constantly improving and the industry is now starting to adopt them into their design flows.", "title": "" } ]
1840273
SciHadoop: Array-based query processing in Hadoop
[ { "docid": "pos:1840273_0", "text": "Various scientific computations have become so complex, and thus computation tools play an important role. In this paper, we explore the state-of-the-art framework providing high-level matrix computation primitives with MapReduce through the case study approach, and demonstrate these primitives with different computation engines to show the performance and scalability. We believe the opportunity for using MapReduce in scientific computation is even more promising than the success to date in the parallel systems literature.", "title": "" }, { "docid": "pos:1840273_1", "text": "SciDB [4, 3] is a new open-source data management system intended primarily for use in application domains that involve very large (petabyte) scale array data; for example, scientific applications such as astronomy, remote sensing and climate modeling, bio-science information management, risk management systems in financial applications, and the analysis of web log data. In this talk we will describe our set of motivating examples and use them to explain the features of SciDB. We then briefly give an overview of the project 'in flight', explaining our novel storage manager, array data model, query language, and extensibility frameworks.", "title": "" } ]
[ { "docid": "neg:1840273_0", "text": "Knowledge graph embedding aims to construct a low-dimensional and continuous space, which is able to describe the semantics of high-dimensional and sparse knowledge graphs. Among existing solutions, translation models have drawn much attention lately, which use a relation vector to translate the head entity vector, the result of which is close to the tail entity vector. Compared with classical embedding methods, translation models achieve the state-of-the-art performance; nonetheless, the rationale and mechanism behind them still aspire after understanding and investigation. In this connection, we quest into the essence of translation models, and present a generic model, namely, GTrans, to entail all the existing translation models. In GTrans, each entity is interpreted by a combination of two states—eigenstate and mimesis. Eigenstate represents the features that an entity intrinsically owns, and mimesis expresses the features that are affected by associated relations. The weighting of the two states can be tuned, and hence, dynamic and static weighting strategies are put forward to best describe entities in the problem domain. Besides, GTrans incorporates a dynamic relation space for each relation, which not only enables the flexibility of our model but also reduces the noise from other relation spaces. In experiments, we evaluate our proposed model with two benchmark tasks—triplets classification and link prediction. Experiment results witness significant and consistent performance gain that is offered by GTrans over existing alternatives.", "title": "" }, { "docid": "neg:1840273_1", "text": "This study examined the relationships between attachment styles, drama viewing, parasocial interaction, and romantic beliefs. A survey of students revealed that drama viewing was weakly but negatively related to romantic beliefs, controlling for parasocial interaction and attachment styles. Instead, parasocial interaction mediated the effect of drama viewing on romantic beliefs: Those who viewed television dramas more heavily reported higher levels of parasocial interaction, which lead to stronger romantic beliefs. Regarding the attachment styles, anxiety was positively related to drama viewing and parasocial interaction, while avoidance was negatively related to parasocial interaction and romantic beliefs. These findings indicate that attachment styles and parasocial interaction are important for the association of drama viewing and romantic beliefs.", "title": "" }, { "docid": "neg:1840273_2", "text": "Predicting the location of a video based on its content is a very meaningful, yet very challenging problem. Most existing work has focused on developing representative visual features and then searching for visually nearest neighbors in the development set to achieve a prediction. Interestingly, the relationship between scenes has been overlooked in prior work. Two scenes that are visually different, but frequently co-occur in same location, should naturally be considered similar for the geotagging problem. To build upon the above ideas, we propose to model the geo-spatial distributions of scenes by Gaussian Mixture Models (GMMs) and measure the distribution similarity by the Jensen-Shannon divergence (JSD). Subsequently, we present the Spatial Relationship Model (SRM) for geotagging which integrates the geo-spatial relationship of scenes into a hierarchical framework. 
We segment the Earth's surface into multiple levels of grids and measure the likelihood of input videos with an adaptation to region granularities. We have evaluated our approach using the YFCC100M dataset in the context of the MediaEval 2014 placing task. The total set of 35,000 geotagged videos is further divided into a training set of 25,000 videos and a test set of 10,000 videos. Our experimental results demonstrate the effectiveness of our proposed framework, as our solution achieves good accuracy and outperforms existing visual approaches for video geotagging.", "title": "" }, { "docid": "neg:1840273_3", "text": "This paper illustrates the development of the automatic measurement and control system (AMCS) based on control area network (CAN) bus for the engine electronic control unit (ECU). The system composes of the ECU, test ECU (TECU), simulation ECU (SIMU), usb-CAN communication module (CRU ) and AMCS software platform, which is applied for engine control, data collection, transmission, engine signals simulation and the data conversion from CAN to USB and master control simulation and measurement. The AMCS platform software is designed with VC++ 6.0.This system has been applied in the development of engine ECU widely.", "title": "" }, { "docid": "neg:1840273_4", "text": "Economics can be distinguished from other social sciences by the belief that most (all?) behavior can be explained by assuming that agents have stable, well-defined preferences and make rational choices consistent with those preferences in markets that (eventually) clear. An empirical result qualifies as an anomaly if it is difficult to \"rationalize,\" or if implausible assumptions are necessary to explain it within the paradigm. This column presents a series of such anomalies. Readers are invited to suggest topics for future columns by sending a note with some reference to (or better yet copies oO the relevant research. Comments on anomalies printed here are also welcome. The address is: Richard Thaler, c/o Journal of Economic Perspectives, Johnson Graduate School of Management, Malott Hall, Cornell University, Ithaca, NY 14853. After this issue, the \"Anomalies\" column will no longer appear in every issue and instead will appear occasionally, when a pressing anomaly crosses Dick Thaler's desk. However, suggestions for new columns and comments on old ones are still welcome. Thaler would like to quash one rumor before it gets started, namely that he is cutting back because he has run out of anomalies. Au contraire, it is the dilemma of choosing which juicy anomaly to discuss that lakes so much time.", "title": "" }, { "docid": "neg:1840273_5", "text": "We propose a novel probabilistic model for visual question answering (Visual QA). The key idea is to infer two sets of embeddings: one for the image and the question jointly and the other for the answers. The learning objective is to learn the best parameterization of those embeddings such that the correct answer has higher likelihood among all possible answers. In contrast to several existing approaches of treating Visual QA as multi-way classification, the proposed approach takes the semantic relationships (as characterized by the embeddings) among answers into consideration, instead of viewing them as independent ordinal numbers. Thus, the learned embedded function can be used to embed unseen answers (in the training dataset). 
These properties make the approach particularly appealing for transfer learning for open-ended Visual QA, where the source dataset on which the model is learned has limited overlapping with the target dataset in the space of answers. We have also developed large-scale optimization techniques for applying the model to datasets with a large number of answers, where the challenge is to properly normalize the proposed probabilistic models. We validate our approach on several Visual QA datasets and investigate its utility for transferring models across datasets. The empirical results have shown that the approach performs well not only on in-domain learning but also on transfer learning.", "title": "" }, { "docid": "neg:1840273_6", "text": "Data deduplication systems detect redundancies between data blocks to either reduce storage needs or to reduce network traffic. A class of deduplication systems splits the data stream into data blocks (chunks) and then finds exact duplicates of these blocks.\n This paper compares the influence of different chunking approaches on multiple levels. On a macroscopic level, we compare the chunking approaches based on real-life user data in a weekly full backup scenario, both at a single point in time as well as over several weeks.\n In addition, we analyze how small changes affect the deduplication ratio for different file types on a microscopic level for chunking approaches and delta encoding. An intuitive assumption is that small semantic changes on documents cause only small modifications in the binary representation of files, which would imply a high ratio of deduplication. We will show that this assumption is not valid for many important file types and that application-specific chunking can help to further decrease storage capacity demands.", "title": "" }, { "docid": "neg:1840273_7", "text": "A robust approach to solving linear optimization problems with uncertain data was proposed in the early 1970s and has recently been extensively studied and extended. Under this approach, we are willing to accept a suboptimal solution for the nominal values of the data in order to ensure that the solution remains feasible and near optimal when the data changes. A concern with such an approach is that it might be too conservative. In this paper, we propose an approach that attempts to make this trade-off more attractive; that is, we investigate ways to decrease what we call the price of robustness. In particular, we flexibly adjust the level of conservatism of the robust solutions in terms of probabilistic bounds of constraint violations. An attractive aspect of our method is that the new robust formulation is also a linear optimization problem. Thus we naturally extend our methods to discrete optimization problems in a tractable way. We report numerical results for a portfolio optimization problem, a knapsack problem, and a problem from the Net Lib library.", "title": "" }, { "docid": "neg:1840273_8", "text": "We address the problem of joint detection and segmentation of multiple object instances in an image, a key step towards scene understanding. Inspired by data-driven methods, we propose an exemplar-based approach to the task of instance segmentation, in which a set of reference image/shape masks is used to find multiple objects. We design a novel CRF framework that jointly models object appearance, shape deformation, and object occlusion. 
To tackle the challenging MAP inference problem, we derive an alternating procedure that interleaves object segmentation and shape/appearance adaptation. We evaluate our method on two datasets with instance labels and show promising results.", "title": "" }, { "docid": "neg:1840273_9", "text": "A fundamental property of many plasma-membrane proteins is their association with the underlying cytoskeleton to determine cell shape, and to participate in adhesion, motility and other plasma-membrane processes, including endocytosis and exocytosis. The ezrin–radixin–moesin (ERM) proteins are crucial components that provide a regulated linkage between membrane proteins and the cortical cytoskeleton, and also participate in signal-transduction pathways. The closely related tumour suppressor merlin shares many properties with ERM proteins, yet also provides a distinct and essential function.", "title": "" }, { "docid": "neg:1840273_10", "text": "Cyberbotics Ltd. develops Webots, a mobile robotics simulation software that provides you with a rapid prototyping environment for modelling, programming and simulating mobile robots. The provided robot libraries enable you to transfer your control programs to several commercially available real mobile robots. Webots lets you define and modify a complete mobile robotics setup, even several different robots sharing the same environment. For each object, you can define a number of properties, such as shape, color, texture, mass, friction, etc. You can equip each robot with a large number of available sensors and actuators. You can program these robots using your favorite development environment, simulate them and optionally transfer the resulting programs onto your real robots. Webots has been developed in collaboration with the Swiss Federal Institute of Technology in Lausanne, thoroughly tested, well documented and continuously maintained for over 7 years. It is now the main commercial product available from Cyberbotics Ltd.", "title": "" }, { "docid": "neg:1840273_11", "text": "Autonomous navigation has become an increasingly popular machine learning application. Recent advances in deep learning have also brought huge improvements to autonomous navigation. However, prior outdoor autonomous navigation methods depended on various expensive sensors or expensive and sometimes erroneously labeled real data. In this paper, we propose an autonomous navigation method that does not require expensive labeled real images and uses only a relatively inexpensive monocular camera. Our proposed method is based on (1) domain adaptation with an adversarial learning framework and (2) exploiting synthetic data from a simulator. To the best of the authors’ knowledge, this is the first work to apply domain adaptation with adversarial networks to autonomous navigation. We present empirical results on navigation in outdoor courses using an unmanned aerial vehicle. The performance of our method is comparable to that of a supervised model with labeled real data, although our method does not require any label information for the real data. 
Our proposal includes a theoretical analysis that supports the applicability of our approach.", "title": "" }, { "docid": "neg:1840273_12", "text": "In the present day developing houses, the procedures adopted during the development of software using agile methodologies are acknowledged as a better option than the procedures followed during conventional software development due to their innate characteristics such as iterative development, rapid delivery and reduced risk. Hence, it is desirable that the software development industries should have proper planning for estimating the effort required in agile software development. The existing techniques such as expert opinion, analogy and disaggregation are mostly observed to be ad hoc and in this manner inclined to be mistaken in a number of cases. One of the various approaches for calculating effort of agile projects in an empirical way is the story point approach (SPA). This paper presents a study on analysis of prediction accuracy of estimation process executed in order to improve it using SPA. Different machine learning techniques such as decision tree, stochastic gradient boosting and random forest are considered in order to assess prediction more qualitatively. A comparative analysis of these techniques with existing techniques is also presented and analyzed in order to critically examine their performance.", "title": "" }, { "docid": "neg:1840273_13", "text": "Cuneiform tablets appertain to the oldest textual artifacts and are in extent comparable to texts written in Latin or ancient Greek. The Cuneiform Commentaries Project (CPP) from Yale University provides tracings of cuneiform tablets with annotated transliterations and translations. As a part of our work analyzing cuneiform script computationally with 3D-acquisition and word-spotting, we present a first approach for automatized learning of transliterations of cuneiform tablets based on a corpus of parallel lines. These consist of manually drawn cuneiform characters and their transliteration into an alphanumeric code. Since the Cuneiform script is only available as raster-data, we segment lines with a projection profile, extract Histogram of oriented Gradients (HoG) features, detect outliers caused by tablet damage, and align those features with the transliteration. We apply methods from part-of-speech tagging to learn a correspondence between features and transliteration tokens. We evaluate point-wise classification with K-Nearest Neighbors (KNN) and a Support Vector Machine (SVM); sequence classification with a Hidden Markov Model (HMM) and a Structured Support Vector Machine (SVM-HMM). Analyzing our findings, we reach the conclusion that the sparsity of data, inconsistent labeling and the variety of tracing styles currently do not allow for fully automatized transliterations with the presented approach. However, the pursuit of automated learning of transliterations is of great relevance as manual annotation in larger quantities is not viable, given the few experts capable of transcribing cuneiform tablets.", "title": "" }, { "docid": "neg:1840273_14", "text": "The resource-based view can be positioned relative to at least three theoretical traditions: SCP-based theories of industry determinants of firm performance, neo-classical microeconomics, and evolutionary economics. In the 1991 article, only the first of these ways of positioning the resource-based view is explored.
This article briefly discusses some of the implications of positioning the resource-based view relative to these other two literatures; it also discusses some of the empirical implications of each of these different resource-based theories. © 2001 Elsevier Science Inc. All rights reserved.", "title": "" }, { "docid": "neg:1840273_15", "text": "Previous studies have shown that there is a non-trivial amount of duplication in source code. This paper analyzes a corpus of 4.5 million non-fork projects hosted on GitHub representing over 428 million files written in Java, C++, Python, and JavaScript. We found that this corpus has a mere 85 million unique files. In other words, 70% of the code on GitHub consists of clones of previously created files. There is considerable variation between language ecosystems. JavaScript has the highest rate of file duplication, only 6% of the files are distinct. Java, on the other hand, has the least duplication, 60% of files are distinct. Lastly, a project-level analysis shows that between 9% and 31% of the projects contain at least 80% of files that can be found elsewhere. These rates of duplication have implications for systems built on open source software as well as for researchers interested in analyzing large code bases. As a concrete artifact of this study, we have created DéjàVu, a publicly available map of code duplicates in GitHub repositories.", "title": "" }, { "docid": "neg:1840273_16", "text": "This paper defines and explores a somewhat different type of genetic algorithm (GA): a messy genetic algorithm (mGA). Messy GAs process variable-length strings that may be either under- or overspecified with respect to the problem being solved. As nature has formed its genotypes by progressing from simple to more complex life forms, messy GAs solve problems by combining relatively short, well-tested building blocks to form longer, more complex strings that increasingly cover all features of a problem. This approach stands in contrast to the usual fixed-length, fixed-coding genetic algorithm, where the existence of the requisite tight linkage is taken for granted or ignored altogether. To compare the two approaches, a 30-bit, order-three-deceptive problem is searched using a simple GA and a messy GA. Using a random but fixed ordering of the bits, the simple GA makes errors at roughly three-quarters of its positions; under a worst-case ordering, the simple GA errs at all positions. In contrast to the simple GA results, the messy GA repeatedly solves the same problem to optimality. Prior to this time, no GA had ever solved a provably difficult problem to optimality without prior knowledge of good string arrangements. The mGA presented herein repeatedly achieves globally optimal results without such knowledge, and it does so at the very first generation in which strings are long enough to cover the problem. The solution of a difficult nonlinear problem to optimality suggests that messy GAs can solve more difficult problems than has been possible to date with other genetic algorithms. The ramifications of these techniques in search and machine learning are explored, including the possibility of messy floating-point codes, messy permutations, and messy classifiers. © 1989 Complex Systems Publications, Inc.", "title": "" }, { "docid": "neg:1840273_17", "text": "Data mining applications are becoming a more common tool in understanding and solving educational and administrative problems in higher education.
In general, research in educational mining focuses on modeling students' performance instead of instructors' performance. One of the common tools to evaluate instructors' performance is the course evaluation questionnaire, which evaluates them based on students' perception. In this paper, four different classification techniques - decision tree algorithms, support vector machines, artificial neural networks, and discriminant analysis - are used to build classifier models. Their performances are compared over a data set composed of responses of students to a real course evaluation questionnaire using accuracy, precision, recall, and specificity performance metrics. Although all the classifier models show comparably high classification performances, C5.0 classifier is the best with respect to accuracy, precision, and specificity. In addition, an analysis of the variable importance for each classifier model is done. Accordingly, it is shown that many of the questions in the course evaluation questionnaire appear to be irrelevant. Furthermore, the analysis shows that the instructors' success based on the students' perception mainly depends on the interest of the students in the course. The findings of this paper indicate the effectiveness and expressiveness of data mining models in course evaluation and higher education mining. Moreover, these findings may be used to improve the measurement instruments.", "title": "" }, { "docid": "neg:1840273_18", "text": "The free-riding problem occurs if the presales activities needed to sell a product can be conducted separately from the actual sale of the product. Intuitively, free riding should hurt the retailer that provides that service, but the author shows analytically that free riding benefits not only the free-riding retailer, but also the retailer that provides the service when customers are heterogeneous in terms of their opportunity costs for shopping. The service-providing retailer has a postservice advantage, because customers who have resolved their matching uncertainty through sales service incur zero marginal shopping cost if they purchase from the service-providing retailer rather than the free-riding retailer. Moreover, allowing free riding gives the free rider less incentive to compete with the service provider on price, because many customers eventually will switch to it due to their own free riding. In turn, this induced soft strategic response enables the service provider to charge a higher price and enjoy the strictly positive profit that otherwise would have been wiped away by head-to-head price competition. Therefore, allowing free riding can be regarded as a necessary mechanism that prevents an aggressive response from another retailer and reduces the intensity of price competition.", "title": "" }, { "docid": "neg:1840273_19", "text": "In vehicular ad hoc networks (VANETs), trust establishment among vehicles is important to secure integrity and reliability of applications. In general, trust and reliability help vehicles to collect correct and credible information from surrounding vehicles. On top of that, a secure trust model can deal with uncertainties and risk taking from unreliable information in vehicular environments. However, inaccurate, incomplete, and imprecise information collected by vehicles as well as movable/immovable obstacles have interrupting effects on VANET. In this paper, a fuzzy trust model based on experience and plausibility is proposed to secure the vehicular network.
The proposed trust model executes a series of security checks to ensure the correctness of the information received from authorized vehicles. Moreover, fog nodes are adopted as a facility to evaluate the level of accuracy of event’s location. The analyses show that the proposed solution not only detects malicious attackers and faulty nodes, but also overcomes the uncertainty and imprecision of data in vehicular networks in both line of sight and non-line of sight environments.", "title": "" } ]
1840274
Sensor for Measuring Strain in Textile
[ { "docid": "pos:1840274_0", "text": "The addition of sensors to wearable computers allows them to adapt their functions to more suit the activities and situation of their wearers. A wearable sensor badge is described constructed from (hard) electronic components, which can sense perambulatory activities for context-awareness. A wearable sensor jacket is described that uses advanced knitting techniques to form (soft) fabric stretch sensors positioned to measure upper limb and body movement. Worn on-the-hip, or worn as clothing, these unobtrusive sensors supply abstract information about your current activity to your other wearable computers.", "title": "" } ]
[ { "docid": "neg:1840274_0", "text": "We propose three iterative superimposed-pilot based channel estimators for Orthogonal Frequency Division Multiplexing (OFDM) systems. Two are approximate maximum-likelihood, derived by using a Taylor expansion of the conditional probability density function of the received signal or by approximating the OFDM time signal as Gaussian, and one is minimum-mean square error. The complexity per iteration of these estimators is given by approximately O(NL²), O(N³) and O(NL), where N is the number of OFDM subcarriers and L is the channel length (time). Two direct (non-iterative) data detectors are also derived by averaging the log likelihood function over the channel statistics. These detectors require minimising the cost metric in an integer space, and we suggest the use of the sphere decoder for them. The Cramér–Rao bound for superimposed pilot based channel estimation is derived, and this bound is achieved by the proposed estimators. The optimal pilot placement is shown to be the equally spaced distribution of pilots. The bit error rate of the proposed estimators is simulated for an N = 32 OFDM system. Our estimators perform fairly close to a separated training scheme, but without any loss of spectral efficiency. Copyright © 2011 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "neg:1840274_1", "text": "Deep intracranial tumor removal can be achieved if the neurosurgical robot has sufficient flexibility and stability. Toward achieving this goal, we have developed a spring-based continuum robot, namely a minimally invasive neurosurgical intracranial robot (MINIR-II) with novel tendon routing and tunable stiffness for use in a magnetic resonance imaging (MRI) environment. The robot consists of a pair of springs in parallel, i.e., an inner interconnected spring that promotes flexibility with decoupled segment motion and an outer spring that maintains its smooth curved shape during its interaction with the tissue. We propose a shape memory alloy (SMA) spring backbone that provides local stiffness control and a tendon routing configuration that enables independent segment locking. In this paper, we also present a detailed local stiffness analysis of the SMA backbone and model the relationship between the resistive force at the robot tip and the tension in the tendon. We also demonstrate, through experiments, the validity of our local stiffness model of the SMA backbone and the correlation between the tendon tension and the resistive force. We also performed MRI compatibility studies of the three-segment MINIR-II robot by attaching it to a robotic platform that consists of SMA spring actuators with integrated water cooling modules.", "title": "" }, { "docid": "neg:1840274_2", "text": "Chiu proposed a clustering algorithm adjusting the numeric feature weights automatically for k-anonymity implementation and this approach gave a better clustering quality over the traditional generalization and suppression methods. In this paper, we propose an improved weighted-feature clustering algorithm which takes the weight of categorical attributes and the thesis of optimal k-partition into consideration.
To show the effectiveness of our method, we do some information loss experiments to compare it with greedy k-member clustering algorithm.", "title": "" }, { "docid": "neg:1840274_3", "text": "Since its introduction, the orbitrap has proven to be a robust mass analyzer that can routinely deliver high resolving power and mass accuracy. Unlike conventional ion traps such as the Paul and Penning traps, the orbitrap uses only electrostatic fields to confine and to analyze injected ion populations. In addition, its relatively low cost, simple design and high space-charge capacity make it suitable for tackling complex scientific problems in which high performance is required. This review begins with a brief account of the set of inventions that led to the orbitrap, followed by a qualitative description of ion capture, ion motion in the trap and modes of detection. Various orbitrap instruments, including the commercially available linear ion trap-orbitrap hybrid mass spectrometers, are also discussed with emphasis on the different methods used to inject ions into the trap. Figures of merit such as resolving power, mass accuracy, dynamic range and sensitivity of each type of instrument are compared. In addition, experimental techniques that allow mass-selective manipulation of the motion of confined ions and their potential application in tandem mass spectrometry in the orbitrap are described. Finally, some specific applications are reviewed to illustrate the performance and versatility of the orbitrap mass spectrometers.", "title": "" }, { "docid": "neg:1840274_4", "text": "Recently, a small number of papers have appeared in which the authors implement stochastic search algorithms, such as evolutionary computation, to generate game content, such as levels, rules and weapons. We propose a taxonomy of such approaches, centring on what sort of content is generated, how the content is represented, and how the quality of the content is evaluated. The relation between search-based and other types of procedural content generation is described, as are some of the main research challenges in this new field. The paper ends with some successful examples of this approach.", "title": "" }, { "docid": "neg:1840274_5", "text": "The performance of rasterization-based rendering on current GPUs strongly depends on the abilities to avoid overdraw and to prevent rendering triangles smaller than the pixel size. Otherwise, the rates at which highresolution polygon models can be displayed are affected significantly. Instead of trying to build these abilities into the rasterization-based rendering pipeline, we propose an alternative rendering pipeline implementation that uses rasterization and ray-casting in every frame simultaneously to determine eye-ray intersections. To make ray-casting competitive with rasterization, we introduce a memory-efficient sample-based data structure which gives rise to an efficient ray traversal procedure. In combination with a regular model subdivision, the most optimal rendering technique can be selected at run-time for each part. For very large triangle meshes our method can outperform pure rasterization and requires a considerably smaller memory budget on the GPU. Since the proposed data structure can be constructed from any renderable surface representation, it can also be used to efficiently render isosurfaces in scalar volume fields. 
The compactness of the data structure allows rendering from GPU memory when alternative techniques already require exhaustive paging.", "title": "" }, { "docid": "neg:1840274_6", "text": "Neuronal power attenuation or enhancement in specific frequency bands over the sensorimotor cortex, called Event-Related Desynchronization (ERD) or Event-Related Synchronization (ERS), respectively, is a major phenomenon in brain activities involved in imaginary movement of body parts. However, it is known that the nature of motor imagery-related electroencephalogram (EEG) signals is non-stationary and calls for a highly time- and frequency-dependent spatial filter, which we call a ‘non-homogeneous filter.’ We adaptively select bases of spatial filters over time and frequency. By taking both temporal and spectral features of EEGs in finding a spatial filter into account it is beneficial to be able to consider non-stationarity of EEG signals. In order to consider changes of ERD/ERS patterns over the time–frequency domain, we devise a spectrally and temporally weighted classification method via statistical analysis. Our experimental results on the BCI Competition IV dataset II-a and BCI Competition II dataset IV clearly presented the effectiveness of the proposed method outperforming other competing methods in the literature. © 2012 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "neg:1840274_7", "text": "Underwater wireless communications refer to data transmission in unguided water environment through wireless carriers, i.e., radio-frequency (RF) wave, acoustic wave, and optical wave. In comparison to RF and acoustic counterparts, underwater optical wireless communication (UOWC) can provide a much higher transmission bandwidth and much higher data rate. Therefore, we focus, in this paper, on the UOWC that employs optical wave as the transmission carrier. In recent years, many potential applications of UOWC systems have been proposed for environmental monitoring, offshore exploration, disaster precaution, and military operations. However, UOWC systems also suffer from severe absorption and scattering introduced by underwater channels. In order to overcome these technical barriers, several new system design approaches, which are different from the conventional terrestrial free-space optical communication, have been explored in recent years. We provide a comprehensive and exhaustive survey of the state-of-the-art UOWC research in three aspects: 1) channel characterization; 2) modulation; and 3) coding techniques, together with the practical implementations of UOWC.", "title": "" }, { "docid": "neg:1840274_8", "text": "In typical reinforcement learning (RL), the environment is assumed given and the goal of the learning is to identify an optimal policy for the agent taking actions through its interactions with the environment. In this paper, we extend this setting by considering the environment is not given, but controllable and learnable through its interaction with the agent at the same time. This extension is motivated by environment design scenarios in the real world, including game design, shopping space design and traffic signal design. Theoretically, we find a dual Markov decision process (MDP) w.r.t. the environment to that w.r.t. the agent, and derive a policy gradient solution to optimizing the parametrized environment. Furthermore, discontinuous environments are addressed by a proposed general generative framework.
Our experiments on a Maze game design task show the effectiveness of the proposed algorithms in generating diverse and challenging Mazes against various agent settings.", "title": "" }, { "docid": "neg:1840274_9", "text": "Image quality assessment (IQA) tries to estimate human perception based image visual quality in an objective manner. Existing approaches target this problem with or without reference images. For no-reference image quality assessment, there is no given reference image or any knowledge of the distortion type of the image. Previous approaches measure the image quality from signal level rather than semantic analysis. They typically depend on various features to represent local characteristic of an image. In this paper we propose a new no-reference (NR) image quality assessment (IQA) framework based on semantic obviousness. We discover that semantic-level factors affect human perception of image quality. With such observation, we explore semantic obviousness as a metric to perceive objects of an image. We propose to extract two types of features, one to measure the semantic obviousness of the image and the other to discover local characteristic. Then the two kinds of features are combined for image quality estimation. The principles proposed in our approach can also be incorporated with many existing IQA algorithms to boost their performance. We evaluate our approach on the LIVE dataset. Our approach is demonstrated to be superior to the existing NR-IQA algorithms and comparable to the state-of-the-art full-reference IQA (FR-IQA) methods. Cross-dataset experiments show the generalization ability of our approach.", "title": "" }, { "docid": "neg:1840274_10", "text": "Theorems and techniques to form different types of transformationally invariant processing and to produce the same output quantitatively based on either transformationally invariant operators or symmetric operations have recently been introduced by the authors. In this study, we further propose to compose a geared rotationally identical CNN system (GRI-CNN) with a small angle increment by connecting networks of participated processes at the first flatten layer. Using an ordinary CNN structure as a base, requirements for constructing a GRI-CNN include the use of either symmetric input vector or kernels with an angle increment that can form a complete cycle as a \"gearwheel\". Four basic GRI-CNN structures were studied. Each of them can produce quantitatively identical output results when a rotation angle of the input vector is evenly divisible by the increment angle of the gear. Our study showed when a rotated input vector does not match to a gear angle, the GRI-CNN can also produce a highly consistent result. With an ultrafine increment angle (e.g., 1 or 0.1), a virtually isotropic CNN system can be constructed.", "title": "" }, { "docid": "neg:1840274_11", "text": "In this experiment, it was defined a protocol of fluorescent probes combination: propidium iodide (PI), fluorescein isothiocyanate-conjugated Pisum sativum agglutinin (FITC-PSA), and JC-1. For this purpose, four ejaculates from three different rams (n=12), all showing motility 80% and abnormal morphology 10%, were diluted in TALP medium and split into two aliquots. One of the aliquots was flash frozen and thawed in three continuous cycles, to induce damage in cellular membranes and to disturb mitochondrial function. 
Three treatments were prepared with the following fixed ratios of fresh semen:flash frozen semen: 0:100 (T0), 50:50 (T50), and 100:0 (T100). Samples were stained with the proposed protocol and evaluated by epifluorescence microscopy. For plasmatic membrane integrity, detected by the PI probe, the equation Ŷ = 1.09 + 0.86X (R² = 0.98) was obtained. The intact acrosome, verified by the FITC-PSA probe, produced the equation Ŷ = 2.76 + 0.92X (R² = 0.98). The high mitochondrial membrane potential, marked in red-orange by JC-1, was estimated by the equation Ŷ = 1.90 + 0.90X (R² = 0.98). The resulting linear equations demonstrate that this technique is efficient and practical for the simultaneous evaluations of the plasmatic, acrosomal, and mitochondrial membranes in ram spermatozoa.", "title": "" }, { "docid": "neg:1840274_12", "text": "A key challenge for timeline summarization is to generate a concise, yet complete storyline from large collections of news stories. Previous studies in extractive timeline generation are limited in two ways: first, most prior work focuses on fully-observable ranking models or clustering models with hand-designed features that may not generalize well. Second, most summarization corpora are text-only, which means that text is the sole source of information considered in timeline summarization, and thus, the rich visual content from news images is ignored. To solve these issues, we leverage the success of matrix factorization techniques from recommender systems, and cast the problem as a sentence recommendation task, using a representation learning approach. To augment text-only corpora, for each candidate sentence in a news article, we take advantage of top-ranked relevant images from the Web and model the image using a convolutional neural network architecture. Finally, we propose a scalable low-rank approximation approach for learning joint embeddings of news stories and images. In experiments, we compare our model to various competitive baselines, and demonstrate the state-of-the-art performance of the proposed text-based and multimodal approaches.", "title": "" }, { "docid": "neg:1840274_13", "text": "This paper presents a model-checking approach for analyzing discrete-time Markov reward models. For this purpose, the temporal logic probabilistic CTL is extended with reward constraints. This allows to formulate complex measures – involving expected as well as accumulated rewards – in a precise and succinct way. Algorithms to efficiently analyze such formulae are introduced. The approach is illustrated by model-checking a probabilistic cost model of the IPv4 zeroconf protocol for distributed address assignment in ad-hoc networks.", "title": "" }, { "docid": "neg:1840274_14", "text": "In the last few years thousands of scientific papers have investigated sentiment analysis, several startups that measure opinions on real data have emerged and a number of innovative products related to this theme have been developed. There are multiple methods for measuring sentiments, including lexical-based and supervised machine learning methods. Despite the vast interest on the theme and wide popularity of some methods, it is unclear which one is better for identifying the polarity (i.e., positive or negative) of a message. Accordingly, there is a strong need to conduct a thorough apple-to-apple comparison of sentiment analysis methods, as they are used in practice, across multiple datasets originated from different data sources.
Such a comparison is key for understanding the potential limitations, advantages, and disadvantages of popular methods. This article aims at filling this gap by presenting a benchmark comparison of twenty-four popular sentiment analysis methods (which we call the state-of-the-practice methods). Our evaluation is based on a benchmark of eighteen labeled datasets, covering messages posted on social networks, movie and product reviews, as well as opinions and comments in news articles. Our results highlight the extent to which the prediction performance of these methods varies considerably across datasets. Aiming at boosting the development of this research area, we open the methods’ codes and datasets used in this article, deploying them in a benchmark system, which provides an open API for accessing and comparing sentence-level sentiment analysis methods.", "title": "" }, { "docid": "neg:1840274_15", "text": "We introduce SPHINCS-Simpira, which is a variant of the SPHINCS signature scheme with Simpira as a building block. SPHINCS was proposed by Bernstein et al. at EUROCRYPT 2015 as a hash-based signature scheme with post-quantum security. At ASIACRYPT 2016, Gueron and Mouha introduced the Simpira family of cryptographic permutations, which delivers high throughput on modern 64-bit processors by using only one building block: the AES round function. The Simpira family claims security against structural distinguishers with a complexity up to 2 using classical computers. In this document, we explain why the same claim can be made against quantum computers as well. Although Simpira follows a very conservative design strategy, our benchmarks show that SPHINCS-Simpira provides a 1.5× speed-up for key generation, a 1.4× speed-up for signing 59-byte messages, and a 2.0× speed-up for verifying 59-byte messages compared to the originally proposed SPHINCS-256.", "title": "" }, { "docid": "neg:1840274_16", "text": "Risk reduction is one of the key objectives pursued by transport safety policies. Particularly, the formulation and implementation of transport safety policies needs the systematic assessment of the risks, the specification of residual risk targets and the monitoring of progresses towards those ones. Risk and safety have always been considered critical in civil aviation. The purpose of this paper is to describe and analyse safety aspects in civil airports. An increase in airport capacity usually involves changes to runways layout, route structures and traffic distribution, which in turn effect the risk level around the airport. For these reasons third party risk becomes an important issue in airports development. To avoid subjective interpretations and to increase model accuracy, risk information are colleted and evaluated in a rational and mathematical manner. The method may be used to draw risk contour maps so to provide a guide to local and national authorities, to population who live around the airport, and to airports operators. Key-Words: Risk Management, Risk assessment methodology, Safety Civil aviation.", "title": "" }, { "docid": "neg:1840274_17", "text": "Nowadays a large number of user-adaptive systems has been developed. Commonly, the effort to build user models is repeated across applications and domains, due to the lack of interoperability and synchronization among user-adaptive systems. There is a strong need for the next generation of user models to be interoperable, i.e. 
to be able to exchange user model portions and to use the information that has been exchanged to enrich the user experience. This paper presents an overview of the well-established literature dealing with user model interoperability, discussing the most representative work which has provided valuable solutions to face interoperability issues. Based on a detailed decomposition and a deep analysis of the selected work, we have isolated a set of dimensions characterizing the user model interoperability process along which the work has been classified. Starting from this analysis, the paper presents some open issues and possible future deployments in the area.", "title": "" }, { "docid": "neg:1840274_18", "text": "In order to implement reliable digital system, it is becoming important making tests and finding bugs by setting up a verification environment. It is possible to set up effective verification environment by using Universal Verification Methodology which is standardized and used in worldwide chip industry. In this work, the slave circuit of Serial Peripheral Interface, which is commonly used for communication integrated circuits, have been designed with hardware description and verification language SystemVerilog and creating test environment with UVM.", "title": "" }, { "docid": "neg:1840274_19", "text": "One of the most exciting but challenging endeavors in music research is to develop a computational model that comprehends the affective content of music signals and organizes a music collection according to emotion. In this paper, we propose a novel acoustic emotion Gaussians (AEG) model that defines a proper generative process of emotion perception in music. As a generative model, AEG permits easy and straightforward interpretations of the model learning processes. To bridge the acoustic feature space and music emotion space, a set of latent feature classes, which are learned from data, is introduced to perform the end-to-end semantic mappings between the two spaces. Based on the space of latent feature classes, the AEG model is applicable to both automatic music emotion annotation and emotion-based music retrieval. To gain insights into the AEG model, we also provide illustrations of the model learning process. A comprehensive performance study is conducted to demonstrate the superior accuracy of AEG over its predecessors, using two emotion annotated music corpora MER60 and MTurk. Our results show that the AEG model outperforms the state-of-the-art methods in automatic music emotion annotation. Moreover, for the first time a quantitative evaluation of emotion-based music retrieval is reported.", "title": "" } ]
1840275
A Pinch of Humor for Short-Text Conversation: An Information Retrieval Approach
[ { "docid": "pos:1840275_0", "text": "In this paper, we present our work of humor recognition on Twitter, which will facilitate affect and sentimental analysis in the social network. The central question of what makes a tweet (Twitter post) humorous drives us to design humor-related features, which are derived from influential humor theories, linguistic norms, and affective dimensions. Using machine learning techniques, we are able to recognize humorous tweets with high accuracy and F-measure. More importantly, we single out features that contribute to distinguishing non-humorous tweets from humorous tweets, and humorous tweets from other short humorous texts (non-tweets). This proves that humorous tweets possess discernible characteristics that are neither found in plain tweets nor in humorous non-tweets. We believe our novel findings will inform and inspire the burgeoning field of computational humor research in the social media.", "title": "" }, { "docid": "pos:1840275_1", "text": "Humor is one of the most interesting and puzzling aspects of human behavior. Despite the attention it has received in fields such as philosophy, linguistics, and psychology, there have been only few attempts to create computational models for humor recognition or generation. In this article, we bring empirical evidence that computational approaches can be successfully applied to the task of humor recognition. Through experiments performed on very large data sets, we show that automatic classification techniques can be effectively used to distinguish between humorous and non-humorous texts, with significant improvements observed over a priori known baselines.", "title": "" } ]
[ { "docid": "neg:1840275_0", "text": "The empirical fact that classifiers, trained on given data collections, perform poorly when tested on data acquired in different settings is theoretically explained in domain adaptation through a shift among distributions of the source and target domains. Alleviating the domain shift problem, especially in the challenging setting where no labeled data are available for the target domain, is paramount for having visual recognition systems working in the wild. As the problem stems from a shift among distributions, intuitively one should try to align them. In the literature, this has resulted in a stream of works attempting to align the feature representations learned from the source and target domains. Here we take a different route. Rather than introducing regularization terms aiming to promote the alignment of the two representations, we act at the distribution level through the introduction of DomaIn Alignment Layers (DIAL), able to match the observed source and target data distributions to a reference one. Thorough experiments on three different public benchmarks we confirm the power of our approach. ∗This work was partially supported by the ERC grant 637076 RoboExNovo (B.C.), and the CHIST-ERA project ALOOF (B.C, F. M. C.).", "title": "" }, { "docid": "neg:1840275_1", "text": "We empirically evaluate several state-of-the-art methods for constructing ensembles of heterogeneous classifiers with stacking and show that they perform (at best) comparably to selecting the best classifier from the ensemble by cross validation. Among state-of-the-art stacking methods, stacking with probability distributions and multi-response linear regression performs best. We propose two extensions of this method, one using an extended set of meta-level features and the other using multi-response model trees to learn at the meta-level. We show that the latter extension performs better than existing stacking approaches and better than selecting the best classifier by cross validation.", "title": "" }, { "docid": "neg:1840275_2", "text": "Face biometric systems are vulnerable to spoofing attacks. Such attacks can be performed in many ways, including presenting a falsified image, video or 3D mask of a valid user. A widely used approach for differentiating genuine faces from fake ones has been to capture their inherent differences in (2D or 3D) texture using local descriptors. One limitation of these methods is that they may fail if an unseen attack type, e.g. a highly realistic 3D mask which resembles real skin texture, is used in spoofing. Here we propose a robust anti-spoofing method by detecting pulse from face videos. Based on the fact that a pulse signal exists in a real living face but not in any mask or print material, the method could be a generalized solution for face liveness detection. The proposed method is evaluated first on a 3D mask spoofing database 3DMAD to demonstrate its effectiveness in detecting 3D mask attacks. More importantly, our cross-database experiment with high quality REAL-F masks shows that the pulse based method is able to detect even the previously unseen mask type whereas texture based methods fail to generalize beyond the development data. Finally, we propose a robust cascade system combining two complementary attack-specific spoof detectors, i.e. 
utilize pulse detection against print attacks and color texture analysis against video attacks.", "title": "" }, { "docid": "neg:1840275_3", "text": "Calcaneonavicular coalition is a congenital anomaly characterized by a connection between the calcaneus and the navicular. Surgery is required in case of chronic pain and after failure of conservative treatment. The authors present here the surgical technique and results of a 2-portals endoscopic resection of a calcaneonavicular synostosis. Both visualization and working portals must be identified with accuracy around the tarsal coalition with fluoroscopic control and according to the localization of the superficial peroneus nerve, to avoid neurologic damages during the resection. The endoscopic procedure provides a better visualization of the whole resection area and allows to achieve a complete resection and avoid plantar residual bone bar. The other important advantage of the endoscopic technique is the possibility to assess and treat, in the same procedure, associated pathologies such as degenerative changes in the lateral side of the talar head with debridement and resection.", "title": "" }, { "docid": "neg:1840275_4", "text": "This paper introduces a new image thresholding method based on minimizing the measures of fuzziness of an input image. The membership function in the thresholding method is used to denote the characteristic relationship between a pixel and its belonging region (the object or the background). In addition, based on the measure of fuzziness, a fuzzy range is defined to find the adequate threshold value within this range. The principle of the method is easy to understand and it can be directly extended to multilevel thresholding. The effectiveness of the new method is illustrated by using test images having various types of histograms. The experimental results indicate that the proposed method has demonstrated good performance in bilevel and trilevel thresholding. Keywords: Image thresholding; Measure of fuzziness; Fuzzy membership function. I. INTRODUCTION. Image thresholding, which extracts the object from the background in an input image, is one of the most common applications in image analysis. For example, in automatic recognition of machine printed or handwritten texts, in shape recognition of objects, and in image enhancement, thresholding is a necessary step for image preprocessing. Among the image thresholding methods, bilevel thresholding separates the pixels of an image into two regions (i.e. the object and the background); one region contains pixels with gray values smaller than the threshold value and the other contains pixels with gray values larger than the threshold value. Further, if the pixels of an image are divided into more than two regions, this is called multilevel thresholding. In general, the threshold is located at the obvious and deep valley of the histogram. However, when the valley is not so obvious, it is very difficult to determine the threshold. During the past decade, many research studies have been devoted to the problem of selecting the appropriate threshold value. The survey of these papers can be seen in the literature.(1-3) Fuzzy set theory has been applied to image thresholding to partition the image space into meaningful regions by minimizing the measure of fuzziness of the image.
The measurement can be expressed by terms such as entropy,(4) index of fuzziness,(5) and index of nonfuzziness.(6) The \"entropy\" involves using Shannon's function to measure the fuzziness of an image so that the threshold can be determined by minimizing the entropy measure. It is very different from the classical entropy measure which measures probabilistic information. The index of fuzziness represents the average amount of fuzziness in an image by measuring the distance between the gray-level image and its near crisp (binary) version. The index of nonfuzziness indicates the average amount of nonfuzziness (crispness) in an image by taking the absolute difference between the crisp version and its complement. In addition, Pal and Rosenfeld(7) developed an algorithm based on minimizing the compactness of fuzziness to obtain the fuzzy and nonfuzzy versions of an ill-defined image such that the appropriate nonfuzzy threshold can be chosen. They used some fuzzy geometric properties, i.e. the area and the perimeter of a fuzzy image, to obtain the measure of compactness. The effectiveness of the method has been illustrated by using two input images with bimodal and unimodal histograms. Another measurement, which is called the index of area coverage (IOAC),(8) has been applied to select the threshold by finding the local minima of the IOAC. Since both the measures of compactness and the IOAC involve the spatial information of an image, they need a long time to compute the perimeter of the fuzzy plane. In this paper, based on the concept of fuzzy set, an effective thresholding method is proposed. Given a certain threshold value, the membership function of a pixel is defined by the absolute difference between the gray level and the average gray level of its belonging region (i.e. the object or the background). The larger the absolute difference is, the smaller the membership value becomes. It is expected that the membership value of each pixel in the input image is as large as possible. In addition, two measures of fuzziness are proposed to indicate the fuzziness of an image. The optimal threshold can then be effectively determined by minimizing the measure of fuzziness of an image. The performance of the proposed approach is compared", "title": "" }, { "docid": "neg:1840275_5", "text": "One standing problem in the area of web-based e-learning is how to support instructional designers to effectively and efficiently retrieve learning materials, appropriate for their educational purposes. Learning materials can be retrieved from structured repositories, such as repositories of Learning Objects and Massive Open Online Courses; they could also come from unstructured sources, such as web hypertext pages. Platforms for distance education often implement algorithms for recommending specific educational resources and personalized learning paths to students. But choosing and sequencing the adequate learning materials to build adaptive courses may reveal to be quite a challenging task. In particular, establishing the prerequisite relationships among learning objects, in terms of prior requirements needed to understand and complete before making use of the subsequent contents, is a crucial step for faculty, instructional designers or automated systems whose goal is to adapt existing learning objects to delivery in new distance courses. Nevertheless, this information is often missing.
In this paper, an innovative machine learning-based approach for the identification of prerequisites between text-based resources is proposed. A feature selection methodology allows us to consider the attributes that are most relevant to the predictive modeling problem. These features are extracted from both the input material and weak-taxonomies available on the web. Input data undergoes a Natural language process that makes finding patterns of interest more easy for the applied automated analysis. Finally, the prerequisite identification is cast to a binary statistical classification task. The accuracy of the approach is validated by means of experimental evaluations on real online coursers covering different subjects.", "title": "" }, { "docid": "neg:1840275_6", "text": "Redundant collagen deposition at sites of healing dermal wounds results in hypertrophic scars. Adipose-derived stem cells (ADSCs) exhibit promise in a variety of anti-fibrosis applications by attenuating collagen deposition. The objective of this study was to explore the influence of an intralesional injection of ADSCs on hypertrophic scar formation by using an established rabbit ear model. Twelve New Zealand albino rabbits were equally divided into three groups, and six identical punch defects were made on each ear. On postoperative day 14 when all wounds were completely re-epithelialized, the first group received an intralesional injection of ADSCs on their right ears and Dulbecco’s modified Eagle’s medium (DMEM) on their left ears as an internal control. Rabbits in the second group were injected with conditioned medium of the ADSCs (ADSCs-CM) on their right ears and DMEM on their left ears as an internal control. Right ears of the third group remained untreated, and left ears received DMEM. We quantified scar hypertrophy by measuring the scar elevation index (SEI) on postoperative days 14, 21, 28, and 35 with ultrasonography. Wounds were harvested 35 days later for histomorphometric and gene expression analysis. Intralesional injections of ADSCs or ADSCs-CM both led to scars with a far more normal appearance and significantly decreased SEI (44.04 % and 32.48 %, respectively, both P <0.01) in the rabbit ears compared with their internal controls. Furthermore, we confirmed that collagen was organized more regularly and that there was a decreased expression of alpha-smooth muscle actin (α-SMA) and collagen type Ι in the ADSC- and ADSCs-CM-injected scars according to histomorphometric and real-time quantitative polymerase chain reaction analysis. There was no difference between DMEM-injected and untreated scars. An intralesional injection of ADSCs reduces the formation of rabbit ear hypertrophic scars by decreasing the α-SMA and collagen type Ι gene expression and ameliorating collagen deposition and this may result in an effective and innovative anti-scarring therapy.", "title": "" }, { "docid": "neg:1840275_7", "text": "Chemoreception is a biological process essential for the survival of animals, as it allows the recognition of important volatile cues for the detection of food, egg-laying substrates, mates, or predators, among other purposes. Furthermore, its role in pheromone detection may contribute to evolutionary processes, such as reproductive isolation and speciation. This key role in several vital biological processes makes chemoreception a particularly interesting system for studying the role of natural selection in molecular adaptation. 
Two major gene families are involved in the perireceptor events of the chemosensory system: the odorant-binding protein (OBP) and chemosensory protein (CSP) families. Here, we have conducted an exhaustive comparative genomic analysis of these gene families in 20 Arthropoda species. We show that the evolution of the OBP and CSP gene families is highly dynamic, with a high number of gains and losses of genes, pseudogenes, and independent origins of subfamilies. Taken together, our data clearly support the birth-and-death model for the evolution of these gene families with an overall high gene turnover rate. Moreover, we show that the genome organization of the two families is significantly more clustered than expected by chance and, more important, that this pattern appears to be actively maintained across the Drosophila phylogeny. Finally, we suggest the homologous nature of the OBP and CSP gene families, dating back their most recent common ancestor after the terrestrialization of Arthropoda (380--450 Ma) and we propose a scenario for the origin and diversification of these families.", "title": "" }, { "docid": "neg:1840275_8", "text": "Display advertising has been a significant source of revenue for publishers and ad networks in online advertising ecosystem. One important business model in online display advertising is Ad Exchange marketplace, also called non-guaranteed delivery (NGD), in which advertisers buy targeted page views and audiences on a spot market through real-time auction. In this paper, we describe a bid landscape forecasting system in NGD marketplace for any advertiser campaign specified by a variety of targeting attributes. In the system, the impressions that satisfy the campaign targeting attributes are partitioned into multiple mutually exclusive samples. Each sample is one unique combination of quantified attribute values. We develop a divide-and-conquer approach that breaks down the campaign-level forecasting problem. First, utilizing a novel star-tree data structure, we forecast the bid for each sample using non-linear regression by gradient boosting decision trees. Then we employ a mixture-of-log-normal model to generate campaign-level bid distribution based on the sample-level forecasted distributions. The experiment results of a system developed with our approach show that it can accurately forecast the bid distributions for various campaigns running on the world's largest NGD advertising exchange system, outperforming two baseline methods in term of forecasting errors.", "title": "" }, { "docid": "neg:1840275_9", "text": "This research work concerns the perceptual evaluation of the performance of information systems (IS) and more particularly, the construct of user satisfaction. Faced with the difficulty of obtaining objective measures for the success of IS, user satisfaction appeared as a substitutive measure of IS performance (DeLone & McLean, 1992). Some researchers have indeed shown that the evaluation of an IS could not happen without an analysis of the feelings and perceptions of individuals who make use of it. Consequently, the concept of satisfaction has been considered as a guarantee of the performance of an IS. Also it has become necessary to ponder the drivers of user satisfaction. 
The analysis of models and measurement tools for satisfaction as well as the adoption of a contingency perspective has allowed the description of principal dimensions that have a direct or less direct impact on user perceptions\n The case study of a large French group, carried out through an interpretativist approach conducted by way of 41 semi-structured interviews, allowed the conceptualization of the problematique of perceptual evaluation of IS in a particular field study. This study led us to confirm the impact of certain factors (such as perceived usefulness, participation, the quality of relations with the IS Function and its resources and also the fit of IS with user needs). On the contrary, other dimensions regarded as fundamental do not receive any consideration or see their influence nuanced in the case studied (the properties of IS, the ease of use, the quality of information). Lastly, this study has allowed for the identification of the influence of certain contingency and contextual variables on user satisfaction and, above all, for the description of the importance of interactions between the IS Function and the users", "title": "" }, { "docid": "neg:1840275_10", "text": "Growing network traffic brings huge pressure to the server cluster. Using load balancing technology in server cluster becomes the choice of most enterprises. Because of many limitations, the development of the traditional load balancing technology has encountered bottlenecks. This has forced companies to find new load balancing method. Software Defined Network (SDN) provides a good method to solve the load balancing problem. In this paper, we implemented two load balancing algorithm that based on the latest SDN network architecture. The first one is a static scheduling algorithm and the second is a dynamic scheduling algorithm. Our experiments show that the performance of the dynamic algorithm is better than the static algorithm.", "title": "" }, { "docid": "neg:1840275_11", "text": "As Cloud computing is reforming the infrastructure of IT industries, it has become one of the critical security concerns of the defensive mechanisms applied to secure Cloud environment. Even if there are tremendous advancements in defense systems regarding the confidentiality, authentication and access control, there is still a challenge to provide security against availability of associated resources. Denial-of-service (DoS) attack and distributed denial-of-service (DDoS) attack can primarily compromise availability of the system services and can be easily started by using various tools, leading to financial damage or affecting the reputation. These attacks are very difficult to detect and filter, since packets that cause the attack are very much similar to legitimate traffic. DoS attack is considered as the biggest threat to IT industry, and intensity, size and frequency of the attack are observed to be increasing every year. Therefore, there is a need for stronger and universal method to impede these attacks. In this paper, we present an overview of DoS attack and distributed DoS attack that can be carried out in Cloud environment and possible defensive mechanisms, tools and devices. In addition, we discuss many open issues and challenges in defending Cloud environment against DoS attack. 
This provides a better understanding of the DDoS attack problem in the Cloud computing environment, the current solution space, and the future research scope for dealing with such attacks efficiently.", "title": "" }, { "docid": "neg:1840275_12", "text": "Orchid plants are members of the Orchidaceae, which consists of more than 25,000 species, distributed almost all over the world but more abundantly in the tropics. There are 177 genera and 1,125 species of orchids that originated in Thailand. Orchid plants collected from different nurseries were examined; chlorotic and mosaic symptoms were observed on Vanda plants, and virus infection was suspected. The symptomatic plants were therefore tested for Cymbidium Mosaic Virus (CYMV), Odontoglossum ring spot virus (ORSV), Poty virus and Tomato Spotted Wilt Virus (TSWV) with the Direct Antigen Coating-Enzyme Linked Immunosorbent Assay (DAC-ELISA), and the results were further confirmed by Transmission Electron Microscopy (TEM). With the two methods, CYMV and ORSV were positively detected in the suspected imported samples, and low positive results were observed for Potex virus, Poty virus and Tomato Spotted Wilt Virus (TSWV).", "title": "" }, { "docid": "neg:1840275_13", "text": "A multi-objective design procedure is applied to the design of a close-coupled inductor for a three-phase interleaved 140 kW DC-DC converter. For the multi-objective optimization, a genetic algorithm is used in combination with a detailed physical model of the inductive component. From the solution of the optimization, important conclusions about the advantages and disadvantages of using close-coupled inductors compared to separate inductors can be drawn.", "title": "" }, { "docid": "neg:1840275_14", "text": "In highly social species such as humans, faces have evolved to convey rich information for social interaction, including expressions of emotions and pain [1-3]. Two motor pathways control facial movement [4-7]: a subcortical extrapyramidal motor system drives spontaneous facial expressions of felt emotions, and a cortical pyramidal motor system controls voluntary facial expressions. The pyramidal system enables humans to simulate facial expressions of emotions not actually experienced. Their simulation is so successful that they can deceive most observers [8-11]. However, machine vision may be able to distinguish deceptive facial signals from genuine facial signals by identifying the subtle differences between pyramidally and extrapyramidally driven movements. Here, we show that human observers could not discriminate real expressions of pain from faked expressions of pain better than chance, and after training human observers, we improved accuracy to a modest 55%. However, a computer vision system that automatically measures facial movements and performs pattern recognition on those movements attained 85% accuracy. The machine system's superiority is attributable to its ability to differentiate the dynamics of genuine expressions from faked expressions. 
Thus, by revealing the dynamics of facial action through machine vision systems, our approach has the potential to elucidate behavioral fingerprints of neural control systems involved in emotional signaling.", "title": "" }, { "docid": "neg:1840275_15", "text": "It is proposed that using Ethernet in the fronthaul, between base station baseband unit (BBU) pools and remote radio heads (RRHs), can bring a number of advantages, from use of lower-cost equipment, shared use of infrastructure with fixed access networks, to obtaining statistical multiplexing and optimized performance through probe-based monitoring and software-defined networking. However, a number of challenges exist: ultra-high-bit-rate requirements from the transport of increased bandwidth radio streams for multiple antennas in future mobile networks, and low latency and jitter to meet delay requirements and the demands of joint processing. A new fronthaul functional division is proposed which can alleviate the most demanding bit-rate requirements by transport of baseband signals instead of sampled radio waveforms, and enable statistical multiplexing gains. Delay and synchronization issues remain to be solved.", "title": "" }, { "docid": "neg:1840275_16", "text": "It has been proposed that D-amino acid oxidase (DAO) plays an essential role in degrading D-serine, an endogenous coagonist of N-methyl-D-aspartate (NMDA) glutamate receptors. DAO shows genetic association with amyotrophic lateral sclerosis (ALS) and schizophrenia, in whose pathophysiology aberrant metabolism of D-serine is implicated. Although the pathology of both essentially involves the forebrain, in rodents, enzymatic activity of DAO is hindbrain-shifted and absent in the region. Here, we show activity-based distribution of DAO in the central nervous system (CNS) of humans compared with that of mice. DAO activity in humans was generally higher than that in mice. In the human forebrain, DAO activity was distributed in the subcortical white matter and the posterior limb of internal capsule, while it was almost undetectable in those areas in mice. In the lower brain centers, DAO activity was detected in the gray and white matters in a coordinated fashion in both humans and mice. In humans, DAO activity was prominent along the corticospinal tract, rubrospinal tract, nigrostriatal system, ponto-/olivo-cerebellar fibers, and in the anterolateral system. In contrast, in mice, the reticulospinal tract and ponto-/olivo-cerebellar fibers were the major pathways showing strong DAO activity. In the human corticospinal tract, activity-based staining of DAO did not merge with a motoneuronal marker, but colocalized mostly with excitatory amino acid transporter 2 and in part with GFAP, suggesting that DAO activity-positive cells are astrocytes seen mainly in the motor pathway. These findings establish the distribution of DAO activity in cerebral white matter and the motor system in humans, providing evidence to support the involvement of DAO in schizophrenia and ALS. Our results raise further questions about the regulation of D-serine in DAO-rich regions as well as the physiological/pathological roles of DAO in white matter astrocytes.", "title": "" }, { "docid": "neg:1840275_17", "text": "Lymphedema is a common condition frequently seen in cancer patients who have had lymph node dissection +/- radiation treatment. Traditional management is mainly non-surgical and unsatisfactory. Surgical treatment has relied on excisional techniques in the past. 
Physiologic operations have more recently been devised to help improve this condition. Assessing patients and deciding which of the available operations to offer them can be challenging. MRI is an extremely useful tool in patient assessment and treatment planning. J. Surg. Oncol. 2017;115:18-22. © 2016 Wiley Periodicals, Inc.", "title": "" }, { "docid": "neg:1840275_18", "text": "A soft-switching three-transistor push-pull (TTPP) converter is proposed in this paper. The 3rd transistor is inserted in the primary side of a traditional push-pull converter. The two original transistors can achieve zero-voltage-switching (ZVS) easily under a wide load range, and the 3rd transistor can also realize zero-voltage-switching assisted by the leakage inductance. The rated voltage of the 3rd transistor is half of that of the main transistors. The operation theory is explained in detail. The soft-switching realization conditions are derived. An 800 W prototype with an 83.3 kHz switching frequency has been built. Experimental results are provided to verify the analysis.", "title": "" } ]
1840276
Practicing Safe Computing: A Multimedia Empirical Examination of Home Computer User Security Behavioral Intentions
[ { "docid": "pos:1840276_0", "text": "This commentary discusses why most IS academic research today lacks relevance to practice and suggests tactics, procedures, and guidelines that the IS academic community might follow in their research efforts and articles to introduce relevance to practitioners. The commentary begins by defining what is meant by relevancy in the context of academic research. It then explains why there is a lack of attention to relevance within the IS scholarly literature. Next, actions that can be taken to make relevance a more central aspect of IS research and to communicate implications of IS research more effectively to IS professionals are suggested.", "title": "" } ]
[ { "docid": "neg:1840276_0", "text": "Nowadays, many consumer videos are captured by portable devices such as iPhone. Different from constrained videos that are produced by professionals, e.g., those for broadcast, summarizing multiple handheld videos from a same scenery is a challenging task. This is because: 1) these videos have dramatic semantic and style variances, making it difficult to extract the representative key frames; 2) the handheld videos are with different degrees of shakiness, but existing summarization techniques cannot alleviate this problem adaptively; and 3) it is difficult to develop a quality model that evaluates a video summary, due to the subjectiveness of video quality assessment. To solve these problems, we propose perceptual multiattribute optimization which jointly refines multiple perceptual attributes (i.e., video aesthetics, coherence, and stability) in a multivideo summarization process. In particular, a weakly supervised learning framework is designed to discover the semantically important regions in each frame. Then, a few key frames are selected based on their contributions to cover the multivideo semantics. Thereafter, a probabilistic model is proposed to dynamically fit the key frames into an aesthetically pleasing video summary, wherein its frames are stabilized adaptively. Experiments on consumer videos taken from sceneries throughout the world demonstrate the descriptiveness, aesthetics, coherence, and stability of the generated summary.", "title": "" }, { "docid": "neg:1840276_1", "text": "Facial fractures can lead to long-term sequelae if not repaired. Complications from surgical approaches can be equally detrimental to the patient. Periorbital approaches via the lower lid can lead to ectropion, entropion, scleral show, canthal malposition, and lid edema.1–6 Ectropion can cause epiphora, whereas entropion often causes pain and irritation due to contact between the cilia and cornea. Transcutaneous and tranconjunctival approaches are commonly used to address fractures of the infraorbital rim and orbital floor. The transconjunctival approach is popular among otolaryngologists and ophthalmologists, whereas transcutaneous approaches are more commonly used by oral maxillofacial surgeons and plastic surgeons.7Ridgwayet al reported in theirmeta-analysis that lid complications are highest with the subciliary approach (19.1%) and lowest with transconjunctival approach (2.1%).5 Raschke et al also found a lower incidence of lower lid malpositionvia the transconjunctival approach comparedwith the subciliary approach.8 Regardless of approach, complications occur and thefacial traumasurgeonmustknowhowtomanage these issues. In this article, we will review the common complications of lower lid surgery and their treatment.", "title": "" }, { "docid": "neg:1840276_2", "text": "Over the past two decades several attempts have been made to address the problem of face recognition and a voluminous literature has been produced. Current face recognition systems are able to perform very well in controlled environments e.g. frontal face recognition, where face images are acquired under frontal pose with strict constraints as defined in related face recognition standards. However, in unconstrained situations where a face may be captured in outdoor environments, under arbitrary illumination and large pose variations these systems fail to work. With the current focus of research to deal with these problems, much attention has been devoted in the facial feature extraction stage. 
Facial feature extraction is the most important step in face recognition. Several studies have been conducted to answer questions such as what features to use and how to describe them, and several feature extraction techniques have been proposed. While many comprehensive literature reviews exist for face recognition, a complete reference for different feature extraction techniques and their advantages/disadvantages with regard to a typical face recognition task in unconstrained scenarios is much needed. In this chapter we present a comprehensive review of the most relevant feature extraction techniques used in 2D face recognition and introduce a new feature extraction technique termed Face-GLOH-signature, used in face recognition for the first time (Sarfraz and Hellwich, 2008), which has a number of advantages over the commonly used feature descriptions in the context of unconstrained face recognition. The goal of feature extraction is to find a specific representation of the data that can highlight relevant information. This representation can be found by maximizing a criterion or can be a pre-defined representation. Usually, a face image is represented by a high dimensional vector containing pixel values (holistic representation) or a set of vectors where each vector summarizes the underlying content of a local region by using a high level", "title": "" }, { "docid": "neg:1840276_3", "text": "Recent progress in advanced driver assistance systems and the race towards autonomous vehicles is mainly driven by two factors: (1) increasingly sophisticated algorithms that interpret the environment around the vehicle and react accordingly, and (2) the continuous improvements of sensor technology itself. In terms of cameras, these improvements typically include higher spatial resolution, which as a consequence requires more data to be processed. The trend to add multiple cameras to cover the entire surrounding of the vehicle does not help in that matter. At the same time, an increasing number of special purpose algorithms need access to the sensor input data to correctly interpret the various complex situations that can occur, particularly in urban traffic. By observing those trends, it becomes clear that a key challenge for vision architectures in intelligent vehicles is to share computational resources. We believe this challenge should be faced by introducing a representation of the sensory data that provides compressed and structured access to all relevant visual content of the scene. The Stixel World discussed in this paper is such a representation. It is a medium-level model of the environment that is specifically designed to compress information about obstacles by leveraging the typical layout of outdoor traffic scenes. It has proven useful for a multitude of automotive vision applications, including object detection, tracking, segmentation, and mapping. In this paper, we summarize the ideas behind the model and generalize it to take into account multiple dense input streams: the image itself, stereo depth maps, and semantic class probability maps that can be generated, e.g., by deep convolutional neural networks. Our generalization is embedded into a novel mathematical formulation for the Stixel model. 
We further sketch how the free parameters of the model can be learned using structured SVMs.", "title": "" }, { "docid": "neg:1840276_4", "text": "In this paper, we propose a context-aware local binary feature learning (CA-LBFL) method for face recognition. Unlike existing learning-based local face descriptors such as discriminant face descriptor (DFD) and compact binary face descriptor (CBFD) which learn each feature code individually, our CA-LBFL exploits the contextual information of adjacent bits by constraining the number of shifts from different binary bits, so that more robust information can be exploited for face representation. Given a face image, we first extract pixel difference vectors (PDV) in local patches, and learn a discriminative mapping in an unsupervised manner to project each pixel difference vector into a context-aware binary vector. Then, we perform clustering on the learned binary codes to construct a codebook, and extract a histogram feature for each face image with the learned codebook as the final representation. In order to exploit local information from different scales, we propose a context-aware local binary multi-scale feature learning (CA-LBMFL) method to jointly learn multiple projection matrices for face representation. To make the proposed methods applicable for heterogeneous face recognition, we present a coupled CA-LBFL (C-CA-LBFL) method and a coupled CA-LBMFL (C-CA-LBMFL) method to reduce the modality gap of corresponding heterogeneous faces in the feature level, respectively. Extensive experimental results on four widely used face datasets clearly show that our methods outperform most state-of-the-art face descriptors.", "title": "" }, { "docid": "neg:1840276_5", "text": "We study in this paper the rate of convergence for learning distributions with the Generative Adversarial Networks (GAN) framework, which subsumes Wasserstein, Sobolev and MMD GANs as special cases. We study a wide range of parametric and nonparametric target distributions, under a collection of objective evaluation metrics. On the nonparametric end, we investigate the minimax optimal rates and fundamental difficulty of the density estimation under the adversarial framework. On the parametric end, we establish theory for neural network classes, that characterizes the interplay between the choice of generator and discriminator. We investigate how to improve the GAN framework with better theoretical guarantee through the lens of regularization. We discover and isolate a new notion of regularization, called the generator/discriminator pair regularization, that sheds light on the advantage of GAN compared to classic parametric and nonparametric approaches for density estimation.", "title": "" }, { "docid": "neg:1840276_6", "text": "Most algorithms that rely on deep learning-based approaches to generate 3D point sets can only produce clouds containing fixed number of points. Furthermore, they typically require large networks parameterized by many weights, which makes them hard to train. In this paper, we propose an auto-encoder architecture that can both encode and decode clouds of arbitrary size and demonstrate its effectiveness at upsampling sparse point clouds. Interestingly, we can do so using less than half as many parameters as state-of-the-art architectures while still delivering better performance. 
We will make our code base fully available.", "title": "" }, { "docid": "neg:1840276_7", "text": "The mitogen-activated protein kinase (MAPK) network is a conserved signalling module that regulates cell fate by transducing a myriad of growth-factor signals. The ability of this network to coordinate and process a variety of inputs from different growth-factor receptors into specific biological responses is, however, still not understood. We investigated how the MAPK network brings about signal specificity in PC-12 cells, a model for neuronal differentiation. Reverse engineering by modular-response analysis uncovered topological differences in the MAPK core network dependent on whether cells were activated with epidermal or neuronal growth factor (EGF or NGF). On EGF stimulation, the network exhibited negative feedback only, whereas a positive feedback was apparent on NGF stimulation. The latter allows for bi-stable Erk activation dynamics, which were indeed observed. By rewiring these regulatory feedbacks, we were able to reverse the specific cell responses to EGF and NGF. These results show that growth factor context determines the topology of the MAPK signalling network and that the resulting dynamics govern cell fate.", "title": "" }, { "docid": "neg:1840276_8", "text": "We show that a neural approach to the task of non-factoid answer reranking can benefit from the inclusion of tried-and-tested handcrafted features. We present a novel neural network architecture based on a combination of recurrent neural networks that are used to encode questions and answers, and a multilayer perceptron. We show how this approach can be combined with additional features, in particular, the discourse features presented by Jansen et al. (2014). Our neural approach achieves state-of-the-art performance on a public dataset from Yahoo! Answers and its performance is further improved by incorporating the discourse features. Additionally, we present a new dataset of Ask Ubuntu questions where the hybrid approach also achieves good results.", "title": "" }, { "docid": "neg:1840276_9", "text": "Cancer stem cells (CSCs), or alternatively called tumor initiating cells (TICs), are a subpopulation of tumor cells, which possesses the ability to self-renew and differentiate into bulk tumor mass. An accumulating body of evidence suggests that CSCs contribute to the growth and recurrence of tumors and the resistance to chemo- and radiotherapy. CSCs achieve self-renewal through asymmetric division, in which one daughter cell retains the self-renewal ability, and the other is destined to differentiation. Recent studies revealed the mechanisms of asymmetric division in normal stem cells (NSCs) and, to a limited degree, CSCs as well. Asymmetric division initiates when a set of polarity-determining proteins mark the apical side of mother stem cells, which arranges the unequal alignment of mitotic spindle and centrosomes along the apical-basal polarity axis. This subsequently guides the recruitment of fate-determining proteins to the basal side of mother cells. Following cytokinesis, two daughter cells unequally inherit centrosomes, differentiation-promoting fate determinants, and other proteins involved in the maintenance of stemness. Modulation of asymmetric and symmetric division of CSCs may provide new strategies for dual targeting of CSCs and the bulk tumor mass. 
In this review, we discuss the current understanding of the mechanisms by which NSCs and CSCs achieve asymmetric division, including the functions of polarity- and fate-determining factors.", "title": "" }, { "docid": "neg:1840276_10", "text": "An open source project typically maintains an open bug repository so that bug reports from all over the world can be gathered. When a new bug report is submitted to the repository, a person, called a triager, examines whether it is a duplicate of an existing bug report. If it is, the triager marks it as DUPLICATE and the bug report is removed from consideration for further work. In the literature, there are approaches exploiting only natural language information to detect duplicate bug reports. In this paper we present a new approach that further involves execution information. In our approach, when a new bug report arrives, its natural language information and execution information are compared with those of the existing bug reports. Then, a small number of existing bug reports are suggested to the triager as the most similar bug reports to the new bug report. Finally, the triager examines the suggested bug reports to determine whether the new bug report duplicates an existing bug report. We calibrated our approach on a subset of the Eclipse bug repository and evaluated our approach on a subset of the Firefox bug repository. The experimental results show that our approach can detect 67%-93% of duplicate bug reports in the Firefox bug repository, compared to 43%-72% using natural language information alone.", "title": "" }, { "docid": "neg:1840276_11", "text": "Many task domains require robots to interpret and act upon natural language commands which are given by people and which refer to the robot’s physical surroundings. Such interpretation is known variously as the symbol grounding problem (Harnad, 1990), grounded semantics (Feldman et al., 1996) and grounded language acquisition (Nenov and Dyer, 1993, 1994). This problem is challenging because people employ diverse vocabulary and grammar, and because robots have substantial uncertainty about the nature and contents of their surroundings, making it difficult to associate the constitutive language elements (principally noun phrases and spatial relations) of the command text to elements of those surroundings. Symbolic models capture linguistic structure but have not scaled successfully to handle the diverse language produced by untrained users. Existing statistical approaches can better handle diversity, but have not to date modeled complex linguistic structure, limiting achievable accuracy. Recent hybrid approaches have addressed limitations in scaling and complexity, but have not effectively associated linguistic and perceptual features. Our framework, called Generalized Grounding Graphs (G), addresses these issues by defining a probabilistic graphical model dynamically according to the linguistic parse structure of a natural language command. This approach scales effectively, handles linguistic diversity, and enables the system to associate parts of a command with the specific objects, places, and events in the external world to which they refer. We show that robots can learn word meanings and use those learned meanings to robustly follow natural language commands produced by untrained users. We demonstrate our approach for both mobility commands (e.g. route directions like “Go down the hallway through the door”) and mobile manipulation commands (e.g. 
physical directives like "Pick up the pallet on the truck") involving a variety of semi-autonomous robotic platforms, including a wheelchair, a micro air vehicle, a forklift, and the Willow Garage PR2. The first two authors contributed equally to this paper.", "title": "" }, { "docid": "neg:1840276_12", "text": "In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter's field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we design modules which employ atrous convolution in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, we propose to augment our previously proposed Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales, with image-level features encoding global context and further boost performance. We also elaborate on implementation details and share our experience on training our system. The proposed ‘DeepLabv3' system significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-the-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.", "title": "" }, { "docid": "neg:1840276_13", "text": "Social Media Analytics is an emerging interdisciplinary research field that aims at combining, extending, and adapting methods for analysis of social media data. On the one hand it can support IS and other research disciplines to answer their research questions and on the other hand it helps to provide architectural designs as well as solution frameworks for new social media-based applications and information systems. The authors suggest that IS should contribute to this field and help to develop and process an interdisciplinary research agenda.", "title": "" }, { "docid": "neg:1840276_14", "text": "OBJECTIVE\nTo construct new size charts for all fetal limb bones.\n\n\nDESIGN\nA prospective, cross-sectional study.\n\n\nSETTING\nUltrasound department of a large hospital.\n\n\nSAMPLE\n663 fetuses scanned once only for the purpose of the study at gestations between 12 and 42 weeks.\n\n\nMETHODS\nCentiles were estimated by combining separate regression models fitted to the mean and standard deviation, assuming that the measurements have a normal distribution at each gestational age.\n\n\nMAIN OUTCOME MEASURES\nDetermination of fetal limb lengths from 12 to 42 weeks of gestation.\n\n\nRESULTS\nSize charts for fetal bones (radius, ulna, humerus, tibia, fibula, femur and foot) are presented and compared with previously published data.\n\n\nCONCLUSIONS\nWe present new size charts for fetal limb bones which take into consideration the increasing variability with gestational age. We have compared these charts with other published data; the differences seen may be largely due to methodological differences. As standards for fetal head and abdominal measurements have been published from the same population, we suggest that the use of the new charts may facilitate prenatal diagnosis of skeletal dysplasias.", "title": "" }, { "docid": "neg:1840276_15", "text": "Modern GPUs with their several hundred cores and more accessible programming models are becoming attractive devices for compute-intensive applications. 
They are particularly well suited for applications, such as image processing, where the end result is intended to be displayed via the graphics card. One of the more versatile and powerful graphics techniques is ray tracing. However, tracing each ray of light in a scene is very computationally expensive and has traditionally been preprocessed on CPUs over hours, if not days. In this paper, Nvidia's new OptiX ray tracing engine is used to show how the power of modern graphics cards, such as the Nvidia Quadro FX 5800, can be harnessed to ray trace several scenes that represent real-life applications at real-time speeds ranging from 20.63 to 67.15 fps. Near-perfect speedup is demonstrated on dual GPUs for scenes with complex geometries. The impact on ray tracing of the recently announced Nvidia Fermi processor is also discussed.", "title": "" }, { "docid": "neg:1840276_16", "text": "An important part of image enhancement is color constancy, which aims to make image colors invariant to illumination. In this paper, the Color Dog (CD), a new learning-based global color constancy method, is proposed. Instead of providing one, it corrects the other methods' illumination estimations by reducing their scattering in the chromaticity space using its previously learned partition. The proposed method outperforms all other methods on most high-quality benchmark datasets. The results are presented and discussed.", "title": "" }, { "docid": "neg:1840276_17", "text": "This letter presents a wideband transformer balun with a center open stub. Since the interconnected line between two coupled-lines greatly deteriorates the performance of the balun in millimeter-wave designs, the proposed center open stub provides a good solution to further optimize the balance of the balun. The proposed transformer balun with center open stub has been fabricated in 90 nm CMOS technology, with a compact chip area of 0.012 mm2. The balun achieves an amplitude imbalance of less than 1 dB for a frequency band ranging from 1 to 48 GHz along with a phase imbalance of less than 5 degrees for the frequency band ranging from 2 to 47 GHz.", "title": "" }, { "docid": "neg:1840276_18", "text": "Recently, there has been a growing interest in end-to-end speech recognition that directly transcribes speech to text without any predefined alignments. In this paper, we explore the use of an attention-based encoder-decoder model for Mandarin speech recognition on a voice search task. Previous attempts have shown that applying attention-based encoder-decoder to Mandarin speech recognition was quite difficult due to the logographic orthography of Mandarin, the large vocabulary and the conditional dependency of the attention model. In this paper, we use character embedding to deal with the large vocabulary. Several tricks are used for effective model training, including L2 regularization, Gaussian weight noise and frame skipping. We compare two attention mechanisms and use attention smoothing to cover long context in the attention model. Taken together, these tricks allow us to finally achieve a character error rate (CER) of 3.58% and a sentence error rate (SER) of 7.43% on the MiTV voice search dataset. While together with a trigram language model, CER and SER reach 2.81% and 5.77%, respectively.", "title": "" }, { "docid": "neg:1840276_19", "text": "The purpose of text clustering in information retrieval is to discover groups of semantically related documents. 
Accurate and comprehensible cluster descriptions (labels) let the user comprehend the collection’s content faster and are essential for various document browsing interfaces. The task of creating descriptive, sensible cluster labels is difficult—typical text clustering algorithms focus on optimizing proximity between documents inside a cluster and rely on keyword representation for describing discovered clusters. In the approach called Description Comes First (DCF) cluster labels are as important as document groups—DCF promotes machine discovery of comprehensible candidate cluster labels later used to discover related document groups. In this paper we describe an application of DCF to the k-Means algorithm, including results of experiments performed on the 20-newsgroups document collection. Experimental evaluation showed that DCF does not decrease the metrics used to assess the quality of document assignment and offers good cluster labels in return. The algorithm utilizes search engine’s data structures directly to scale to large document collections. Introduction Organizing unstructured collections of textual content into semantically related groups, from now on referred to as text clustering or clustering, provides unique ways of digesting large amounts of information. In the context of information retrieval and text mining, a general definition of clustering is the following: given a large set of documents, automatically discover diverse subsets of documents that share a similar topic. In typical applications input documents are first transformed into a mathematical model where each document is described by certain features. The most popular representation for text is the vector space model [Salton, 1989]. In the VSM, documents are expressed as rows in a matrix, where columns represent unique terms (features) and the intersection of a column and a row indicates the importance of a given word to the document. A model such as the VSM helps in calculation of similarity between documents (angle between document vectors) and thus facilitates application of various known (or modified) numerical clustering algorithms. While this is sufficient for many applications, problems arise when one needs to construct some representation of the discovered groups of documents—a label, a symbolic description for each cluster, something to represent the information that makes documents inside a cluster similar to each other and that would convey this information to the user. Cluster labeling problems are often present in modern text and Web mining applications with document browsing interfaces. The process of returning from the mathematical model of clusters to comprehensible, explanatory labels is difficult because text representation used for clustering rarely preserves the inflection and syntax of the original text. Clustering algorithms presented in literature usually fall back to the simplest form of cluster representation—a list of cluster’s keywords (most “central” terms in the cluster). Unfortunately, keywords are stripped from syntactical information and force the user to manually find the underlying concept which is often confusing. Motivation and Related Works The user of a retrieval system judges the clustering algorithm by what he sees in the output— clusters’ descriptions, not the final model which is usually incomprehensible for humans. 
The experiences with the text clustering framework Carrot (www.carrot2.org) resulted in posing a slightly different research problem (aligned with clustering but not exactly the same). We shifted the emphasis of a clustering method to providing comprehensible and accurate cluster labels in addition to discovery of document groups. We call this problem descriptive clustering: discovery of diverse groups of semantically related documents associated with a meaningful, comprehensible and compact text labels. This definition obviously leaves a great deal of freedom for interpretation because terms such as meaningful or accurate are very vague. We narrowed the set of requirements of descriptive clustering to the following ones: — comprehensibility understood as grammatical correctness (word order, inflection, agreement between words if applicable); — conciseness of labels. Phrases selected for a cluster label should minimize its total length (without sacrificing its comprehensibility); — transparency of the relationship between cluster label and cluster content, best explained by ability to answer questions as: “Why was this label selected for these documents?” and “Why is this document in a cluster labeled X?”. Little research has been done to address the requirements above. In the STC algorithm authors employed frequently recurring phrases as both document similarity feature and final cluster description [Zamir and Etzioni, 1999]. A follow-up work [Ferragina and Gulli, 2004] showed how to avoid certain STC limitations and use non-contiguous phrases (so-called approximate sentences). A different idea of ‘label-driven’ clustering appeared in clustering with committees algorithm [Pantel and Lin, 2002], where strongly associated terms related to unambiguous concepts were evaluated using semantic relationships from WordNet. We introduced the DCF approach in our previous work [Osiński and Weiss, 2005] and showed its feasibility using an algorithm called Lingo. Lingo used singular value decomposition of the term-document matrix to select good cluster labels among candidates extracted from the text (frequent phrases). The algorithm was designed to cluster results from Web search engines (short snippets and fragmented descriptions of original documents) and proved to provide diverse meaningful cluster labels. Lingo’s weak point is its limited scalability to full or even medium sized documents. In this", "title": "" } ]
1840277
Click chain model in web search
[ { "docid": "pos:1840277_0", "text": "One of the most important yet insufficiently studied issues in online advertising is the externality effect among ads: the value of an ad impression on a page is affected not just by the location that the ad is placed in, but also by the set of other ads displayed on the page. For instance, a high quality competing ad can detract users from another ad, while a low quality ad could cause the viewer to abandon the page", "title": "" } ]
[ { "docid": "neg:1840277_0", "text": "A situated ontology is a world model used as a computational resource for solving a particular set of problems. It is treated as neither a \\natural\" entity waiting to be discovered nor a purely theoretical construct. This paper describes how a semantico-pragmatic analyzer, Mikrokosmos, uses knowledge from a situated ontology as well as from language-speciic knowledge sources (lexicons and microtheory rules). Also presented are some guidelines for acquiring ontological concepts and an overview of the technology developed in the Mikrokosmos project for large-scale acquisition and maintenance of ontological databases. Tools for acquiring, maintaining, and browsing ontologies can be shared more readily than ontologies themselves. Ontological knowledge bases can be shared as computational resources if such tools provide translators between diierent representation formats. 1 A Situated Ontology World models (ontologies) in computational applications are artiicially constructed entities. They are created, not discovered. This is why so many diierent world models were suggested. Many ontologies are developed for purely theoretical purposes or without the context of a practical situation (e. Many practical knowledge-based systems, on the other hand, employ world or domain models without recognizing them as a separate knowledge source (e.g., Farwell, et al. 1993). In the eld of natural language processing (NLP) there is now a consensus that all NLP systems that seek to represent and manipulate meanings of texts need an ontology (e. In our continued eeorts to build a multilingual knowledge-based machine translation (KBMT) system using an interlingual meaning representation (e.g., Onyshkevych and Nirenburg, 1994), we have developed an ontology to facilitate natural language interpretation and generation. The central goal of the Mikrokosmos project is to develop a system that produces a comprehensive Text Meaning Representation (TMR) for an input text in any of a set of source languages. 1 Knowledge that supports this process is stored both in language-speciic knowledge sources and in an independently motivated, language-neutral ontology (e. An ontology for NLP purposes is a body of knowledge about the world (or a domain) that a) is a repository of primitive symbols used in meaning representation; b) organizes these symbols in a tangled subsumption hierarchy; and c) further interconnects these symbols using a rich system of semantic and discourse-pragmatic relations deened among the concepts. In order for such an ontology to become a computational resource for solving problems such as ambiguity and reference resolution, it must be actually constructed, not merely deened formally, as is the …", "title": "" }, { "docid": "neg:1840277_1", "text": "In developed countries, vitamin B12 (cobalamin) deficiency usually occurs in children, exclusively breastfed ones whose mothers are vegetarian, causing low body stores of vitamin B12. The haematologic manifestation of vitamin B12 deficiency is pernicious anaemia. It is a megaloblastic anaemia with high mean corpuscular volume and typical morphological features, such as hyperlobulation of the nuclei of the granulocytes. In advanced cases, neutropaenia and thrombocytopaenia can occur, simulating aplastic anaemia or leukaemia. In addition to haematological symptoms, infants may experience weakness, fatigue, failure to thrive, and irritability. Other common findings include pallor, glossitis, vomiting, diarrhoea, and icterus. 
Neurological symptoms may affect the central nervous system and, in severe cases, rarely cause brain atrophy. Here, we report an interesting case, a 12-month old infant, who was admitted with neurological symptoms and diagnosed with vitamin B12 deficiency.", "title": "" }, { "docid": "neg:1840277_2", "text": "We introduce FaceVR, a novel method for gaze-aware facial reenactment in the Virtual Reality (VR) context. The key component of FaceVR is a robust algorithm to perform real-time facial motion capture of an actor who is wearing a head-mounted display (HMD), as well as a new data-driven approach for eye tracking from monocular videos. In addition to these face reconstruction components, FaceVR incorporates photo-realistic re-rendering in real time, thus allowing artificial modifications of face and eye appearances. For instance, we can alter facial expressions, change gaze directions, or remove the VR goggles in realistic re-renderings. In a live setup with a source and a target actor, we apply these newly-introduced algorithmic components. We assume that the source actor is wearing a VR device, and we capture his facial expressions and eye movement in real-time. For the target video, we mimic a similar tracking process; however, we use the source input to drive the animations of the target video, thus enabling gaze-aware facial reenactment. To render the modified target video on a stereo display, we augment our capture and reconstruction process with stereo data. In the end, FaceVR produces compelling results for a variety of applications, such as gaze-aware facial reenactment, reenactment in virtual reality, removal of VR goggles, and re-targeting of somebody's gaze direction in a video conferencing call.", "title": "" }, { "docid": "neg:1840277_3", "text": "The risk of predation can have large effects on ecological communities via changes in prey behaviour, morphology and reproduction. Although prey can use a variety of sensory signals to detect predation risk, relatively little is known regarding the effects of predator acoustic cues on prey foraging behaviour. Here we show that an ecologically important marine crab species can detect sound across a range of frequencies, probably in response to particle acceleration. Further, crabs suppress their resource consumption in the presence of experimental acoustic stimuli from multiple predatory fish species, and the sign and strength of this response is similar to that elicited by water-borne chemical cues. When acoustic and chemical cues were combined, consumption differed from expectations based on independent cue effects, suggesting redundancies among cue types. These results highlight that predator acoustic cues may influence prey behaviour across a range of vertebrate and invertebrate taxa, with the potential for cascading effects on resource abundance.", "title": "" }, { "docid": "neg:1840277_4", "text": "Snoring is a common symptom of serious chronic disease known as obstructive sleep apnea (OSA). Knowledge about the location of obstruction site (VVelum, OOropharyngeal lateral walls, T-Tongue, E-Epiglottis) in the upper airways is necessary for proper surgical treatment. In this paper we propose a dual source-filter model similar to the source-filter model of speech to approximate the generation process of snore audio. The first filter models the vocal tract from lungs to the point of obstruction with white noise excitation from the lungs. 
The second filter models the vocal tract from the obstruction point to the lips/nose with impulse train excitation which represents vibrations at the point of obstruction. The filter coefficients are estimated using the closed and open phases of the snore beat cycle. VOTE classification is done by using SVM classifier and filter coefficients as features. The classification experiments are performed on the development set (283 snore audios) of the MUNICH-PASSAU SNORE SOUND CORPUS (MPSSC). We obtain an unweighted average recall (UAR) of 49.58%, which is higher than the INTERSPEECH-2017 snoring sub-challenge baseline technique by ∼3% (absolute).", "title": "" }, { "docid": "neg:1840277_5", "text": "The systematic review (SR) is a methodology used to find and aggregate all relevant existing evidence about a specific research question of interest. One of the activities associated with the SR process is the selection of primary studies, which is a time consuming manual task. The quality of primary study selection impacts the overall quality of SR. The goal of this paper is to propose a strategy named “Score Citation Automatic Selection” (SCAS), to automate part of the primary study selection activity. The SCAS strategy combines two different features, content and citation relationships between the studies, to make the selection activity as automated as possible. Aiming to evaluate the feasibility of our strategy, we conducted an exploratory case study to compare the accuracy of selecting primary studies manually and using the SCAS strategy. The case study shows that for three SRs published in the literature and previously conducted in a manual implementation, the average effort reduction was 58.2 % when applying the SCAS strategy to automate part of the initial selection of primary studies, and the percentage error was 12.98 %. Our case study provided confidence in our strategy, and suggested that it can reduce the effort required to select the primary studies without adversely affecting the overall results of SR.", "title": "" }, { "docid": "neg:1840277_6", "text": "Purpose – The purpose of this paper is to propose and verify that the technology acceptance model (TAM) can be employed to explain and predict the acceptance of mobile learning (M-learning); an activity in which users access learning material with their mobile devices. The study identifies two factors that account for individual differences, i.e. perceived enjoyment (PE) and perceived mobility value (PMV), to enhance the explanatory power of the model. Design/methodology/approach – An online survey was conducted to collect data. A total of 313 undergraduate and graduate students in two Taiwan universities answered the questionnaire. Most of the constructs in the model were measured using existing scales, while some measurement items were created specifically for this research. Structural equation modeling was employed to examine the fit of the data with the model by using the LISREL software. Findings – The results of the data analysis shows that the data fit the extended TAM model well. Consumers hold positive attitudes for M-learning, viewing M-learning as an efficient tool. Specifically, the results show that individual differences have a great impact on user acceptance and that the perceived enjoyment and perceived mobility can predict user intentions of using M-learning. Originality/value – There is scant research available in the literature on user acceptance of M-learning from a customer’s perspective. 
The present research shows that TAM can predict user acceptance of this new technology. Perceived enjoyment and perceived mobility value are antecedents of user acceptance. The model enhances our understanding of consumer motivation of using M-learning. This understanding can aid our efforts when promoting M-learning.", "title": "" }, { "docid": "neg:1840277_7", "text": "Andrew is a distributed computing environment that is a synthesis of the personal computing and timesharing paradigms. When mature, it is expected to encompass over 5,000 workstations spanning the Carnegie Mellon University campus. This paper examines the security issues that arise in such an environment and describes the mechanisms that have been developed to address them. These mechanisms include the logical and physical separation of servers and clients, support for secure communication at the remote procedure call level, a distributed authentication service, a file-protection scheme that combines access lists with UNIX mode bits, and the use of encryption as a basic building block. The paper also discusses the assumptions underlying security in Andrew and analyzes the vulnerability of the system. Usage experience reveals that resource control, particularly of workstation CPU cycles, is more important than originally anticipated and that the mechanisms available to address this issue are rudimentary.", "title": "" }, { "docid": "neg:1840277_8", "text": "Collaborative filtering is a technique for recommending documents to users based on how similar their tastes are to other users. If two users tend to agree on what they like, the system will recommend the same documents to them. The generalized vector space model of information retrieval represents a document by a vector of its similarities to all other documents. The process of collaborative filtering is nearly identical to the process of retrieval using GVSM in a matrix of user ratings. Using this observation, a model for filtering collaboratively using document content is possible.", "title": "" }, { "docid": "neg:1840277_9", "text": "A class of three-dimensional planar arrays in substrate integrated waveguide (SIW) technology is proposed, designed and demonstrated with 8 × 16 elements at 35 GHz for millimeter-wave imaging radar system applications. Endfire element is generally chosen to ensure initial high gain and broadband characteristics for the array. Fermi-TSA (tapered slot antenna) structure is used as element to reduce the beamwidth. Corrugation is introduced to reduce the resulting antenna physical width without degradation of performance. The achieved measured gain in our demonstration is about 18.4 dBi. A taper shaped air gap in the center is created to reduce the coupling between two adjacent elements. An SIW H-to-E-plane vertical interconnect is proposed in this three-dimensional architecture and optimized to connect eight 1 × 16 planar array sheets to the 1 × 8 final network. The overall architecture is exclusively fabricated by the conventional PCB process. Thus, the developed SIW feeder leads to a significant reduction in both weight and cost, compared to the metallic waveguide-based counterpart. A complete antenna structure is designed and fabricated. The planar array ensures a gain of 27 dBi with low SLL of 26 dB and beamwidth as narrow as 5.15 degrees in the E-plane and 6.20 degrees in the 45°-plane.", "title": "" }, { "docid": "neg:1840277_10", "text": "Sarcasm is a form of language in which individual convey their message in an implicit way i.e. 
the opposite of what is implied. Sarcasm detection is the task of predicting sarcasm in text. This is the crucial step in sentiment analysis due to inherently ambiguous nature of sarcasm. With this ambiguity, sarcasm detection has always been a difficult task, even for humans. Therefore sarcasm detection has gained importance in many Natural Language Processing applications. In this paper, we describe approaches, issues, challenges and future scopes in sarcasm detection.", "title": "" }, { "docid": "neg:1840277_11", "text": "Perhaps the most fundamental application of affective computing will be Human-Computer Interaction (HCI) in which the computer should have the ability to detect and track the user's affective states, and make corresponding feedback. The human multi-sensor affect system defines the expectation of multimodal affect analyzer. In this paper, we present our efforts toward audio-visual HCI-related affect recognition. With HCI applications in mind, we take into account some special affective states which indicate users' cognitive/motivational states. Facing the fact that a facial expression is influenced by both an affective state and speech content, we apply a smoothing method to extract the information of the affective state from facial features. In our fusion stage, a voting method is applied to combine audio and visual modalities so that the final affect recognition accuracy is greatly improved. We test our bimodal affect recognition approach on 38 subjects with 11 HCI-related affect states. The extensive experimental results show that the average person-dependent affect recognition accuracy is almost 90% for our bimodal fusion.", "title": "" }, { "docid": "neg:1840277_12", "text": "The GTZAN dataset appears in at least 100 published works, and is the most-used public dataset for evaluation in machine listening research for music genre recognition (MGR). Our recent work, however, shows GTZAN has several faults (repetitions, mislabelings, and distortions), which challenge the interpretability of any result derived using it. In this article, we disprove the claims that all MGR systems are affected in the same ways by these faults, and that the performances of MGR systems in GTZAN are still meaningfully comparable since they all face the same faults. We identify and analyze the contents of GTZAN, and provide a catalog of its faults. We review how GTZAN has been used in MGR research, and find few indications that its faults have been known and considered. Finally, we rigorously study the effects of its faults on evaluating five different MGR systems. The lesson is not to banish GTZAN, but to use it with consideration of its contents.", "title": "" }, { "docid": "neg:1840277_13", "text": "With a growing body of literature linking systems of high-performance work practices to organizational performance outcomes, recent research has pushed for examinations of the underlying mechanisms that enable this connection. In this study, based on a large sample of Welsh public-sector employees, we explored the role of several individual-level attitudinal factors--job satisfaction, organizational commitment, and psychological empowerment--as well as organizational citizenship behaviors that have the potential to provide insights into how human resource systems influence the performance of organizational units. 
The results support a unit-level path model, such that department-level, high-performance work system utilization is associated with enhanced levels of job satisfaction, organizational commitment, and psychological empowerment. In turn, these attitudinal variables were found to be positively linked to enhanced organizational citizenship behaviors, which are further related to a second-order construct measuring departmental performance.", "title": "" }, { "docid": "neg:1840277_14", "text": "Iris authentication is a popular method in which persons are accurately authenticated. During the authentication phase, unique features are extracted. Iris authentication uses IR images for authentication. The proposed work uses color iris images for authentication. Experiments are performed using ten different color models. This paper focuses on the performance evaluation of the color models used for color iris authentication. The proposed method is more reliable and copes with the different noises of color iris images. The experiments reveal the best selection of color model for iris authentication. The proposed method is validated on the UBIRIS noisy iris database. The results demonstrate an accuracy of 92.1%, an equal error rate of 0.072, and a computational time of 0.039 seconds.", "title": "" }, { "docid": "neg:1840277_15", "text": "Multiclass maps are scatterplots, multidimensional projections, or thematic geographic maps where data points have a categorical attribute in addition to two quantitative attributes. This categorical attribute is often rendered using shape or color, which does not scale when overplotting occurs. When the number of data points increases, multiclass maps must resort to data aggregation to remain readable. We present multiclass density maps: multiple 2D histograms computed for each of the category values. Multiclass density maps are meant as a building block to improve the expressiveness and scalability of multiclass map visualization. In this article, we first present a short survey of aggregated multiclass maps, mainly from cartography. We then introduce a declarative model—a simple yet expressive JSON grammar associated with visual semantics—that specifies a wide design space of visualizations for multiclass density maps. Our declarative model is expressive and can be efficiently implemented in visualization front-ends such as modern web browsers. Furthermore, it can be reconfigured dynamically to support data exploration tasks without recomputing the raw data. Finally, we demonstrate how our model can be used to reproduce examples from the past and support exploring data at scale.", "title": "" }, { "docid": "neg:1840277_16", "text": "The balance between facilitation and competition is likely to change with age due to the dynamic nature of nutrient, water and carbon cycles, and light availability during stand development. These processes have received attention in harsh, arid, semiarid and alpine ecosystems but are rarely examined in more productive communities, in mixed-species forest ecosystems or in long-term experiments spanning more than a decade. The aim of this study was to examine how inter- and intraspecific interactions between Eucalyptus globulus Labill. mixed with Acacia mearnsii de Wildeman trees changed with age and productivity in a field experiment in temperate south-eastern Australia. 
Spatially explicit neighbourhood indices were calculated to quantify tree interactions and used to develop growth models to examine how the tree interactions changed with time and stand productivity. Interspecific influences were usually less negative than intraspecific influences, and their difference increased with time for E. globulus and decreased with time for A. mearnsii. As a result, the growth advantages of being in a mixture increased with time for E. globulus and decreased with time for A. mearnsii. The growth advantage of being in a mixture also decreased for E. globulus with increasing stand productivity, showing that spatial as well as temporal dynamics in resource availability influenced the magnitude and direction of plant interactions.", "title": "" }, { "docid": "neg:1840277_18", "text": "A helix antenna consists of a single-conductor or multi-conductor open helix. It has a three-dimensional shape that resembles a spring, with the diameter and the distance between the windings of a given size. This study aimed to design a wifi signal amplifier at 2.4 GHz. The materials used were pipe, copper wire, various connectors, wireless adapters, and various other components. MMANA-GAL was used to simulate the helix antenna, and the design was further tested with the WirelessMon software to measure the wifi signal strength.
The MMANA-GAL results show that the radiation pattern achieves a gain of 4.5 dBi with horizontal polarization, an F/B of −0.41 dB, a rear azimuth of 120° and elevation of 60° at 2400 MHz, an impedance of R 27.9 and jX −430.9, and an elevation angle of 64.4° over real ground at a height of 0.50 m; the wifi signal strength increased from 47% to 55%.", "title": "" } ]
1840278
Exploring ROI size in deep learning based lipreading
[ { "docid": "pos:1840278_0", "text": "Traditional visual speech recognition systems consist of two stages, feature extraction and classification. Recently, several deep learning approaches have been presented which automatically extract features from the mouth images and aim to replace the feature extraction stage. However, research on joint learning of features and classification is very limited. In this work, we present an end-to-end visual speech recognition system based on Long-Short Memory (LSTM) networks. To the best of our knowledge, this is the first model which simultaneously learns to extract features directly from the pixels and perform classification and also achieves state-of-the-art performance in visual speech classification. The model consists of two streams which extract features directly from the mouth and difference images, respectively. The temporal dynamics in each stream are modelled by an LSTM and the fusion of the two streams takes place via a Bidirectional LSTM (BLSTM). An absolute improvement of 9.7% over the base line is reported on the OuluVS2 database, and 1.5% on the CUAVE database when compared with other methods which use a similar visual front-end.", "title": "" } ]
[ { "docid": "neg:1840278_0", "text": "The Digital Bibliography and Library Project (DBLP) is a popular computer science bibliography website hosted at the University of Trier in Germany. It currently contains 2,722,212 computer science publications with additional information about the authors and conferences, journals, or books in which these are published. Although the database covers the majority of papers published in this field of research, it is still hard to browse the vast amount of textual data manually to find insights and correlations in it, in particular time-varying ones. This is also problematic if someone is merely interested in all papers of a specific topic and possible correlated scientific words which may hint at related papers. To close this gap, we propose an interactive tool which consists of two separate components, namely data analysis and data visualization. We show the benefits of our tool and explain how it might be used in a scenario where someone is confronted with the task of writing a state-of-the art report on a specific topic. We illustrate how data analysis, data visualization, and the human user supported by interaction features can work together to find insights which makes typical literature search tasks faster.", "title": "" }, { "docid": "neg:1840278_1", "text": "Article history: Received 27 March 2008 Received in revised form 2 September 2008 Accepted 20 October 2008", "title": "" }, { "docid": "neg:1840278_2", "text": "In many cooperatively breeding species, group members form a dominance hierarchy or queue to inherit the position of breeder. Models aimed at understanding individual variation in helping behavior, however, rarely take into account the effect of dominance rank on expected future reproductive success and thus the potential direct fitness costs of helping. Here we develop a kin-selection model of helping behavior in multimember groups in which only the highest ranking individual breeds. Each group member can invest in the dominant’s offspring at a cost to its own survivorship. The model predicts that lower ranked subordinates, who have a smaller probability of inheriting the group, should work harder than higher ranked subordinates. This prediction holds regardless of whether the intrinsic mortality rate of subordinates increases or decreases with rank. The prediction does not necessarily hold, however, where the costs of helping are higher for lower ranked individuals: a situation that may be common in vertebrates. The model makes two further testable predictions: that the helping effort of an individual of given rank should be lower in larger groups, and the reproductive success of dominants should be greater where group members are more closely related. Empirical evidence for these predictions is discussed. We argue that the effects of rank on stable helping effort may explain why attempts to correlate individual helping effort with relatedness in cooperatively breeding species have met with limited success.", "title": "" }, { "docid": "neg:1840278_3", "text": "This ethnographic study of 22 diverse families in the San Francisco Bay Area provides a holistic account of parents' attitudes about their children's use of technology. We found that parents from different socioeconomic classes have different values and practices around technology use, and that those values and practices reflect structural differences in their everyday lives. 
Calling attention to class differences in technology use challenges the prevailing practice in human-computer interaction of designing for those similar to oneself, which often privileges middle-class values and practices. By discussing the differences between these two groups and the advantages of researching both, this research highlights the benefits of explicitly engaging with socioeconomic status as a category of analysis in design.", "title": "" }, { "docid": "neg:1840278_4", "text": "Multiple automakers have in development or in production automated driving systems (ADS) that offer freeway-pilot functions. This type of ADS is typically limited to restricted-access freeways only, that is, the transition from manual to automated modes takes place only after the ramp merging process is completed manually. One major challenge to extend the automation to ramp merging is that the automated vehicle needs to incorporate and optimize long-term objectives (e.g. successful and smooth merge) when near-term actions must be safely executed. Moreover, the merging process involves interactions with other vehicles whose behaviors are sometimes hard to predict but may influence the merging vehicle's optimal actions. To tackle such a complicated control problem, we propose to apply Deep Reinforcement Learning (DRL) techniques for finding an optimal driving policy by maximizing the long-term reward in an interactive environment. Specifically, we apply a Long Short-Term Memory (LSTM) architecture to model the interactive environment, from which an internal state containing historical driving information is conveyed to a Deep Q-Network (DQN). The DQN is used to approximate the Q-function, which takes the internal state as input and generates Q-values as output for action selection. With this DRL architecture, the historical impact of interactive environment on the long-term reward can be captured and taken into account for deciding the optimal control policy. The proposed architecture has the potential to be extended and applied to other autonomous driving scenarios such as driving through a complex intersection or changing lanes under varying traffic flow conditions.", "title": "" }, { "docid": "neg:1840278_5", "text": "This paper presents a universal morphological feature schema that represents the finest distinctions in meaning that are expressed by overt, affixal inflectional morphology across languages. This schema is used to universalize data extracted from Wiktionary via a robust multidimensional table parsing algorithm and feature mapping algorithms, yielding 883,965 instantiated paradigms in 352 languages. These data are shown to be effective for training morphological analyzers, yielding significant accuracy gains when applied to Durrett and DeNero’s (2013) paradigm learning framework.", "title": "" }, { "docid": "neg:1840278_6", "text": "Leukocyte adhesion deficiency (LAD) type III is a rare syndrome characterized by severe recurrent infections, leukocytosis, and increased bleeding tendency. All integrins are normally expressed yet a defect in their activation leads to the observed clinical manifestations. Less than 20 patients have been reported world wide and the primary genetic defect was identified in some of them. 
Here we describe the clinical features of patients in whom a mutation in the calcium and diacylglycerol-regulated guanine nucleotide exchange factor 1 (CalDAG GEF1) was found and compare them to other cases of LAD III and to animal models harboring a mutation in the CalDAG GEF1 gene. The hallmarks of the syndrome are recurrent infections accompanied by severe bleeding episodes distinguished by osteopetrosis like bone abnormalities and neurodevelopmental defects.", "title": "" }, { "docid": "neg:1840278_7", "text": "We show that unsupervised training of latent capsule layers using only the reconstruction loss, without masking to select the correct output class, causes a loss of equivariances and other desirable capsule qualities. This implies that supervised capsules networks can’t be very deep. Unsupervised sparsening of latent capsule layer activity both restores these qualities and appears to generalize better than supervised masking, while potentially enabling deeper capsules networks. We train a sparse, unsupervised capsules network of similar geometry to (Sabour et al., 2017) on MNIST (LeCun et al., 1998) and then test classification accuracy on affNIST1 using an SVM layer. Accuracy is improved from benchmark 79% to 90%.", "title": "" }, { "docid": "neg:1840278_8", "text": "This paper compares various topologies for 6.6kW on-board charger (OBC) to find out suitable topology. In general, OBC consists of 2-stage; power factor correction (PFC) stage and DC-DC converter stage. Conventional boost PFC, interleaved boost PFC, and semi bridgeless PFC are considered as PFC circuit, and full-bridge converter, phase shift full-bridge converter, and series resonant converter are taken into account for DC-DC converter circuit. The design process of each topology is presented. Then, loss analysis is implemented in order to calculate the efficiency of each topology for PFC circuit and DC-DC converter circuit. In addition, the volume of magnetic components and number of semi-conductor elements are considered. Based on these results, topology selection guideline according to the system specification of 6.6kW OBC is proposed.", "title": "" }, { "docid": "neg:1840278_9", "text": "A large amount of multimedia data (e.g., image and video) is now available on the Web. A multimedia entity does not appear in isolation, but is accompanied by various forms of metadata, such as surrounding text, user tags, ratings, and comments etc. Mining these textual metadata has been found to be effective in facilitating multimedia information processing and management. A wealth of research efforts has been dedicated to text mining in multimedia. This chapter provides a comprehensive survey of recent research efforts. Specifically, the survey focuses on four aspects: (a) surrounding text mining; (b) tag mining; (c) joint text and visual content mining; and (d) cross text and visual content mining. Furthermore, open research issues are identified based on the current research efforts.", "title": "" }, { "docid": "neg:1840278_10", "text": "We propose to help weakly supervised object localization for classes where location annotations are not available, by transferring things and stuff knowledge from a source set with available annotations. The source and target classes might share similar appearance (e.g. bear fur is similar to cat fur) or appear against similar background (e.g. horse and sheep appear against grass). 
To exploit this, we acquire three types of knowledge from the source set: a segmentation model trained on both thing and stuff classes; similarity relations between target and source classes; and cooccurrence relations between thing and stuff classes in the source. The segmentation model is used to generate thing and stuff segmentation maps on a target image, while the class similarity and co-occurrence knowledge help refining them. We then incorporate these maps as new cues into a multiple instance learning framework (MIL), propagating the transferred knowledge from the pixel level to the object proposal level. In extensive experiments, we conduct our transfer from the PASCAL Context dataset (source) to the ILSVRC, COCO and PASCAL VOC 2007 datasets (targets). We evaluate our transfer across widely different thing classes, including some that are not similar in appearance, but appear against similar background. The results demonstrate significant improvement over standard MIL, and we outperform the state-of-the-art in the transfer setting.", "title": "" }, { "docid": "neg:1840278_11", "text": "The vast majority of today's critical infrastructure is supported by numerous feedback control loops and an attack on these control loops can have disastrous consequences. This is a major concern since modern control systems are becoming large and decentralized and thus more vulnerable to attacks. This paper is concerned with the estimation and control of linear systems when some of the sensors or actuators are corrupted by an attacker. We give a new simple characterization of the maximum number of attacks that can be detected and corrected as a function of the pair (A,C) of the system and we show in particular that it is impossible to accurately reconstruct the state of a system if more than half the sensors are attacked. In addition, we show how the design of a secure local control loop can improve the resilience of the system. When the number of attacks is smaller than a threshold, we propose an efficient algorithm inspired from techniques in compressed sensing to estimate the state of the plant despite attacks. We give a theoretical characterization of the performance of this algorithm and we show on numerical simulations that the method is promising and allows to reconstruct the state accurately despite attacks. Finally, we consider the problem of designing output-feedback controllers that stabilize the system despite sensor attacks. We show that a principle of separation between estimation and control holds and that the design of resilient output feedback controllers can be reduced to the design of resilient state estimators.", "title": "" }, { "docid": "neg:1840278_12", "text": "This contribution presents a very brief and critical discussion on automated machine learning (AutoML), which is categorized here into two classes, referred to as narrow AutoML and generalized AutoML, respectively. 
The conclusions yielded from this discussion can be summarized as follows: (1) most existent research on AutoML belongs to the class of narrow AutoML; (2) advances in narrow AutoML are mainly motivated by commercial needs, while any possible benefit obtained is definitely at a cost of increase in computing burdens; (3)the concept of generalized AutoML has a strong tie in spirit with artificial general intelligence (AGI), also called “strong AI”, for which obstacles abound for obtaining pivotal progresses.", "title": "" }, { "docid": "neg:1840278_13", "text": "We describe our early experience in applying our console log mining techniques [19, 20] to logs from production Google systems with thousands of nodes. This data set is five orders of magnitude in size and contains almost 20 times as many messages types as the Hadoop data set we used in [19]. It also has many properties that are unique to large scale production deployments (e.g., the system stays on for several months and multiple versions of the software can run concurrently). Our early experience shows that our techniques, including source code based log parsing, state and sequence based feature creation and problem detection, work well on this production data set. We also discuss our experience in using our log parser to assist the log sanitization.", "title": "" }, { "docid": "neg:1840278_14", "text": "This document defines a deterministic digital signature generation procedure. Such signatures are compatible with standard Digital Signature Algorithm (DSA) and Elliptic Curve Digital Signature Algorithm (ECDSA) digital signatures and can be processed with unmodified verifiers, which need not be aware of the procedure described therein. Deterministic signatures retain the cryptographic security features associated with digital signatures but can be more easily implemented in various environments, since they do not need access to a source of high-quality randomness. Status of This Memo This document is not an Internet Standards Track specification; it is published for informational purposes. This is a contribution to the RFC Series, independently of any other RFC stream. The RFC Editor has chosen to publish this document at its discretion and makes no statement about its value for implementation or deployment. Documents approved for publication by the RFC Editor are not a candidate for any level of Internet Standard; see Section 2 of RFC 5741. Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document.", "title": "" }, { "docid": "neg:1840278_15", "text": "This paper describes EmoTweet-28, a carefully curated corpus of 15,553 tweets annotated with 28 emotion categories for the purpose of training and evaluating machine learning models for emotion classification. EmoTweet-28 is, to date, the largest tweet corpus annotated with fine-grained emotion categories. The corpus contains annotations for four facets of emotion: valence, arousal, emotion category and emotion cues. We first used small-scale content analysis to inductively identify a set of emotion categories that characterize the emotions expressed in microblog text. We then expanded the size of the corpus using crowdsourcing. 
The corpus encompasses a variety of examples including explicit and implicit expressions of emotions as well as tweets containing multiple emotions. EmoTweet-28 represents an important resource to advance the development and evaluation of more emotion-sensitive systems.", "title": "" }, { "docid": "neg:1840278_16", "text": "Convolutional neural networks (CNNs) are well known for producing state-of-the-art recognizers for document processing [1]. However, they can be difficult to implement and are usually slower than traditional multi-layer perceptrons (MLPs). We present three novel approaches to speeding up CNNs: a) unrolling convolution, b) using BLAS (basic linear algebra subroutines), and c) using GPUs (graphic processing units). Unrolled convolution converts the processing in each convolutional layer (both forward-propagation and back-propagation) into a matrix-matrix product. The matrix-matrix product representation of CNNs makes their implementation as easy as MLPs. BLAS is used to efficiently compute matrix products on the CPU. We also present a pixel shader based GPU implementation of CNNs. Results on character recognition problems indicate that unrolled convolution with BLAS produces a dramatic 2.4X−3.0X speedup. The GPU implementation is even faster and produces a 3.1X−4.1X speedup.", "title": "" }, { "docid": "neg:1840278_17", "text": "BACKGROUND\nIt is important to evaluate the impact of cannabis use on onset and course of psychotic illness, as the increasing number of novice cannabis users may translate into a greater public health burden. This study aims to examine the relationship between adolescent onset of regular marijuana use and age of onset of prodromal symptoms, or first episode psychosis, and the manifestation of psychotic symptoms in those adolescents who use cannabis regularly.\n\n\nMETHODS\nA review was conducted of the current literature for youth who initiated cannabis use prior to the age of 18 and experienced psychotic symptoms at, or prior to, the age of 25. Seventeen studies met eligibility criteria and were included in this review.\n\n\nRESULTS\nThe current weight of evidence supports the hypothesis that early initiation of cannabis use increases the risk of early onset psychotic disorder, especially for those with a preexisting vulnerability and who have greater severity of use. There is also a dose-response association between cannabis use and symptoms, such that those who use more tend to experience greater number and severity of prodromal and diagnostic psychotic symptoms. Those with early-onset psychotic disorder and comorbid cannabis use show a poorer course of illness in regards to psychotic symptoms, treatment, and functional outcomes. However, those with early initiation of cannabis use appear to show a higher level of social functioning than non-cannabis users.\n\n\nCONCLUSIONS\nAdolescent initiation of cannabis use is associated, in a dose-dependent fashion, with emergence and severity of psychotic symptoms and functional impairment such that those who initiate use earlier and use at higher frequencies demonstrate poorer illness and treatment outcomes. These associations appear more robust for adolescents at high risk for developing a psychotic disorder.", "title": "" }, { "docid": "neg:1840278_18", "text": "Product search is an important part of online shopping. In contrast to many search tasks, the objectives of product search are not confined to retrieving relevant products. 
Instead, it focuses on finding items that satisfy the needs of individuals and lead to a user purchase. The unique characteristics of product search make search personalization essential for both customers and e-shopping companies. Purchase behavior is highly personal in online shopping and users often provide rich feedback about their decisions (e.g. product reviews). However, the severe mismatch found in the language of queries, products and users make traditional retrieval models based on bag-of-words assumptions less suitable for personalization in product search. In this paper, we propose a hierarchical embedding model to learn semantic representations for entities (i.e. words, products, users and queries) from different levels with their associated language data. Our contributions are three-fold: (1) our work is one of the initial studies on personalized product search; (2) our hierarchical embedding model is the first latent space model that jointly learns distributed representations for queries, products and users with a deep neural network; (3) each component of our network is designed as a generative model so that the whole structure is explainable and extendable. Following the methodology of previous studies, we constructed personalized product search benchmarks with Amazon product data. Experiments show that our hierarchical embedding model significantly outperforms existing product search baselines on multiple benchmark datasets.", "title": "" } ]
1840279
BotMosaic: Collaborative Network Watermark for Botnet Detection
[ { "docid": "pos:1840279_0", "text": "Botnets are now recognized as one of the most serious security threats. In contrast to previous malware, botnets have the characteristic of a command and control (C&C) channel. Botnets also often use existing common protocols, e.g., IRC, HTTP, and in protocol-conforming manners. This makes the detection of botnet C&C a challenging problem. In this paper, we propose an approach that uses network-based anomaly detection to identify botnet C&C channels in a local area network without any prior knowledge of signatures or C&C server addresses. This detection approach can identify both the C&C servers and infected hosts in the network. Our approach is based on the observation that, because of the pre-programmed activities related to C&C, bots within the same botnet will likely demonstrate spatial-temporal correlation and similarity. For example, they engage in coordinated communication, propagation, and attack and fraudulent activities. Our prototype system, BotSniffer, can capture this spatial-temporal correlation in network traffic and utilize statistical algorithms to detect botnets with theoretical bounds on the false positive and false negative rates. We evaluated BotSniffer using many real-world network traces. The results show that BotSniffer can detect real-world botnets with high accuracy and has a very low false positive rate.", "title": "" } ]
[ { "docid": "neg:1840279_0", "text": "On the basis of the similarity between spinel and rocksalt structures, it is shown that some spinel oxides (e.g., MgCo2O4, etc) can be cathode materials for Mg rechargeable batteries around 150 °C. The Mg insertion into spinel lattices occurs via \"intercalation and push-out\" process to form a rocksalt phase in the spinel mother phase. For example, by utilizing the valence change from Co(III) to Co(II) in MgCo2O4, Mg insertion occurs at a considerably high potential of about 2.9 V vs. Mg2+/Mg, and similarly it occurs around 2.3 V vs. Mg2+/Mg with the valence change from Mn(III) to Mn(II) in MgMn2O4, being comparable to the ab initio calculation. The feasibility of Mg insertion would depend on the phase stability of the counterpart rocksalt XO of MgO in Mg2X2O4 or MgX3O4 (X = Co, Fe, Mn, and Cr). In addition, the normal spinel MgMn2O4 and MgCr2O4 can be demagnesiated to some extent owing to the robust host structure of Mg1-xX2O4, where the Mg extraction/insertion potentials for MgMn2O4 and MgCr2O4 are both about 3.4 V vs. Mg2+/Mg. Especially, the former \"intercalation and push-out\" process would provide a safe and stable design of cathode materials for polyvalent cations.", "title": "" }, { "docid": "neg:1840279_1", "text": "This paper investigates the detection and classification of fighting and pre and post fighting events when viewed from a video camera. Specifically we investigate normal, pre, post and actual fighting sequences and classify them. A hierarchical AdaBoost classifier is described and results using this approach are presented. We show it is possible to classify pre-fighting situations using such an approach and demonstrate how it can be used in the general case of continuous sequences.", "title": "" }, { "docid": "neg:1840279_2", "text": "The book Build Your Own Database Driven Website Using PHP & MySQL by Kevin Yank provides a hands-on look at what's involved in building a database-driven Web site. The author does a good job of patiently teaching the reader how to install and configure PHP 5 and MySQL to organize dynamic Web pages and put together a viable content management system. At just over 350 pages, the book is rather small compared to a lot of others on the topic, but it contains all the essentials. The author employs excellent teaching techniques to set up the foundation stone by stone and then grouts everything solidly together later in the book. This book aims at intermediate and advanced Web designers looking to make the leap to server-side programming. The author assumes his readers are comfortable with simple HTML. He provides an excellent introduction to PHP and MySQL (including installation) and explains how to make them work together. The amount of material he covers guarantees that almost any reader will benefit.", "title": "" }, { "docid": "neg:1840279_3", "text": "Patents are a very useful source of technical information. The public availability of patents over the Internet, with for some databases (eg. Espacenet) the assurance of a constant format, allows the development of high value added products using this information source and provides an easy way to analyze patent information. 
This simple and powerful tool facilitates the use of patents in academic research, in SMEs and in developing countries providing a way to use patents as a ideas resource thus improving technological innovation.", "title": "" }, { "docid": "neg:1840279_4", "text": "This study identifies evaluative, attitudinal, and behavioral factors that enhance or reduce the likelihood of consumers aborting intended online transactions (transaction abort likelihood). Path analyses show that risk perceptions associated with eshopping have direct influence on the transaction abort likelihood, whereas benefit perceptions do not. In addition, consumers who have favorable attitudes toward e-shopping, purchasing experiences from the Internet, and high purchasing frequencies from catalogs are less likely to abort intended transactions. The results also show that attitude toward e-shopping mediate relationships between the transaction abort likelihood and other predictors (i.e., effort saving, product offering, control in the information search, and time spent on the Internet per visit). # 2003 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "neg:1840279_5", "text": "In this paper we present ElastiFace, a simple and versatile method for establishing correspondence between textured face models, either for the construction of a blend-shape facial rig or for the exploration of new characters by morphing between a set of input models. While there exists a wide variety of approaches for inter-surface mapping and mesh morphing, most techniques are not suitable for our application: They either require the insertion of additional vertices, are limited to topological planes or spheres, are restricted to near-isometric input meshes, and/or are algorithmically and computationally involved. In contrast, our method extends linear non-rigid registration techniques to allow for strongly varying input geometries. It is geometrically intuitive, simple to implement, computationally efficient, and robustly handles highly non-isometric input models. In order to match the requirements of other applications, such as recent perception studies, we further extend our geometric matching to the matching of input textures and morphing of geometries and rendering styles.", "title": "" }, { "docid": "neg:1840279_6", "text": "Direct slicing of CAD models to generate process planning instructions for solid freeform fabrication may overcome inherent disadvantages of using stereolithography format in terms of the process accuracy, ease of file management, and incorporation of multiple materials. This paper will present the results of our development of a direct slicing algorithm for layered freeform fabrication. The direct slicing algorithm was based on a neutral, international standard (ISO 10303) STEP-formatted non-uniform rational B-spline (NURBS) geometric representation and is intended to be independent of any commercial CAD software. The following aspects of the development effort will be presented: (1) determination of optimal build direction based upon STEP-based NURBS models; (2) adaptive subdivision of NURBS data for geometric refinement; and (3) ray-casting slice generation into sets of raster patterns. The development also provides for multi-material slicing and will provide an effective tool in heterogeneous slicing processes. q 2004 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "neg:1840279_7", "text": "Learning behaviour of artificial agents is commonly studied in the framework of Reinforcement Learning. Reinforcement Learning gained increasing popularity in the past years. This is partially due to developments that enabled the possibility to employ complex function approximators, such as deep networks, in combination with the framework. Two of the core challenges in Reinforcement Learning are the correct assignment of credits over long periods of time and dealing with sparse rewards. In this thesis we propose a framework based on the notions of goals to tackle these problems. This work implements several components required to obtain a form of goal-directed behaviour, similar to how it is observed in human reasoning. This includes the representation of a goal space, learning how to set goals and finally how to reach them. The framework itself is build upon the options model, which is a common approach for representing temporally extended actions in Reinforcement Learning. All components of the proposed method can be implemented as deep networks and the complete system can be learned in an end-to-end fashion using standard optimization techniques. We evaluate the approach on a set of continuous control problems of increasing difficulty. We show, that we are able to solve a difficult gathering task, which poses a challenge to state-of-the-art Reinforcement Learning algorithms. The presented approach is furthermore able to scale to complex kinematic agents of the MuJoCo benchmark.", "title": "" }, { "docid": "neg:1840279_8", "text": "This paper proposes hybrid semiMarkov conditional random fields (SCRFs) for neural sequence labeling in natural language processing. Based on conventional conditional random fields (CRFs), SCRFs have been designed for the tasks of assigning labels to segments by extracting features from and describing transitions between segments instead of words. In this paper, we improve the existing SCRF methods by employing word-level and segment-level information simultaneously. First, word-level labels are utilized to derive the segment scores in SCRFs. Second, a CRF output layer and an SCRF output layer are integrated into an unified neural network and trained jointly. Experimental results on CoNLL 2003 named entity recognition (NER) shared task show that our model achieves state-of-the-art performance when no external knowledge is used.", "title": "" }, { "docid": "neg:1840279_9", "text": "Scientists and consumers preference focused on natural colorants due to the emergence of negative health effects of synthetic colorants which is used for many years in foods. Interest in natural colorants is increasing with each passing day as a consequence of their antimicrobial and antioxidant effects. The biggest obstacle in promotion of natural colorants as food pigment agents is that it requires high investment. For this reason, the R&D studies related issues are shifted to processes to reduce cost and it is directed to pigment production from microorganisms with fermentation. Nowadays, there is pigments obtained by commercially microorganisms or plants with fermantation. These pigments can be use for both food colorant and food supplement. 
In this review, besides colourant and antioxidant properties, antimicrobial properties of natural colorants are discussed.", "title": "" }, { "docid": "neg:1840279_10", "text": "Cloud computing has revolutionized the way computing and software services are delivered to the clients on demand. It offers users the ability to connect to computing resources and access IT managed services with a previously unknown level of ease. Due to this greater level of flexibility, the cloud has become the breeding ground of a new generation of products and services. However, the flexibility of cloud-based services comes with the risk of the security and privacy of users' data. Thus, security concerns among users of the cloud have become a major barrier to the widespread growth of cloud computing. One of the security concerns of cloud is data mining based privacy attacks that involve analyzing data over a long period to extract valuable information. In particular, in current cloud architecture a client entrusts a single cloud provider with his data. It gives the provider and outside attackers having unauthorized access to cloud, an opportunity of analyzing client data over a long period to extract sensitive information that causes privacy violation of clients. This is a big concern for many clients of cloud. In this paper, we first identify the data mining based privacy risks on cloud data and propose a distributed architecture to eliminate the risks.", "title": "" }, { "docid": "neg:1840279_11", "text": "Two literatures or sets of articles are complementary if, considered together, they can reveal useful information of scientific interest not apparent in either of the two sets alone. Of particular interest are complementary literatures that are also mutually isolated and noninteractive (they do not cite each other and are not co-cited). In that case, the intriguing possibility arises that the information gained by combining them is novel. During the past decade, we have identified seven examples of complementary noninteractive structures in the biomedical literature. Each structure led to a novel, plausible, and testable hypothesis that, in several cases, was subsequently corroborated by medical researchers through clinical or laboratory investigation. We have also developed, tested, and described a systematic, computer-aided approach to finding and identifying complementary noninteractive literatures. Specialization, Fragmentation, and a Connection Explosion: By some obscure spontaneous process scientists have responded to the growth of science by organizing their work into specialties, thus permitting each individual to focus on a small part of the total literature. Specialties that grow too large tend to divide into subspecialties that have their own literatures which, by a process of repeated splitting, maintain more or less fixed and manageable size. As the total literature grows, the number of specialties, but not in general the size of each, increases (Kochen, 1963; Swanson, 1990c). But the unintended consequence of specialization is fragmentation. By dividing up the pie, the potential relationships among its pieces tend to be neglected.
Although scientific literature cannot, in the long run, grow disproportionately to the growth of the communities and resources that produce it, combinations of implicitly related segments of literature can grow much faster than the literature itself and can readily exceed the capacity of the community to identify and assimilate such relatedness (Swanson, 1993). The significance of the “information explosion” thus may lie not in an explosion of quantity per se, but in an incalculably greater combinatorial explosion of unnoticed and unintended logical connections. The Significance of Complementary Noninteractive Literatures: If two literatures each of substantial size are linked by arguments that they respectively put forward - that is, are “logically” related, or complementary - one would expect to gain useful information by combining them. For example, suppose that one (biomedical) literature establishes that some environmental factor A influences certain internal physiological conditions and a second literature establishes that these same physiological changes influence the course of disease C. Presumably, then, anyone who reads both literatures could conclude that factor A might influence disease C. Under such conditions of complementarity one would also expect the two literatures to refer to each other. If, however, the two literatures were developed independently of one another, the logical linkage illustrated may be both unintended and unnoticed. To detect such mutual isolation, we examine the citation pattern. If two literatures are “noninteractive” - that is, if they have never (or seldom) been cited together, and if neither cites the other - then it is possible that scientists have not previously considered both literatures together, and so it is possible that no one is aware of the implicit A-C connection. The two conditions, complementarity and noninteraction, describe a model structure that shows how useful information can remain undiscovered even though its components consist of public knowledge (Swanson, 1987, 1991). Public Knowledge / Private Knowledge: There is, of course, no way to know in any particular case whether the possibility of an A-C relationship in the above model has or has not occurred to someone, or whether or not anyone has actually considered the two literatures on A and C together, a private matter that necessarily remains conjectural. However, our argument is based only on determining whether there is any printed evidence to the contrary. We are concerned with public rather than private knowledge - with the state of the record produced rather than the state of mind of the producers (Swanson, 1990d). The point of bringing together the AB and BC literatures, in any event, is not to \"prove\" an A-C linkage (by considering only transitive relationships) but rather to call attention to an apparently unnoticed association that may be worth investigating. In principle any chain of scientific, including analogic, reasoning in which different links appear in noninteractive literatures may lead to the discovery of new interesting connections. \"What people know\" is a common understanding of what is meant by \"knowledge\".
If taken in this subjective sense, the idea of \"knowledge discovery\" could mean merely that someone discovered something they hadn't known before. Our focus in the present paper is on a second sense of the word \"knowledge\", a meaning associated with the products of human intellectual activity, as encoded in the public record, rather than with the contents of the human mind. This abstract world of human-created \"objective\" knowledge is open to exploration and discovery, for it can contain territory that is subjectively unknown to anyone (Popper, 1972). Our work is directed toward the discovery of scientifically useful information implicit in the public record, but not previously made explicit. The problem we address concerns structures within the scientific literature, not within the mind. The Process of Finding Complementary Noninteractive Literatures: During the past ten years, we have pursued three goals: i) to show in principle how new knowledge might be gained by synthesizing logically related noninteractive literatures; ii) to demonstrate that such structures do exist, at least within the biomedical literature; and iii) to develop a systematic process for finding them. In pursuit of goal iii, we have created interactive software and database search strategies that can facilitate the discovery of complementary structures in the published literature of science. The universe or search space under consideration is limited only by the coverage of the major scientific databases, though we have focused primarily on the biomedical field and the MEDLINE database (8 million records). In 1991, a systematic approach to finding complementary structures was outlined and became a point of departure for software development (Swanson, 1991). The system that has now taken shape is based on a 3-way interaction between computer software, bibliographic databases, and a human operator. The interaction generates information structures that are used heuristically to guide the search for promising complementary literatures. The user of the system begins by choosing a question of scientific interest that can be associated with a literature, C. Elsewhere we describe and evaluate experimental computer software, which we call ARROWSMITH (Swanson & Smalheiser, 1997), that performs two separate functions that can be used independently. The first function produces a list of candidates for a second literature, A, complementary to C, from which the user can select one candidate (at a time) as input, along with C, to the second function. This first function can be considered as a computer-assisted process of problem-discovery, an issue identified in the AI literature (Langley et al., 1987, pp. 304-307). Alternatively, the user may wish to identify a second literature, A, as a conjecture or hypothesis generated independently of the computer-produced list of candidates. Our approach has been based on the use of article titles as a guide to identifying complementary literatures. As indicated above, our point of departure for the second function is a tentative scientific hypothesis associated with two literatures, A and C. A title-word search of MEDLINE is used to create two local computer title-files associated with A and C, respectively. These files are used as input to the ARROWSMITH software, which then produces a list of all words common to the two sets of titles, except for words excluded by an extensive stoplist (presently about 5000 words).
The resulting list of words provides the basis for identifying title-word pathways that might provide clues to the presence of complementary arguments within the literatures corresponding to A and C. The output of this procedure is a structured titledisplay (plus journal citation), that serves as a heuristic aid to identifying word-linked titles and serves also as an organized guide to the literature.", "title": "" }, { "docid": "neg:1840279_12", "text": "In this article, we address the cross-domain (i.e., street and shop) clothing retrieval problem and investigate its real-world applications for online clothing shopping. It is a challenging problem due to the large discrepancy between street and shop domain images. We focus on learning an effective feature-embedding model to generate robust and discriminative feature representation across domains. Existing triplet embedding models achieve promising results by finding an embedding metric in which the distance between negative pairs is larger than the distance between positive pairs plus a margin. However, existing methods do not address the challenges in the cross-domain clothing retrieval scenario sufficiently. First, the intradomain and cross-domain data relationships need to be considered simultaneously. Second, the number of matched and nonmatched cross-domain pairs are unbalanced. To address these challenges, we propose a deep cross-triplet embedding algorithm together with a cross-triplet sampling strategy. The extensive experimental evaluations demonstrate the effectiveness of the proposed algorithms well. Furthermore, we investigate two novel online shopping applications, clothing trying on and accessories recommendation, based on a unified cross-domain clothing retrieval framework.", "title": "" }, { "docid": "neg:1840279_13", "text": "With the development of the web of data, recent statistical, data-to-text generation approaches have focused on mapping data (e.g., database records or knowledge-base (KB) triples) to natural language. In contrast to previous grammar-based approaches, this more recent work systematically eschews syntax and learns a direct mapping between meaning representations and natural language. By contrast, I argue that an explicit model of syntax can help support NLG in several ways. Based on case studies drawn from KB-to-text generation, I show that syntax can be used to support supervised training with little training data; to ensure domain portability; and to improve statistical hypertagging.", "title": "" }, { "docid": "neg:1840279_14", "text": "Hybrid unmanned aircraft, that combine hover capability with a wing for fast and efficient forward flight, have attracted a lot of attention in recent years. Many different designs are proposed, but one of the most promising is the tailsitter concept. However, tailsitters are difficult to control across the entire flight envelope, which often includes stalled flight. Additionally, their wing surface makes them susceptible to wind gusts. In this paper, we propose incremental nonlinear dynamic inversion control for the attitude and position control. The result is a single, continuous controller, that is able to track the acceleration of the vehicle across the flight envelope. The proposed controller is implemented on the Cyclone hybrid UAV. Multiple outdoor experiments are performed, showing that unmodeled forces and moments are effectively compensated by the incremental control structure, and that accelerations can be tracked across the flight envelope. 
Finally, we provide a comprehensive procedure for the implementation of the controller on other types of hybrid UAVs.", "title": "" }, { "docid": "neg:1840279_15", "text": "Fear appeals are a polarizing issue, with proponents confident in their efficacy and opponents confident that they backfire. We present the results of a comprehensive meta-analysis investigating fear appeals' effectiveness for influencing attitudes, intentions, and behaviors. We tested predictions from a large number of theories, the majority of which have never been tested meta-analytically until now. Studies were included if they contained a treatment group exposed to a fear appeal, a valid comparison group, a manipulation of depicted fear, a measure of attitudes, intentions, or behaviors concerning the targeted risk or recommended solution, and adequate statistics to calculate effect sizes. The meta-analysis included 127 articles (9% unpublished) yielding 248 independent samples (NTotal = 27,372) collected from diverse populations. Results showed a positive effect of fear appeals on attitudes, intentions, and behaviors, with the average effect on a composite index being random-effects d = 0.29. Moderation analyses based on prominent fear appeal theories showed that the effectiveness of fear appeals increased when the message included efficacy statements, depicted high susceptibility and severity, recommended one-time only (vs. repeated) behaviors, and targeted audiences that included a larger percentage of female message recipients. Overall, we conclude that (a) fear appeals are effective at positively influencing attitude, intentions, and behaviors; (b) there are very few circumstances under which they are not effective; and (c) there are no identified circumstances under which they backfire and lead to undesirable outcomes.", "title": "" }, { "docid": "neg:1840279_16", "text": "In recent years, there has been a growing interest in designing multi-robot systems (hereafter MRSs) to provide cost effective, fault-tolerant and reliable solutions to a variety of automated applications. Here, we review recent advancements in MRSs specifically designed for cooperative object transport, which requires the members of MRSs to coordinate their actions to transport objects from a starting position to a final destination. To achieve cooperative object transport, a wide range of transport, coordination and control strategies have been proposed. Our goal is to provide a comprehensive summary for this relatively heterogeneous and fast-growing body of scientific literature. While distilling the information, we purposefully avoid using hierarchical dichotomies, which have been traditionally used in the field of MRSs. Instead, we employ a coarse-grain approach by classifying each study based on the transport strategy used; pushing-only, grasping and caging. We identify key design constraints that may be shared among these studies despite considerable differences in their design methods. In the end, we discuss several open challenges and possible directions for future work to improve the performance of the current MRSs. Overall, we hope to increasethe visibility and accessibility of the excellent studies in the field and provide a framework that helps the reader to navigate through them more effectively.", "title": "" }, { "docid": "neg:1840279_17", "text": "A cardiac circumstance affected through irregular electrical action of the heart is called an arrhythmia. 
A noninvasive method called the electrocardiogram (ECG) is used to diagnose arrhythmias or irregularities of the heart. The difficulty doctors encounter in analyzing heartbeat irregularities is due to the non-stationary nature of the ECG signal, the presence of noise, and the abnormality of the heartbeat. Computer-assisted analysis of the ECG signal supports doctors in diagnosing cardiovascular diseases. The major limitations of existing ECG-based arrhythmia detection approaches stem from the non-stationary behavior of ECG signals and the unobserved information present in them. In addition, detection based on the Extreme Learning Machine (ELM) has become a common technique in machine learning; however, it easily suffers from overfitting. This paper proposes a hybrid classification technique using a Bayesian Extreme Learning Machine (B-ELM) for heartbeat recognition in arrhythmia detection (AD). The proposed technique is capable of detecting arrhythmia classes with a maximum accuracy of 98.09% and a low computational time of about 2.5 s.", "title": "" }, { "docid": "neg:1840279_18", "text": "Entity typing is an essential task for constructing a knowledge base. However, many non-English knowledge bases fail to type their entities due to the absence of a reasonable local hierarchical taxonomy. Since constructing a widely accepted taxonomy is a hard problem, we propose to type these non-English entities with some widely accepted taxonomies in English, such as DBpedia, Yago and Freebase. We define this problem as cross-lingual type inference. In this paper, we present CUTE to type Chinese entities with DBpedia types. First we exploit the cross-lingual entity linking between Chinese and English entities to construct the training data. Then we propose a multi-label hierarchical classification algorithm to type these Chinese entities. Experimental results show the effectiveness and efficiency of our method.", "title": "" }, { "docid": "neg:1840279_19", "text": "Folliculitis decalvans is an inflammatory presentation of cicatrizing alopecia characterized by inflammatory perifollicular papules and pustules. It generally occurs in adult males, predominantly involving the vertex and occipital areas of the scalp. The use of dermatoscopy in hair and scalp diseases improves diagnostic accuracy. Some trichoscopic findings, such as follicular tufts, perifollicular erythema, crusts and pustules, can be observed in folliculitis decalvans. More research on the pathogenesis and treatment options of this disfiguring disease is required for improving patient management.", "title": "" } ]
1840280
The DSM diagnostic criteria for gender identity disorder in adolescents and adults.
[ { "docid": "pos:1840280_0", "text": "The sexual behaviors and attitudes of male-to-female (MtF) transsexuals have not been investigated systematically. This study presents information about sexuality before and after sex reassignment surgery (SRS), as reported by 232 MtF patients of one surgeon. Data were collected using self-administered questionnaires. The mean age of participants at time of SRS was 44 years (range, 18-70 years). Before SRS, 54% of participants had been predominantly attracted to women and 9% had been predominantly attracted to men. After SRS, these figures were 25% and 34%, respectively.Participants' median numbers of sexual partners before SRS and in the last 12 months after SRS were 6 and 1, respectively. Participants' reported number of sexual partners before SRS was similar to the number of partners reported by male participants in the National Health and Social Life Survey (NHSLS). After SRS, 32% of participants reported no sexual partners in the last 12 months, higher than reported by male or female participants in the NHSLS. Bisexual participants reported more partners before and after SRS than did other participants. 49% of participants reported hundreds of episodes or more of sexual arousal to cross-dressing or cross-gender fantasy (autogynephilia) before SRS; after SRS, only 3% so reported. More frequent autogynephilic arousal after SRS was correlated with more frequent masturbation, a larger number of sexual partners, and more frequent partnered sexual activity. 85% of participants experienced orgasm at least occasionally after SRS and 55% ejaculated with orgasm.", "title": "" }, { "docid": "pos:1840280_1", "text": "The present study reports on the construction of a dimensional measure of gender identity (gender dysphoria) for adolescents and adults. The 27-item gender identity/gender dysphoria questionnaire for adolescents and adults (GIDYQ-AA) was administered to 389 university students (heterosexual and nonheterosexual) and 73 clinic-referred patients with gender identity disorder. Principal axis factor analysis indicated that a one-factor solution, accounting for 61.3% of the total variance, best fits the data. Factor loadings were all >or= .30 (median, .82; range, .34-.96). A mean total score (Cronbach's alpha, .97) was computed, which showed strong evidence for discriminant validity in that the gender identity patients had significantly more gender dysphoria than both the heterosexual and nonheterosexual university students. Using a cut-point of 3.00, we found the sensitivity was 90.4% for the gender identity patients and specificity was 99.7% for the controls. The utility of the GIDYQ-AA is discussed.", "title": "" } ]
[ { "docid": "neg:1840280_0", "text": "In this article we tackle the issue of searchable encryption with a generalized query model. Departing from many previous works that focused on queries consisting of a single keyword, we consider the the case of queries consisting of arbitrary boolean expressions on keywords, that is to say conjunctions and disjunctions of keywords and their complement. Our construction of boolean symmetric searchable encryption BSSE is mainly based on the orthogonalization of the keyword field according to the Gram-Schmidt process. Each document stored in an outsourced server is associated with a label which contains all the keywords corresponding to the document, and searches are performed by way of a simple inner product. Furthermore, the queries in the BSSE scheme are randomized. This randomization hides the search pattern of the user since the search results cannot be associated deterministically to queries. We formally define an adaptive security model for the BSSE scheme. In addition, the search complexity is in $O(n)$ where $n$ is the number of documents stored in the outsourced server.", "title": "" }, { "docid": "neg:1840280_1", "text": "Technology offers the potential to objectively monitor people's eating and activity behaviors and encourage healthier lifestyles. BALANCE is a mobile phone-based system for long term wellness management. The BALANCE system automatically detects the user's caloric expenditure via sensor data from a Mobile Sensing Platform unit worn on the hip. Users manually enter information on foods eaten via an interface on an N95 mobile phone. Initial validation experiments measuring oxygen consumption during treadmill walking and jogging show that the system's estimate of caloric output is within 87% of the actual value. Future work will refine and continue to evaluate the system's efficacy and develop more robust data input and activity inference methods.", "title": "" }, { "docid": "neg:1840280_2", "text": "In this paper, we introduce a new large-scale dataset of ships, called SeaShips, which is designed for training and evaluating ship object detection algorithms. The dataset currently consists of 31 455 images and covers six common ship types (ore carrier, bulk cargo carrier, general cargo ship, container ship, fishing boat, and passenger ship). All of the images are from about 10 080 real-world video segments, which are acquired by the monitoring cameras in a deployed coastline video surveillance system. They are carefully selected to mostly cover all possible imaging variations, for example, different scales, hull parts, illumination, viewpoints, backgrounds, and occlusions. All images are annotated with ship-type labels and high-precision bounding boxes. Based on the SeaShips dataset, we present the performance of three detectors as a baseline to do the following: 1) elementarily summarize the difficulties of the dataset for ship detection; 2) show detection results for researchers using the dataset; and 3) make a comparison to identify the strengths and weaknesses of the baseline algorithms. In practice, the SeaShips dataset would hopefully advance research and applications on ship detection.", "title": "" }, { "docid": "neg:1840280_3", "text": "This paper proposes two RF self-interference cancellation techniques. Their small form-factor enables full-duplex communication links for small-to-medium size portable devices and hence promotes the adoption of full-duplex in mass-market applications and next-generation standards, e.g. 
IEEE802.11 and 5G. Measured prototype implementations of an electrical balance duplexer and a dual-polarized antenna both achieve >50 dB self-interference suppression at RF, operating in the ISM band at 2.45GHz.", "title": "" }, { "docid": "neg:1840280_4", "text": "The use of tablet PCs is spreading rapidly, and accordingly users browsing and inputting personal information in public spaces can often be seen by third parties. Unlike conventional mobile phones and notebook PCs equipped with distinct input devices (e.g., keyboards), tablet PCs have touchscreen keyboards for data input. Such integration of display and input device increases the potential for harm when the display is captured by malicious attackers. This paper presents the description of reconstructing tablet PC displays via measurement of electromagnetic (EM) emanation. In conventional studies, such EM display capture has been achieved by using non-portable setups. Those studies also assumed that a large amount of time was available in advance of capture to obtain the electrical parameters of the target display. In contrast, this paper demonstrates that such EM display capture is feasible in real time by a setup that fits in an attaché case. The screen image reconstruction is achieved by performing a prior course profiling and a complemental signal processing instead of the conventional fine parameter tuning. Such complemental processing can eliminate the differences of leakage parameters among individuals and therefore correct the distortions of images. The attack distance, 2 m, makes this method a practical threat to general tablet PCs in public places. This paper discusses possible attack scenarios based on the setup described above. In addition, we describe a mechanism of EM emanation from tablet PCs and a countermeasure against such EM display capture.", "title": "" }, { "docid": "neg:1840280_5", "text": "Graph-based recommendation approaches can model associations between users and items alongside additional contextual information. Recent studies demonstrated that representing features extracted from social media (SM) auxiliary data, like friendships, jointly with traditional users/items ratings in the graph, contribute to recommendation accuracy. In this work, we take a step further and propose an extended graph representation that includes socio-demographic and personal traits extracted from the content posted by the user on SM. Empirical results demonstrate that processing unstructured textual information collected from Twitter and representing it in structured form in the graph improves recommendation performance, especially in cold start conditions.", "title": "" }, { "docid": "neg:1840280_6", "text": "Since an ever-increasing part of the population makes use of social media in their day-to-day lives, social media data is being analysed in many different disciplines. The social media analytics process involves four distinct steps, data discovery, collection, preparation, and analysis. While there is a great deal of literature on the challenges and difficulties involving specific data analysis methods, there hardly exists research on the stages of data discovery, collection, and preparation. To address this gap, we conducted an extended and structured literature analysis through which we identified challenges addressed and solutions proposed. The literature search revealed that the volume of data was most often cited as a challenge by researchers. In contrast, other categories have received less attention. 
Based on the results of the literature search, we discuss the most important challenges for researchers and present potential solutions. The findings are used to extend an existing framework on social media analytics. The article provides benefits for researchers and practitioners who wish to collect and analyse social media data.", "title": "" }, { "docid": "neg:1840280_7", "text": "David Moher is at the Ottawa Methods Centre, Ottawa Hospital Research Institute, and the Department of Epidemiology and Community Medicine, Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada. Alessandro Liberati is at the Università di Modena e Reggio Emilia, Modena, and the Centro Cochrane Italiano, Istituto Ricerche Farmacologiche Mario Negri, Milan, Italy. Jennifer Tetzlaff is at the Ottawa Methods Centre, Ottawa Hospital Research Institute, Ottawa, Ontario. Douglas G Altman is at the Centre for Statistics in Medicine, University of Oxford, Oxford, United Kingdom. Membership of the PRISMA Group is provided in the Acknowledgements.", "title": "" }, { "docid": "neg:1840280_8", "text": "A magnetic-tip steerable needle is presented with application to aiding deep brain stimulation electrode placement. The magnetic needle is 1.3mm in diameter at the tip with a 0.7mm diameter shaft, which is selected to match the size of a deep brain stimulation electrode. The tip orientation is controlled by applying torques to the embedded neodymium-iron-boron permanent magnets with a clinically-sized magnetic-manipulation system. The prototype design is capable of following trajectories under human-in-the-loop control with minimum bend radii of 100mm without inducing tissue damage and down to 30mm if some tissue damage is tolerable. The device can be retracted and redirected to reach multiple targets with a single insertion point.", "title": "" }, { "docid": "neg:1840280_9", "text": "The importance of information as a resource for economic growth and education is steadily increasing. Due to technological advances in computer industry and the explosive growth of the Internet much valuable information will be available in digital libraries. This paper introduces a system that aims to support a user's browsing activities in document sets retrieved from a digital library. Latent Semantic Analysis is applied to extract salient semantic structures and citation patterns of documents stored in a digital library in a computationally expensive batch job. At retrieval time, cluster techniques are used to organize retrieved documents into clusters according to the previously extracted semantic similarities. A modified Boltzman algorithm [1] is employed to spatially organize the resulting clusters and their documents in the form of a three-dimensional information landscape or \"i-scape\". The i-scape is then displayed for interactive exploration via a multi-modal, virtual reality CAVE interface [8]. Users' browsing activities are recorded and user models are extracted to give newcomers online help based on previous navigation activity as well as to enable experienced users to recognize and exploit past user traces. 
In this way, the system provides interactive services to assist users in the spatial navigation, interpretation, and detailed exploration of potentially large document sets matching a query.", "title": "" }, { "docid": "neg:1840280_10", "text": "We introduce MIDI-VAE, a neural network model based on Variational Autoencoders that is capable of handling polyphonic music with multiple instrument tracks, as well as modeling the dynamics of music by incorporating note durations and velocities. We show that MIDI-VAE can perform style transfer on symbolic music by automatically changing pitches, dynamics and instruments of a music piece from, e.g., a Classical to a Jazz style. We evaluate the efficacy of the style transfer by training separate style validation classifiers. Our model can also interpolate between short pieces of music, produce medleys and create mixtures of entire songs. The interpolations smoothly change pitches, dynamics and instrumentation to create a harmonic bridge between two music pieces. To the best of our knowledge, this work represents the first successful attempt at applying neural style transfer to complete musical compositions.", "title": "" }, { "docid": "neg:1840280_11", "text": "Block-based programming languages like Scratch, Alice and Blockly are becoming increasingly common as introductory languages in programming education. There is substantial research showing that these visual programming environments are suitable for teaching programming concepts. But, what do people do when they use Scratch? In this paper we explore the characteristics of Scratch programs. To this end we have scraped the Scratch public repository and retrieved 250,000 projects. We present an analysis of these projects in three different dimensions. Initially, we look at the types of blocks used and the size of the projects. We then investigate complexity, used abstractions and programming concepts. Finally we detect code smells such as large scripts, dead code and duplicated code blocks. Our results show that 1) most Scratch programs are small, however Scratch programs consisting of over 100 sprites exist, 2) programming abstraction concepts like procedures are not commonly used and 3) Scratch programs do suffer from code smells including large scripts and unmatched broadcast signals.", "title": "" }, { "docid": "neg:1840280_12", "text": "One of the biggest problems of SMEs is their tendency toward financial distress because of an insufficient finance background. In this study, an early warning system (EWS) model based on data mining for financial risk detection is presented. The CHAID algorithm has been used for the development of the EWS. Thanks to its automated nature, the developed EWS can serve as a tailor-made financial advisor in the decision-making process of firms whose owners have an inadequate financial background. In addition, an application of the model was implemented covering 7853 SMEs based on Turkish Central Bank (TCB) 2007 data. By using the EWS model, 31 risk profiles, 15 risk indicators, 2 early warning signals, and 4 financial road maps have been determined for financial risk mitigation. 2011 Elsevier Ltd.
All rights reserved.", "title": "" }, { "docid": "neg:1840280_13", "text": "OBJECTIVES\nTo analyze the Spanish experience in an international study which evaluated tocilizumab in patients with rheumatoid arthritis (RA) and an inadequate response to conventional disease-modifying antirheumatic drugs (DMARDs) or tumor necrosis factor inhibitors (TNFis) in a clinical practice setting.\n\n\nMATERIAL AND METHODS\nSubanalysis of 170 patients with RA from Spain who participated in a phase IIIb, open-label, international clinical trial. Patients presented inadequate response to DMARDs or TNFis. They received 8mg/kg of tocilizumab every 4 weeks in combination with a DMARD or as monotherapy during 20 weeks. Safety and efficacy of tocilizumab were analyzed. Special emphasis was placed on differences between failure to a DMARD or to a TNFi and the need to switch to tocilizumab with or without a washout period in patients who had previously received TNFi.\n\n\nRESULTS\nThe most common adverse events were infections (25%), increased total cholesterol (38%) and transaminases (15%). Five patients discontinued the study due to an adverse event. After six months of tocilizumab treatment, 71/50/30% of patients had ACR 20/50/70 responses, respectively. A higher proportion of TNFi-naive patients presented an ACR20 response: 76% compared to 64% in the TNFi group with previous washout and 66% in the TNFi group without previous washout.\n\n\nCONCLUSIONS\nSafety results were consistent with previous results in patients with RA and an inadequate response to DMARDs or TNFis. Tocilizumab is more effective in patients who did not respond to conventional DMARDs than in patients who did not respond to TNFis.", "title": "" }, { "docid": "neg:1840280_14", "text": "Cardiac arrhythmia is one of the most important indicators of heart disease. Premature ventricular contractions (PVCs) are a common form of cardiac arrhythmia caused by ectopic heartbeats. The detection of PVCs by means of ECG (electrocardiogram) signals is important for the prediction of possible heart failure. This study focuses on the classification of PVC heartbeats from ECG signals and, in particular, on the performance evaluation of selected features using genetic algorithms (GA) to the classification of PVC arrhythmia. The objective of this study is to apply GA as a feature selection method to select the best feature subset from 200 time series features and to integrate these best features to recognize PVC forms. Neural networks, support vector machines and k-nearest neighbour classification algorithms were used. Findings were expressed in terms of accuracy, sensitivity, and specificity for the MIT-BIH Arrhythmia Database. The results showed that the proposed model achieved higher accuracy rates than those of other works on this topic.", "title": "" }, { "docid": "neg:1840280_15", "text": "We propose a novel and flexible anchor mechanism named MetaAnchor for object detection frameworks. Unlike many previous detectors model anchors via a predefined manner, in MetaAnchor anchor functions could be dynamically generated from the arbitrary customized prior boxes. Taking advantage of weight prediction, MetaAnchor is able to work with most of the anchor-based object detection systems such as RetinaNet. Compared with the predefined anchor scheme, we empirically find that MetaAnchor is more robust to anchor settings and bounding box distributions; in addition, it also shows the potential on transfer tasks. 
Our experiment on COCO detection task shows that MetaAnchor consistently outperforms the counterparts in various scenarios.", "title": "" }, { "docid": "neg:1840280_16", "text": "In this paper, we focus on differential privacy preserving spectral graph analysis. Spectral graph analysis deals with the analysis of the spectra (eigenvalues and eigenvector components) of the graph’s adjacency matrix or its variants. We develop two approaches to computing the ε-differential eigen decomposition of the graph’s adjacency matrix. The first approach, denoted as LNPP, is based on the Laplace Mechanism that calibrates Laplace noise on the eigenvalues and every entry of the eigenvectors based on their sensitivities. We derive the global sensitivities of both eigenvalues and eigenvectors based on the matrix perturbation theory. Because the output eigenvectors after perturbation are no longer orthogonormal, we postprocess the output eigenvectors by using the state-of-the-art vector orthogonalization technique. The second approach, denoted as SBMF, is based on the exponential mechanism and the properties of the matrix Bingham-von Mises-Fisher density for network data spectral analysis. We prove that the sampling procedure achieves differential privacy. We conduct empirical evaluation on a real social network data and compare the two approaches in terms of utility preservation (the accuracy of spectra and the accuracy of low rank approximation) under the same differential privacy threshold. Our empirical evaluation results show that LNPP generally incurs smaller utility loss.", "title": "" }, { "docid": "neg:1840280_17", "text": "The development of the concept of burden for use in research lacks consistent conceptualization and operational definitions. The purpose of this article is to analyze the concept of burden in an effort to promote conceptual clarity. The technique advocated by Walker and Avant is used to analyze this concept. Critical attributes of burden include subjective perception, multidimensional phenomena, dynamic change, and overload. Predisposing factors are caregiver's characteristics, the demands of caregivers, and the involvement in caregiving. The consequences of burden generate problems in care-receiver, caregiver, family, and health care system. Overall, this article enables us to advance this concept, identify the different sources of burden, and provide directions for nursing intervention.", "title": "" } ]
1840281
Wide-area scene mapping for mobile visual tracking
[ { "docid": "pos:1840281_0", "text": "A recognition scheme that scales efficiently to a large number of objects is presented. The efficiency and quality is exhibited in a live demonstration that recognizes CD-covers from a database of 40000 images of popular music CD’s. The scheme builds upon popular techniques of indexing descriptors extracted from local regions, and is robust to background clutter and occlusion. The local region descriptors are hierarchically quantized in a vocabulary tree. The vocabulary tree allows a larger and more discriminatory vocabulary to be used efficiently, which we show experimentally leads to a dramatic improvement in retrieval quality. The most significant property of the scheme is that the tree directly defines the quantization. The quantization and the indexing are therefore fully integrated, essentially being one and the same. The recognition quality is evaluated through retrieval on a database with ground truth, showing the power of the vocabulary tree approach, going as high as 1 million images.", "title": "" } ]
[ { "docid": "neg:1840281_0", "text": "The deep convolutional neural networks have achieved significant improvements in accuracy and speed for single image super-resolution. However, as the depth of network grows, the information flow is weakened and the training becomes harder and harder. On the other hand, most of the models adopt a single-stream structure with which integrating complementary contextual information under different receptive fields is difficult. To improve information flow and to capture sufficient knowledge for reconstructing the high-frequency details, we propose a cascaded multi-scale cross network (CMSC) in which a sequence of subnetworks is cascaded to infer high resolution features in a coarse-to-fine manner. In each cascaded subnetwork, we stack multiple multi-scale cross (MSC) modules to fuse complementary multi-scale information in an efficient way as well as to improve information flow across the layers. Meanwhile, by introducing residual-features learning in each stage, the relative information between high-resolution and low-resolution features is fully utilized to further boost reconstruction performance. We train the proposed network with cascaded-supervision and then assemble the intermediate predictions of the cascade to achieve high quality image reconstruction. Extensive quantitative and qualitative evaluations on benchmark datasets illustrate the superiority of our proposed method over state-of-the-art superresolution methods.", "title": "" }, { "docid": "neg:1840281_1", "text": "Detailed 3D visual models of indoor spaces, from walls and floors to objects and their configurations, can provide extensive knowledge about the environments as well as rich contextual information of people living therein. Vision-based 3D modeling has only seen limited success in applications, as it faces many technical challenges that only a few experts understand, let alone solve. In this work we utilize (Kinect style) consumer depth cameras to enable non-expert users to scan their personal spaces into 3D models. We build a prototype mobile system for 3D modeling that runs in real-time on a laptop, assisting and interacting with the user on-the-fly. Color and depth are jointly used to achieve robust 3D registration. The system offers online feedback and hints, tolerates human errors and alignment failures, and helps to obtain complete scene coverage. We show that our prototype system can both scan large environments (50 meters across) and at the same time preserve fine details (centimeter accuracy). The capability of detailed 3D modeling leads to many promising applications such as accurate 3D localization, measuring dimensions, and interactive visualization.", "title": "" }, { "docid": "neg:1840281_2", "text": "Published version ATTWOOD, F. (2005). What do people do with porn? qualitative research into the consumption, use and experience of pornography and other sexually explicit media. Sexuality and culture, 9 (2), 65-86. one copy of any article(s) in SHURA to facilitate their private study or for non-commercial research. You may not engage in further distribution of the material or use it for any profit-making activities or any commercial gain.", "title": "" }, { "docid": "neg:1840281_3", "text": "Software simulation tools supporting a teaching process are highly accepted by both teachers and students. We discuss the possibility of using automata simulators in theoretical computer science courses. 
The main purpose of this article is to propose key features and requirements of a well-designed automata simulator and to present our tool SimStudio -- an integrated simulator of the finite automaton, pushdown automaton, Turing machine, RAM with extension and abacus machine. The aim of this paper is to report our experiences with using our automata simulators in teaching the course \"Fundamentals of Theoretical Computer Science\" in the bachelor educational program in software engineering held at the Faculty of Informatics and Information Technologies, Slovak University of Technology in Bratislava.", "title": "" }, { "docid": "neg:1840281_4", "text": "Optimistic estimates suggest that only 30-70% of waste generated in cities of developing countries is collected for disposal. As a result, uncollected waste is often disposed of into open dumps, along the streets or into water bodies. Quite often, this practice induces environmental degradation and public health risks. Notwithstanding, such practices also make waste materials readily available for itinerant waste pickers. These 'scavengers', as they are called, therefore perceive waste as a resource for income generation. Literature suggests that Informal Sector Recycling (ISR) activity can bring other benefits such as economic growth, litter control and resources conservation. This paper critically reviews trends in ISR activities in selected developing and transition countries. ISR often survives in very hostile social and physical environments largely because of negative Government and public attitude. Rather than being stigmatised, the sector should be recognised as an important element for achievement of sustainable waste management in developing countries. One solution to this problem could be the integration of ISR into the formal waste management system. To achieve ISR integration, this paper highlights six crucial aspects from literature: social acceptance, political will, mobilisation of cooperatives, partnerships with private enterprises, management and technical skills, as well as legal protection measures. It is important to note that not every country will have the wherewithal to achieve social inclusion and so the level of integration must be 'flexible'. In addition, the structure of the ISR should not be based on a 'universal' model but should instead take into account local contexts and conditions.", "title": "" }, { "docid": "neg:1840281_5", "text": "Learning low-dimensional embeddings of knowledge graphs is a powerful approach used to predict unobserved or missing edges between entities. However, an open challenge in this area is developing techniques that can go beyond simple edge prediction and handle more complex logical queries, which might involve multiple unobserved edges, entities, and variables. For instance, given an incomplete biological knowledge graph, we might want to predict what drugs are likely to target proteins involved with both diseases X and Y?—a query that requires reasoning about all possible proteins that might interact with diseases X and Y. Here we introduce a framework to efficiently make predictions about conjunctive logical queries—a flexible but tractable subset of first-order logic—on incomplete knowledge graphs. In our approach, we embed graph nodes in a low-dimensional space and represent logical operators as learned geometric operations (e.g., translation, rotation) in this embedding space.
By performing logical operations within a low-dimensional embedding space, our approach achieves a time complexity that is linear in the number of query variables, compared to the exponential complexity required by a naive enumeration-based approach. We demonstrate the utility of this framework in two application studies on real-world datasets with millions of relations: predicting logical relationships in a network of drug-gene-disease interactions and in a graph-based representation of social interactions derived from a popular web forum.", "title": "" }, { "docid": "neg:1840281_6", "text": "The P value is a measure of statistical evidence that appears in virtually all medical research papers. Its interpretation is made extraordinarily difficult because it is not part of any formal system of statistical inference. As a result, the P value's inferential meaning is widely and often wildly misconstrued, a fact that has been pointed out in innumerable papers and books appearing since at least the 1940s. This commentary reviews a dozen of these common misinterpretations and explains why each is wrong. It also reviews the possible consequences of these improper understandings or representations of its meaning. Finally, it contrasts the P value with its Bayesian counterpart, the Bayes' factor, which has virtually all of the desirable properties of an evidential measure that the P value lacks, most notably interpretability. The most serious consequence of this array of P-value misconceptions is the false belief that the probability of a conclusion being in error can be calculated from the data in a single experiment without reference to external evidence or the plausibility of the underlying mechanism.", "title": "" }, { "docid": "neg:1840281_7", "text": "BACKGROUND\n47 XXY/46 XX mosaicism with characteristics suggesting Klinefelter syndrome is very rare and at present, only seven cases have been reported in the literature.\n\n\nCASE PRESENTATION\nWe report an Indian boy diagnosed as variant of Klinefelter syndrome with 47 XXY/46 XX mosaicism at age 12 years. He was noted to have right cryptorchidism and chordae at birth, but did not have surgery for these until age 3 years. During surgery, the right gonad was atrophic and removed. Histology revealed atrophic ovarian tissue. Pelvic ultrasound showed no Mullerian structures. There was however no clinical follow up and he was raised as a boy. At 12 years old he was re-evaluated because of parental concern about his 'female' body habitus. He was slightly overweight, had eunuchoid body habitus with mild gynaecomastia. The right scrotal sac was empty and a 2mls testis was present in the left scrotum. Penile length was 5.2 cm and width 2.0 cm. There was absent pubic or axillary hair. Pronation and supination of his upper limbs were reduced and x-ray of both elbow joints revealed bilateral radioulnar synostosis. The baseline laboratory data were LH < 0.1 mIU/ml, FSH 1.4 mIU/ml, testosterone 0.6 nmol/L with raised estradiol, 96 pmol/L. HCG stimulation test showed poor Leydig cell response. The karyotype based on 76 cells was 47 XXY[9]/46 XX[67] with SRY positive. 
Laparoscopic examination revealed no Mullerian structures.\n\n\nCONCLUSION\nInsisting on an adequate number of cells (at least 50) to be examined during karyotyping is important so as not to miss diagnosing mosaicism.", "title": "" }, { "docid": "neg:1840281_8", "text": "The rise of blockchain technologies has given a boost to social good projects, which are trying to exploit various characteristic features of blockchains: the quick and inexpensive transfer of cryptocurrency, the transparency of transactions, the ability to tokenize any kind of assets, and the increase in trustworthiness due to decentralization. However, the swift pace of innovation in blockchain technologies, and the hype that has surrounded their \"disruptive potential\", make it difficult to understand whether these technologies are applied correctly, and what one should expect when trying to apply them to social good projects. This paper addresses these issues, by systematically analysing a collection of 120 blockchain-enabled social good projects. Focussing on measurable and objective aspects, we try to answer various relevant questions: which features of blockchains are most commonly used? Do projects have success in fund raising? Are they making appropriate choices on the blockchain architecture? How many projects are released to the public, and how many are eventually abandoned?", "title": "" }, { "docid": "neg:1840281_9", "text": "Quorum-sensing bacteria communicate with extracellular signal molecules called autoinducers. This process allows community-wide synchronization of gene expression. A screen for additional components of the Vibrio harveyi and Vibrio cholerae quorum-sensing circuits revealed the protein Hfq. Hfq mediates interactions between small, regulatory RNAs (sRNAs) and specific messenger RNA (mRNA) targets. These interactions typically alter the stability of the target transcripts. We show that Hfq mediates the destabilization of the mRNA encoding the quorum-sensing master regulators LuxR (V. harveyi) and HapR (V. cholerae), implicating an sRNA in the circuit. Using a bioinformatics approach to identify putative sRNAs, we identified four candidate sRNAs in V. cholerae. The simultaneous deletion of all four sRNAs is required to stabilize hapR mRNA. We propose that Hfq, together with these sRNAs, creates an ultrasensitive regulatory switch that controls the critical transition into the high cell density, quorum-sensing mode.", "title": "" }, { "docid": "neg:1840281_10", "text": "Visual target tracking is one of the major fields in computer vision system. Object tracking has many practical applications such as automated surveillance system, military guidance, traffic management system, fault detection system, artificial intelligence and robot vision system. But it is difficult to track objects with image sensor. Especially, multiple objects tracking is harder than single object tracking. This paper proposes multiple objects tracking algorithm based on the Kalman filter. Our algorithm uses the Kalman filter as many as the number of moving objects in the image frame. If many moving objects exist in the image, however, we obtain multiple measurements. Therefore, precise data association is necessary in order to track multiple objects correctly. Another problem of multiple objects tracking is occlusion that causes merge and split. For solving these problems, this paper defines the cost function using some factors. 
Experiments using Matlab show that the performance of the proposed algorithm is appropriate for multiple objects tracking in real-time.", "title": "" }, { "docid": "neg:1840281_11", "text": "This paper presents Disco, a prototype for supporting knowledge workers in exploring, reviewing and sorting collections of textual data. The goal is to facilitate, accelerate and improve the discovery of information. To this end, it combines Semantic Relatedness techniques with a review workflow developed in a tangible environment. Disco uses a semantic model that is leveraged on-line in the course of search sessions, and accessed through natural hand-gesture, in a simple and intuitive way.", "title": "" }, { "docid": "neg:1840281_12", "text": "In human fingertips, the fingerprint patterns and interlocked epidermal-dermal microridges play a critical role in amplifying and transferring tactile signals to various mechanoreceptors, enabling spatiotemporal perception of various static and dynamic tactile signals. Inspired by the structure and functions of the human fingertip, we fabricated fingerprint-like patterns and interlocked microstructures in ferroelectric films, which can enhance the piezoelectric, pyroelectric, and piezoresistive sensing of static and dynamic mechanothermal signals. Our flexible and microstructured ferroelectric skins can detect and discriminate between multiple spatiotemporal tactile stimuli including static and dynamic pressure, vibration, and temperature with high sensitivities. As proof-of-concept demonstration, the sensors have been used for the simultaneous monitoring of pulse pressure and temperature of artery vessels, precise detection of acoustic sounds, and discrimination of various surface textures. Our microstructured ferroelectric skins may find applications in robotic skins, wearable sensors, and medical diagnostic devices.", "title": "" }, { "docid": "neg:1840281_13", "text": "Reason and emotion have long been considered opposing forces. However, recent psychological and neuroscientific research has revealed that emotion and cognition are closely intertwined. Cognitive processing is needed to elicit emotional responses. At the same time, emotional responses modulate and guide cognition to enable adaptive responses to the environment. Emotion determines how we perceive our world, organise our memory, and make important decisions. In this review, we provide an overview of current theorising and research in the Affective Sciences. We describe how psychological theories of emotion conceptualise the interactions of cognitive and emotional processes. We then review recent research investigating how emotion impacts our perception, attention, memory, and decision-making. Drawing on studies with both healthy participants and clinical populations, we illustrate the mechanisms and neural substrates underlying the interactions of cognition and emotion.", "title": "" }, { "docid": "neg:1840281_14", "text": "Studies using Nomura et al.’s “Negative Attitude toward Robots Scale” (NARS) [1] as an attitudinal measure have featured robots that were perceived to be autonomous, independent agents. State of the art telepresence robots require an explicit human-in-the-loop to drive the robot around. In this paper, we investigate if NARS can be used with telepresence robots. To this end, we conducted three studies in which people watched videos of telepresence robots (n=70), operated telepresence robots (n=38), and interacted with telepresence robots (n=12). 
Overall, the results from our three studies indicated that NARS may be applied to telepresence robots, and culture, gender, and prior robot experience can be influential factors on the NARS score.", "title": "" }, { "docid": "neg:1840281_15", "text": "This paper addresses the problem of simultaneous 3D reconstruction and material recognition and segmentation. Enabling robots to recognise different materials (concrete, metal etc.) in a scene is important for many tasks, e.g. robotic interventions in nuclear decommissioning. Previous work on 3D semantic reconstruction has predominantly focused on recognition of everyday domestic objects (tables, chairs etc.), whereas previous work on material recognition has largely been confined to single 2D images without any 3D reconstruction. Meanwhile, most 3D semantic reconstruction methods rely on computationally expensive post-processing, using Fully-Connected Conditional Random Fields (CRFs), to achieve consistent segmentations. In contrast, we propose a deep learning method which performs 3D reconstruction while simultaneously recognising different types of materials and labeling them at the pixel level. Unlike previous methods, we propose a fully end-to-end approach, which does not require hand-crafted features or CRF post-processing. Instead, we use only learned features, and the CRF segmentation constraints are incorporated inside the fully end-to-end learned system. We present the results of experiments, in which we trained our system to perform real-time 3D semantic reconstruction for 23 different materials in a real-world application. The run-time performance of the system can be boosted to around 10Hz, using a conventional GPU, which is enough to achieve realtime semantic reconstruction using a 30fps RGB-D camera. To the best of our knowledge, this work is the first real-time end-to-end system for simultaneous 3D reconstruction and material recognition.", "title": "" }, { "docid": "neg:1840281_16", "text": "Over the past decade, Deep Artificial Neural Networks (DNNs) have become the state-of-the-art algorithms in Machine Learning (ML), speech recognition, computer vision, natural language processing and many other tasks. This was made possible by the advancement in Big Data, Deep Learning (DL) and drastically increased chip processing abilities, especially general-purpose graphical processing units (GPGPUs). All this has created a growing interest in making the most of the potential offered by DNNs in almost every field. An overview of the main architectures of DNNs, and their usefulness in Pharmacology and Bioinformatics are presented in this work. The featured applications are: drug design, virtual screening (VS), Quantitative Structure-Activity Relationship (QSAR) research, protein structure prediction and genomics (and other omics) data mining. The future need of neuromorphic hardware for DNNs is also discussed, and the two most advanced chips are reviewed: IBM TrueNorth and SpiNNaker. In addition, this review points out the importance of considering not only neurons, as DNNs and neuromorphic chips should also include glial cells, given the proven importance of astrocytes, a type of glial cell which contributes to information processing in the brain. 
The Deep Artificial Neuron-Astrocyte Networks (DANAN) could overcome the difficulties in architecture design, learning process and scalability of the current ML methods.", "title": "" }, { "docid": "neg:1840281_17", "text": "This work attempts to address two fundamental questions about the structure of the convolutional neural networks (CNN): 1) why a nonlinear activation function is essential at the filter output of all intermediate layers? 2) what is the advantage of the two-layer cascade system over the one-layer system? A mathematical model called the “REctified-COrrelations on a Sphere” (RECOS) is proposed to answer these two questions. After the CNN training process, the converged filter weights define a set of anchor vectors in the RECOS model. Anchor vectors represent the frequently occurring patterns (or the spectral components). The necessity of rectification is explained using the RECOS model. Then, the behavior of a two-layer RECOS system is analyzed and compared with its one-layer counterpart. The LeNet-5 and the MNIST dataset are used to illustrate discussion points. Finally, the RECOS model is generalized to a multilayer system with the AlexNet as an example.", "title": "" }, { "docid": "neg:1840281_18", "text": "Changes in synaptic connections are considered essential for learning and memory formation. However, it is unknown how neural circuits undergo continuous synaptic changes during learning while maintaining lifelong memories. Here we show, by following postsynaptic dendritic spines over time in the mouse cortex, that learning and novel sensory experience lead to spine formation and elimination by a protracted process. The extent of spine remodelling correlates with behavioural improvement after learning, suggesting a crucial role of synaptic structural plasticity in memory formation. Importantly, a small fraction of new spines induced by novel experience, together with most spines formed early during development and surviving experience-dependent elimination, are preserved and provide a structural basis for memory retention throughout the entire life of an animal. These studies indicate that learning and daily sensory experience leave minute but permanent marks on cortical connections and suggest that lifelong memories are stored in largely stably connected synaptic networks.", "title": "" }, { "docid": "neg:1840281_19", "text": "Electronic Support Measures (ESM) system is an important function of electronic warfare which provides the real time projection of radar activities. Such systems may encounter with very high density pulse sequences and it is the main task of an ESM system to deinterleave these mixed pulse trains with high accuracy and minimum computation time. These systems heavily depend on time of arrival analysis and need efficient clustering algorithms to assist deinterleaving process in modern evolving environments. On the other hand, self organizing neural networks stand very promising for this type of radar pulse clustering. In this study, performances of self organizing neural networks that meet such clustering criteria are evaluated in detail and the results are presented.", "title": "" } ]
1840282
Very High Frame Rate Volumetric Integration of Depth Images on Mobile Devices
[ { "docid": "pos:1840282_0", "text": "In this paper, we propose a complete on-device 3D reconstruction pipeline for mobile monocular hand-held devices, which generates dense 3D models with absolute scale on-site while simultaneously supplying the user with real-time interactive feedback. The method fills a gap in current cloud-based mobile reconstruction services as it ensures at capture time that the acquired image set fulfills desired quality and completeness criteria. In contrast to existing systems, the developed framework offers multiple innovative solutions. In particular, we investigate the usability of the available on-device inertial sensors to make the tracking and mapping process more resilient to rapid motions and to estimate the metric scale of the captured scene. Moreover, we propose an efficient and accurate scheme for dense stereo matching which allows to reduce the processing time to interactive speed. We demonstrate the performance of the reconstruction pipeline on multiple challenging indoor and outdoor scenes of different size and depth variability.", "title": "" }, { "docid": "pos:1840282_1", "text": "MonoFusion allows a user to build dense 3D reconstructions of their environment in real-time, utilizing only a single, off-the-shelf web camera as the input sensor. The camera could be one already available in a tablet, phone, or a standalone device. No additional input hardware is required. This removes the need for power intensive active sensors that do not work robustly in natural outdoor lighting. Using the input stream of the camera we first estimate the 6DoF camera pose using a sparse tracking method. These poses are then used for efficient dense stereo matching between the input frame and a key frame (extracted previously). The resulting dense depth maps are directly fused into a voxel-based implicit model (using a computationally inexpensive method) and surfaces are extracted per frame. The system is able to recover from tracking failures as well as filter out geometrically inconsistent noise from the 3D reconstruction. Our method is both simple to implement and efficient, making such systems even more accessible. This paper details the algorithmic components that make up our system and a GPU implementation of our approach. Qualitative results demonstrate high quality reconstructions even visually comparable to active depth sensor-based systems such as KinectFusion.", "title": "" }, { "docid": "pos:1840282_2", "text": "Online 3D reconstruction is gaining newfound interest due to the availability of real-time consumer depth cameras. The basic problem takes live overlapping depth maps as input and incrementally fuses these into a single 3D model. This is challenging particularly when real-time performance is desired without trading quality or scale. We contribute an online system for large and fine scale volumetric reconstruction based on a memory and speed efficient data structure. Our system uses a simple spatial hashing scheme that compresses space, and allows for real-time access and updates of implicit surface data, without the need for a regular or hierarchical grid data structure. Surface data is only stored densely where measurements are observed. Additionally, data can be streamed efficiently in or out of the hash table, allowing for further scalability during sensor motion. We show interactive reconstructions of a variety of scenes, reconstructing both fine-grained details and large scale environments. 
We illustrate how all parts of our pipeline from depth map pre-processing, camera pose estimation, depth map fusion, and surface rendering are performed at real-time rates on commodity graphics hardware. We conclude with a comparison to current state-of-the-art online systems, illustrating improved performance and reconstruction quality.", "title": "" }, { "docid": "pos:1840282_3", "text": "We present a direct monocular visual odometry system which runs in real-time on a smartphone. Being a direct method, it tracks and maps on the images themselves instead of extracted features such as keypoints. New images are tracked using direct image alignment, while geometry is represented in the form of a semi-dense depth map. Depth is estimated by filtering over many small-baseline, pixel-wise stereo comparisons. This leads to significantly less outliers and allows to map and use all image regions with sufficient gradient, including edges. We show how a simple world model for AR applications can be derived from semi-dense depth maps, and demonstrate the practical applicability in the context of an AR application in which simulated objects can collide with real geometry.", "title": "" } ]
[ { "docid": "neg:1840282_0", "text": "Research in texture recognition often concentrates on the problem of material recognition in uncluttered conditions, an assumption rarely met by applications. In this work we conduct a first study of material and describable texture attributes recognition in clutter, using a new dataset derived from the OpenSurface texture repository. Motivated by the challenge posed by this problem, we propose a new texture descriptor, FV-CNN, obtained by Fisher Vector pooling of a Convolutional Neural Network (CNN) filter bank. FV-CNN substantially improves the state-of-the-art in texture, material and scene recognition. Our approach achieves 79.8% accuracy on Flickr material dataset and 81% accuracy on MIT indoor scenes, providing absolute gains of more than 10% over existing approaches. FV-CNN easily transfers across domains without requiring feature adaptation as for methods that build on the fully-connected layers of CNNs. Furthermore, FV-CNN can seamlessly incorporate multi-scale information and describe regions of arbitrary shapes and sizes. Our approach is particularly suited at localizing “stuff” categories and obtains state-of-the-art results on MSRC segmentation dataset, as well as promising results on recognizing materials and surface attributes in clutter on the OpenSurfaces dataset.", "title": "" }, { "docid": "neg:1840282_1", "text": "Heterogeneous networks are widely used to model real-world semi-structured data. The key challenge of learning over such networks is the modeling of node similarity under both network structures and contents. To deal with network structures, most existing works assume a given or enumerable set of meta-paths and then leverage them for the computation of meta-path-based proximities or network embeddings. However, expert knowledge for given meta-paths is not always available, and as the length of considered meta-paths increases, the number of possible paths grows exponentially, which makes the path searching process very costly. On the other hand, while there are often rich contents around network nodes, they have hardly been leveraged to further improve similarity modeling. In this work, to properly model node similarity in content-rich heterogeneous networks, we propose to automatically discover useful paths for pairs of nodes under both structural and content information. To this end, we combine continuous reinforcement learning and deep content embedding into a novel semi-supervised joint learning framework. Specifically, the supervised reinforcement learning component explores useful paths between a small set of example similar pairs of nodes, while the unsupervised deep embedding component captures node contents and enables inductive learning on the whole network. The two components are jointly trained in a closed loop to mutually enhance each other. Extensive experiments on three real-world heterogeneous networks demonstrate the supreme advantages of our algorithm.", "title": "" }, { "docid": "neg:1840282_2", "text": "One of the major drawbacks of magnetic resonance imaging (MRI) has been the lack of a standard and quantifiable interpretation of image intensities. Unlike in other modalities, such as X-ray computerized tomography, MR images taken for the same patient on the same scanner at different times may appear different from each other due to a variety of scanner-dependent variations and, therefore, the absolute intensity values do not have a fixed meaning. 
The authors have devised a two-step method wherein all images (independent of patients and the specific brand of the MR scanner used) can be transformed in such a way that for the same protocol and body region, in the transformed images similar intensities will have similar tissue meaning. Standardized images can be displayed with fixed windows without the need for per-case adjustment. More importantly, extraction of quantitative information about healthy organs or about abnormalities can be considerably simplified. This paper introduces and compares new variants of this standardizing method that can help to overcome some of the problems with the original method.", "title": "" }, { "docid": "neg:1840282_3", "text": "A long-standing problem at the interface of artificial intelligence and applied mathematics is to devise an algorithm capable of achieving human level or even superhuman proficiency in transforming observed data into predictive mathematical models of the physical world. In the current era of abundance of data and advanced machine learning capabilities, the natural question arises: How can we automatically uncover the underlying laws of physics from high-dimensional data generated from experiments? In this work, we put forth a deep learning approach for discovering nonlinear partial differential equations from scattered and potentially noisy observations in space and time. Specifically, we approximate the unknown solution as well as the nonlinear dynamics by two deep neural networks. The first network acts as a prior on the unknown solution and essentially enables us to avoid numerical differentiations which are inherently ill-conditioned and unstable. The second network represents the nonlinear dynamics and helps us distill the mechanisms that govern the evolution of a given spatiotemporal data-set. We test the effectiveness of our approach for several benchmark problems spanning a number of scientific domains and demonstrate how the proposed framework can help us accurately learn the underlying dynamics and forecast future states of the system. In particular, we study the Burgers’, Korteweg-de Vries (KdV), Kuramoto-Sivashinsky, nonlinear Schrödinger, and Navier-Stokes equations.", "title": "" }, { "docid": "neg:1840282_4", "text": "We investigate the task of Named Entity Recognition (NER) in the domain of biomedical text. There is little published work employing modern neural network techniques in this domain, probably due to the small sizes of human-labeled data sets, as non-trivial neural models would have great difficulty avoiding overfitting. In this work we follow a semi-supervised learning approach: We first train state-of-the-art (deep) neural networks on a large corpus of noisy machine-labeled data, then “transfer” and fine-tune the learned model on two higher-quality human-labeled data sets. This approach yields higher performance than the current best published systems for the class DISEASE. It trails but is not far from the currently best systems for the class CHEM.", "title": "" }, { "docid": "neg:1840282_5", "text": "A new super-concentrated aqueous electrolyte is proposed by introducing a second lithium salt. The resultant ultra-high concentration of 28 m led to more effective formation of a protective interphase on the anode along with further suppression of water activities at both anode and cathode surfaces.
The improved electrochemical stability allows the use of TiO2 as the anode material, and a 2.5 V aqueous Li-ion cell based on LiMn2O4 and carbon-coated TiO2 delivered the unprecedented energy density of 100 Wh kg(-1) for rechargeable aqueous Li-ion cells, along with excellent cycling stability and high coulombic efficiency. It has been demonstrated that the introduction of a second salt into the \"water-in-salt\" electrolyte further pushed the energy densities of aqueous Li-ion cells closer to those of the state-of-the-art Li-ion batteries.", "title": "" },
{ "docid": "neg:1840282_6", "text": "One of the core questions when designing modern Natural Language Processing (NLP) systems is how to model input textual data such that the learning algorithm is provided with enough information to estimate accurate decision functions. The mainstream approach is to represent input objects as feature vectors where each value encodes some of their aspects, e.g., syntax, semantics, etc. Feature-based methods have demonstrated state-of-the-art results on various NLP tasks. However, designing good features is a highly empirical-driven process; it greatly depends on a task requiring a significant amount of domain expertise. Moreover, extracting features for complex NLP tasks often requires expensive pre-processing steps running a large number of linguistic tools while relying on external knowledge sources that are often not available or hard to get. Hence, this process is not cheap and often constitutes one of the major challenges when attempting a new task or adapting to a different language or domain. The problem of modelling input objects is even more acute in cases when the input examples are not just single objects but pairs of objects, such as in various learning to rank problems in Information Retrieval and Natural Language Processing. An alternative to feature-based methods is using kernels which are essentially nonlinear functions mapping input examples into some high dimensional space thus allowing for learning decision functions with higher discriminative power. Kernels implicitly generate a very large number of features computing similarity between input examples in that implicit space. A well-designed kernel function can greatly reduce the effort to design a large set of manually designed features often leading to superior results. However, in recent years, the use of kernel methods in NLP has been greatly underestimated primarily due to the following reasons: (i) learning with kernels is slow as it requires carrying out optimization in the dual space leading to quadratic complexity; (ii) applying kernels to the input objects encoded with vanilla structures, e.g., generated by syntactic parsers, often yields minor improvements over carefully designed feature-based methods. In this thesis, we adopt the kernel learning approach for solving complex NLP tasks and primarily focus on solutions to the aforementioned problems posed by the use of kernels. In particular, we design novel learning algorithms for training Support Vector Machines with structural kernels, e.g., tree kernels, considerably speeding up the training over the conventional SVM training methods. We show that using the training algorithms developed in this thesis allows for training tree kernel models on large-scale datasets containing millions of instances, which was not possible before. 
Next, we focus on the problem of designing input structures that are fed to tree kernel functions to automatically generate a large set of tree-fragment features. We demonstrate that previously used plain structures generated by syntactic parsers, e.g., syntactic or dependency trees, are often a poor choice thus compromising the expressivity offered by a tree kernel learning framework. We propose several effective design patterns of the input tree structures for various NLP tasks ranging from sentiment analysis to answer passage reranking. The central idea is to inject additional semantic information relevant for the task directly into the tree nodes and let the expressive kernels generate rich feature spaces. For the opinion mining tasks, the additional semantic information injected into tree nodes can be word polarity labels, while for more complex tasks of modelling text pairs the relational information about overlapping words in a pair appears to significantly improve the accuracy of the resulting models. Finally, we observe that both feature-based and kernel methods typically treat words as atomic units where matching different yet semantically similar words is problematic. Conversely, the idea of distributional approaches to model words as vectors is much more effective in establishing a semantic match between words and phrases. While tree kernel functions do allow for a more flexible matching between phrases and sentences through matching their syntactic contexts, their representation can not be tuned on the training set as it is possible with distributional approaches. Recently, deep learning approaches have been applied to generalize the distributional word matching problem to matching sentences taking it one step further by learning the optimal sentence representations for a given task. Deep neural networks have already claimed state-of-the-art performance in many computer vision, speech recognition, and natural language tasks. Following this trend, this thesis also explores the virtue of deep learning architectures for modelling input texts and text pairs where we build on some of the ideas to model input objects proposed within the tree kernel learning framework. In particular, we explore the idea of relational linking (proposed in the preceding chapters to encode text pairs using linguistic tree structures) to design a state-of-the-art deep learning architecture for modelling text pairs. We compare the proposed deep learning models that require even less manual intervention in the feature design process then previously described tree kernel methods that already offer a very good trade-off between the feature-engineering effort and the expressivity of the resulting representation. Our deep learning models demonstrate the state-of-the-art performance on a recent benchmark for Twitter Sentiment Analysis, Answer Sentence Selection and Microblog retrieval.", "title": "" }, { "docid": "neg:1840282_7", "text": "Speech activity detection (SAD) on channel transmissions is a critical preprocessing task for speech, speaker and language recognition or for further human analysis. This paper presents a feature combination approach to improve SAD on highly channel degraded speech as part of the Defense Advanced Research Projects Agency’s (DARPA) Robust Automatic Transcription of Speech (RATS) program. 
The key contribution is the feature combination exploration of different novel SAD features based on pitch and spectro-temporal processing and the standard Mel Frequency Cepstral Coefficients (MFCC) acoustic feature. The SAD features are: (1) a GABOR feature representation, followed by a multilayer perceptron (MLP); (2) a feature that combines multiple voicing features and spectral flux measures (Combo); (3) a feature based on subband autocorrelation (SAcC) and MLP postprocessing and (4) a multiband comb-filter F0 (MBCombF0) voicing measure. We present single, pairwise and all feature combinations, show high error reductions from pairwise feature level combination over the MFCC baseline and show that the best performance is achieved by the combination of all features.", "title": "" }, { "docid": "neg:1840282_8", "text": "The continuing growth of World Wide Web and on-line text collections makes a large volume of information available to users. Automatic text summarization allows users to quickly understand documents. In this paper, we propose an automated technique for single document summarization which combines content-based and graph-based approaches and introduce the Hopfield network algorithm as a technique for ranking text segments. A series of experiments are performed using the DUC collection and a Thai-document collection. The results show the superiority of the proposed technique over reference systems, in addition the Hopfield network algorithm on undirected graph is shown to be the best text segment ranking algorithm in the study", "title": "" }, { "docid": "neg:1840282_9", "text": "A 32-year-old pregnant woman in the 25th week of pregnancy underwent oral glucose tolerance screening at the diabetologist's. Later that day, she was found dead in her apartment possibly poisoned with Chlumsky disinfectant solution (solutio phenoli camphorata). An autopsy revealed chemical burns in the digestive system. The lungs and the brain showed signs of severe edema. The blood of the woman and fetus was analyzed using gas chromatography with mass spectrometry and revealed phenol, its metabolites (phenyl glucuronide and phenyl sulfate) and camphor. No ethanol was found in the blood samples. Both phenol and camphor are contained in Chlumsky disinfectant solution, which is used for disinfecting surgical equipment in healthcare facilities. Further investigation revealed that the deceased woman had been accidentally administered a disinfectant instead of a glucose solution by the nurse, which resulted in acute intoxication followed by the death of the pregnant woman and the fetus.", "title": "" }, { "docid": "neg:1840282_10", "text": "Variants of Hirschsprung disease are conditions that clinically resemble Hirschsprung disease, despite the presence of ganglion cells in rectal suction biopsies. The characterization and differentiation of various entities are mainly based on histologic, immunohistochemical, and electron microscopy findings of biopsies from patients with functional intestinal obstruction. Intestinal neuronal dysplasia is histologically characterized by hyperganglionosis, giant ganglia, and ectopic ganglion cells. In most intestinal neuronal dysplasia cases, conservative treatments such as laxatives and enema are sufficient. Some patients may require internal sphincter myectomy. 
Patients with the diagnosis of isolated hypoganglionosis show decreased numbers of nerve cells, decreased plexus area, as well as increased distance between ganglia in rectal biopsies, and resection of the affected segment has been the treatment of choice. The diagnosis of internal anal sphincter achalasia is based on abnormal rectal manometry findings, whereas rectal suction biopsies display presence of ganglion cells as well as normal acetylcholinesterase activity. Internal anal sphincter achalasia is either treated by internal sphincter myectomy or botulinum toxin injection. Megacystis microcolon intestinal hypoperistalsis is a rare condition, and the most severe form of functional intestinal obstruction in the newborn. Megacystis microcolon intestinal hypoperistalsis is characterized by massive abdominal distension caused by a largely dilated nonobstructed bladder, microcolon, and decreased or absent intestinal peristalsis. Although the outcome has improved in recent years, survivors have to be either maintained by total parenteral nutrition or have undergone multivisceral transplant. This review article summarizes the current knowledge of the aforementioned entities of variant HD.", "title": "" }, { "docid": "neg:1840282_11", "text": "170 undergraduate students completed the Boredom Proneness Scale by Farmer and Sundberg and the Multiple Affect Adjective Checklist by Zuckerman and Lubin. Significant negative relationships were found between boredom proneness and negative affect scores (i.e., Depression, Hostility, Anxiety). Significant positive correlations also obtained between boredom proneness and positive affect (i.e., Positive Affect, Sensation Seeking). The correlations between boredom proneness \"subscales\" and positive and negative affect were congruent with those obtained using total boredom proneness scores. Implications for counseling are discussed.", "title": "" }, { "docid": "neg:1840282_12", "text": "With object storage services becoming increasingly accepted as replacements for traditional file or block systems, it is important to effectively measure the performance of these services. Thus people can compare different solutions or tune their systems for better performance. However, little has been reported on this specific topic as yet. To address this problem, we present COSBench (Cloud Object Storage Benchmark), a benchmark tool that we are currently working on in Intel for cloud object storage services. In addition, in this paper, we also share the results of the experiments we have performed so far.", "title": "" }, { "docid": "neg:1840282_13", "text": "This paper demonstrates six-metal-layer antenna-to-receiver signal transitions on panel-scale processed ultra-thin glass-based 5G module substrates with 50-Ω transmission lines and micro-via transitions in re-distribution layers. The glass modules consist of low-loss dielectric thin-films laminated on 100-μm glass cores. Modeling, design, fabrication, and characterization of the multilayered signal interconnects were performed at 28-GHz band. The surface planarity and dimensional stability of glass substrates enabled the fabrication of highly-controlled signal traces with tolerances of 2% inside the re-distribution layers on low-loss dielectric build-up thin-films. The fabricated transmission lines showed 0.435 dB loss with 4.19 mm length, while microvias in low-loss dielectric thin-films showed 0.034 dB/microvia. 
The superiority of glass substrates enable low-loss link budget with high precision from chip to antenna for 5G communications.", "title": "" }, { "docid": "neg:1840282_14", "text": "We present a simple zero-knowledge proof of knowledge protocol of which many protocols in the literature are instantiations. These include Schnorr’s protocol for proving knowledge of a discrete logarithm, the Fiat-Shamir and Guillou-Quisquater protocols for proving knowledge of a modular root, protocols for proving knowledge of representations (like Okamoto’s protocol), protocols for proving equality of secret values, a protocol for proving the correctness of a Diffie-Hellman key, protocols for proving the multiplicative relation of three commitments (as required in secure multi-party computation), and protocols used in credential systems. This shows that a single simple treatment (and proof), at a high level of abstraction, can replace the individual previous treatments. Moreover, one can devise new instantiations of the protocol.", "title": "" }, { "docid": "neg:1840282_15", "text": "Edge preserving filters preserve the edges and its information while blurring an image. In other words they are used to smooth an image, while reducing the edge blurring effects across the edge like halos, phantom etc. They are nonlinear in nature. Examples are bilateral filter, anisotropic diffusion filter, guided filter, trilateral filter etc. Hence these family of filters are very useful in reducing the noise in an image making it very demanding in computer vision and computational photography applications like denoising, video abstraction, demosaicing, optical-flow estimation, stereo matching, tone mapping, style transfer, relighting etc. This paper provides a concrete introduction to edge preserving filters starting from the heat diffusion equation in olden to recent eras, an overview of its numerous applications, as well as mathematical analysis, various efficient and optimized ways of implementation and their interrelationships, keeping focus on preserving the boundaries, spikes and canyons in presence of noise. Furthermore it provides a realistic notion for efficient implementation with a research scope for hardware realization for further acceleration.", "title": "" }, { "docid": "neg:1840282_16", "text": "The paper describes a 2D sound source mapping system for a mobile robot. We developed a multiple sound sources localization method for a mobile robot with a 32 channel concentric microphone array. The system can separate multiple moving sound sources using direction localization. Directional localization and separation of different pressure sound sources is achieved using the delay and sum beam forming (DSBF) and the frequency band selection (FBS) algorithm. Sound sources were mapped by using a wheeled robot equipped with the microphone array. The robot localizes sounds direction on the move and estimates sound sources position using triangulation. Assuming the movement of sound sources, the system set a time limit and uses only the last few seconds data. By using the random sample consensus (RANSAC) algorithm for position estimation, we achieved 2D multiple sound source mapping from time limited data with high accuracy. 
Also, moving sound source separation is experimentally demonstrated with segments of the DSBF enhanced signal derived from the localization process", "title": "" }, { "docid": "neg:1840282_17", "text": "The degree to which perceptual awareness of threat stimuli and bodily states of arousal modulates neural activity associated with fear conditioning is unknown. We used functional magnetic neuroimaging (fMRI) to study healthy subjects and patients with peripheral autonomic denervation to examine how the expression of conditioning-related activity is modulated by stimulus awareness and autonomic arousal. In controls, enhanced amygdala activity was evident during conditioning to both \"seen\" (unmasked) and \"unseen\" (backward masked) stimuli, whereas insula activity was modulated by perceptual awareness of a threat stimulus. Absent peripheral autonomic arousal, in patients with autonomic denervation, was associated with decreased conditioning-related activity in insula and amygdala. The findings indicate that the expression of conditioning-related neural activity is modulated by both awareness and representations of bodily states of autonomic arousal.", "title": "" }, { "docid": "neg:1840282_18", "text": "The Internet of Things (IoT) is intended for ubiquitous connectivity among different entities or “things”. While its purpose is to provide effective and efficient solutions, security of the devices and network is a challenging issue. The number of devices connected along with the ad-hoc nature of the system further exacerbates the situation. Therefore, security and privacy has emerged as a significant challenge for the IoT. In this paper, we aim to provide a thorough survey related to the privacy and security challenges of the IoT. This document addresses these challenges from the perspective of technologies and architecture used. This work focuses also in IoT intrinsic vulnerabilities as well as the security challenges of various layers based on the security principles of data confidentiality, integrity and availability. This survey analyzes articles published for the IoT at the time and relates it to the security conjuncture of the field and its projection to the future.", "title": "" }, { "docid": "neg:1840282_19", "text": "Steep, soil-mantled hillslopes evolve through the downslope movement of soil, driven largely by slope-dependent ransport processes. Most landscape evolution models represent hillslope transport by linear diffusion, in which rates of sediment transport are proportional to slope, such that equilibrium hillslopes should have constant curvature between divides and channels. On many soil-mantled hillslopes, however, curvature appears to vary systematically, such that slopes are typically convex near the divide and become increasingly planar downslope. This suggests that linear diffusion is not an adequate model to describe the entire morphology of soil-mantled hillslopes. Here we show that the interaction between local disturbances (such as rainsplash and biogenic activity) and frictional and gravitational forces results in a diffusive transport law that depends nonlinearly on hillslope gradient. Our proposed transport law (1) approximates linear diffusion at low gradients and (2) indicates that sediment flux increases rapidly as gradient approaches a critical value. We calibrated and tested this transport law using high-resolution topographic data from the Oregon Coast Range. 
These data, obtained by airborne laser altimetry, allow us to characterize hillslope morphology at ~2 m scale. At five small basins in our study area, hillslope curvature approaches zero with increasing gradient, consistent with our proposed nonlinear diffusive transport law. Hillslope gradients tend to cluster near values for which sediment flux increases rapidly with slope, such that large changes in erosion rate will correspond to small changes in gradient. Therefore average hillslope gradient is unlikely to be a reliable indicator of rates of tectonic forcing or baselevel lowering. Where hillslope erosion is dominated by nonlinear diffusion, rates of tectonic forcing will be more reliably reflected in hillslope curvature near the divide rather than average hillslope gradient.", "title": "" } ]
1840283
Estimation of Arrival Flight Delay and Delay Propagation in a Busy Hub-Airport
[ { "docid": "pos:1840283_0", "text": "The National Airspace System (NAS) is a large and complex system with thousands of interrelated components: administration, control centers, airports, airlines, aircraft, passengers, etc. The complexity of the NAS creates many difficulties in management and control. One of the most pressing problems is flight delay. Delay creates high cost to airlines, complaints from passengers, and difficulties for airport operations. As demand on the system increases, the delay problem becomes more and more prominent. For this reason, it is essential for the Federal Aviation Administration to understand the causes of delay and to find ways to reduce delay. Major contributing factors to delay are congestion at the origin airport, weather, increasing demand, and air traffic management (ATM) decisions such as the Ground Delay Programs (GDP). Delay is an inherently stochastic phenomenon. Even if all known causal factors could be accounted for, macro-level national airspace system (NAS) delays could not be predicted with certainty from micro-level aircraft information. This paper presents a stochastic model that uses Bayesian Networks (BNs) to model the relationships among different components of aircraft delay and the causal factors that affect delays. A case study on delays of departure flights from Chicago O’Hare international airport (ORD) to Hartsfield-Jackson Atlanta International Airport (ATL) reveals how local and system level environmental and human-caused factors combine to affect components of delay, and how these components contribute to the final arrival delay at the destination airport.", "title": "" } ]
[ { "docid": "neg:1840283_0", "text": "This paper describes an <i>analogy ontology</i>, a formal representation of some key ideas in analogical processing, that supports the integration of analogical processing with first-principles reasoners. The ontology is based on Gentner's <i>structure-mapping</i> theory, a psychological account of analogy and similarity. The semantics of the ontology are enforced via procedural attachment, using cognitive simulations of structure-mapping to provide analogical processing services. Queries that include analogical operations can be formulated in the same way as standard logical inference, and analogical processing systems in turn can call on the services of first-principles reasoners for creating cases and validating their conjectures. We illustrate the utility of the analogy ontology by demonstrating how it has been used in three systems: A crisis management analogical reasoner that answers questions about international incidents, a course of action analogical critiquer that provides feedback about military plans, and a comparison question-answering system for knowledge capture. These systems rely on large, general-purpose knowledge bases created by other research groups, thus demonstrating the generality and utility of these ideas.", "title": "" }, { "docid": "neg:1840283_1", "text": "In recent years research on human activity recognition using wearable sensors has enabled to achieve impressive results on real-world data. However, the most successful activity recognition algorithms require substantial amounts of labeled training data. The generation of this data is not only tedious and error prone but also limits the applicability and scalability of today's approaches. This paper explores and systematically analyzes two different techniques to significantly reduce the required amount of labeled training data. The first technique is based on semi-supervised learning and uses self-training and co-training. The second technique is inspired by active learning. In this approach the system actively asks which data the user should label. With both techniques, the required amount of training data can be reduced significantly while obtaining similar and sometimes even better performance than standard supervised techniques. The experiments are conducted using one of the largest and richest currently available datasets.", "title": "" }, { "docid": "neg:1840283_2", "text": "Nanonetworks, i.e., networks of nano-sized devices, are the enabling technology of long-awaited applications in the biological, industrial and military fields. For the time being, the size and power constraints of nano-devices limit the applicability of classical wireless communication in nanonetworks. Alternatively, nanomaterials can be used to enable electromagnetic (EM) communication among nano-devices. In this paper, a novel graphene-based nano-antenna, which exploits the behavior of Surface Plasmon Polariton (SPP) waves in semi-finite size Graphene Nanoribbons (GNRs), is proposed, modeled and analyzed. First, the conductivity of GNRs is analytically and numerically studied by starting from the Kubo formalism to capture the impact of the electron lateral confinement in GNRs. Second, the propagation of SPP waves in GNRs is analytically and numerically investigated, and the SPP wave vector and propagation length are computed. Finally, the nano-antenna is modeled as a resonant plasmonic cavity, and its frequency response is determined. 
The results show that, by exploiting the high mode compression factor of SPP waves in GNRs, graphene-based plasmonic nano-antennas are able to operate at much lower frequencies than their metallic counterparts, e.g., the Terahertz Band for a one-micrometer-long ten-nanometers-wide antenna. This result has the potential to enable EM communication in nanonetworks.", "title": "" }, { "docid": "neg:1840283_3", "text": "To improve FPGA performance for arithmetic circuits that are dominated by multi-input addition operations, an FPGA logic block is proposed that can be configured as a 6:2 or 7:2 compressor. Compressors have been used successfully in the past to realize parallel multipliers in VLSI technology; however, the peculiar structure of FPGA logic blocks, coupled with the high cost of the routing network relative to ASIC technology, renders compressors ineffective when mapped onto the general logic of an FPGA. On the other hand, current FPGA logic cells have already been enhanced with carry chains to improve arithmetic functionality, for example, to realize fast ternary carry-propagate addition. The contribution of this article is a new FPGA logic cell that is specialized to help realize efficient compressor trees on FPGAs. The new FPGA logic cell has two variants that can respectively be configured as a 6:2 or a 7:2 compressor using additional carry chains that, coupled with lookup tables, provide the necessary functionality. Experiments show that the use of these modified logic cells significantly reduces the delay of compressor trees synthesized on FPGAs compared to state-of-the-art synthesis techniques, with a moderate increase in area and power consumption.", "title": "" }, { "docid": "neg:1840283_4", "text": "We consider retrieving a specific temporal segment, or moment, from a video given a natural language text description. Methods designed to retrieve whole video clips with natural language determine what occurs in a video but not when. To address this issue, we propose the Moment Context Network (MCN) which effectively localizes natural language queries in videos by integrating local and global video features over time. A key obstacle to training our MCN model is that current video datasets do not include pairs of localized video segments and referring expressions, or text descriptions which uniquely identify a corresponding moment. Therefore, we collect the Distinct Describable Moments (DiDeMo) dataset which consists of over 10,000 unedited, personal videos in diverse visual settings with pairs of localized video segments and referring expressions. We demonstrate that MCN outperforms several baseline methods and believe that our initial results together with the release of DiDeMo will inspire further research on localizing video moments with natural language.", "title": "" }, { "docid": "neg:1840283_5", "text": "Relation Extraction is an important subtask of Information Extraction which has the potential of employing deep learning (DL) models with the creation of large datasets using distant supervision. In this review, we compare the contributions and pitfalls of the various DL models that have been used for the task, to help guide the path ahead.", "title": "" }, { "docid": "neg:1840283_6", "text": "In common Web-based search interfaces, it can be difficult to formulate queries that simultaneously combine temporal, spatial, and topical data filters. 
We investigate how coordinated visualizations can enhance search and exploration of information on the World Wide Web by easing the formulation of these types of queries. Drawing from visual information seeking and exploratory search, we introduce VisGets - interactive query visualizations of Web-based information that operate with online information within a Web browser. VisGets provide the information seeker with visual overviews of Web resources and offer a way to visually filter the data. Our goal is to facilitate the construction of dynamic search queries that combine filters from more than one data dimension. We present a prototype information exploration system featuring three linked VisGets (temporal, spatial, and topical), and used it to visually explore news items from online RSS feeds.", "title": "" }, { "docid": "neg:1840283_7", "text": "Surprisingly little is understood about the physiologic and pathologic processes that involve intraoral sebaceous glands. Neoplasms are rare. Hyperplasia of these glands is undoubtedly more common, but criteria for the diagnosis of intraoral sebaceous hyperplasia have not been established. These lesions are too often misdiagnosed as large \"Fordyce granules\" or, when very large, as sebaceous adenomas. On the basis of a series of 31 nonneoplastic sebaceous lesions and on published data, the following definition is proposed: intraoral sebaceous hyperplasia occurs when a lesion, judged clinically to be a distinct abnormality that requires biopsy for diagnosis or confirmation of clinical impression, has histologic features of one or more well-differentiated sebaceous glands that exhibit no fewer than 15 lobules per gland. Sebaceous glands with fewer than 15 lobules that form an apparently distinct clinical lesion on the buccal mucosa are considered normal, whereas similar lesions of other intraoral sites are considered ectopic sebaceous glands. Sebaceous adenomas are less differentiated than sebaceous hyperplasia.", "title": "" }, { "docid": "neg:1840283_8", "text": "In this paper, a microstrip dipole antenna on a flexible organic substrate is proposed. The antenna arms are tilted to make different variations of the dipole with more compact size and almost same performance. The antennas are fed using a coplanar stripline (CPS) geometry (Simons, 2001). The antennas are then conformed over cylindrical surfaces and their performances are compared to their flat counterparts. Good performance is achieved for both the flat and conformal antennas.", "title": "" }, { "docid": "neg:1840283_9", "text": "A recent and very promising approach for combinatorial optimization is to embed local search into the framework of evolutionary algorithms. In this paper, we present such hybrid algorithms for the graph coloring problem. These algorithms combine a new class of highly specialized crossover operators and a well-known tabu search algorithm. Experiments of such a hybrid algorithm are carried out on large DIMACS Challenge benchmark graphs. Results prove very competitive with and even better than those of state-of-the-art algorithms. Analysis of the behavior of the algorithm sheds light on ways to further improvement.", "title": "" }, { "docid": "neg:1840283_10", "text": "Insulin-like growth factor 2 (IGF2) is a 7.5 kDa mitogenic peptide hormone expressed by liver and many other tissues. It is three times more abundant in serum than IGF1, but our understanding of its physiological and pathological roles has lagged behind that of IGF1. 
Expression of the IGF2 gene is strictly regulated. Over-expression occurs in many cancers and is associated with a poor prognosis. Elevated serum IGF2 is also associated with increased risk of developing various cancers including colorectal, breast, prostate and lung. There is established clinical utility for IGF2 measurement in the diagnosis of non-islet cell tumour hypoglycaemia, a condition characterised by a molar IGF2:IGF1 ratio >10. Recent advances in understanding of the pathophysiology of IGF2 in cancer have suggested much novel clinical utility for its measurement. Measurement of IGF2 in blood and genetic and epigenetic tests of the IGF2 gene may help assess cancer risk and prognosis. Further studies will determine whether these tests enter clinical practice. New therapeutic approaches are being developed to target IGF2 action. This review provides a clinical perspective on IGF2 and an update on recent research findings.", "title": "" },
{ "docid": "neg:1840283_11", "text": "Recovering structure and motion parameters given an image pair or a sequence of images is a well studied problem in computer vision. This is often achieved by employing Structure from Motion (SfM) or Simultaneous Localization and Mapping (SLAM) algorithms based on the real-time requirements. Recently, with the advent of Convolutional Neural Networks (CNNs) researchers have explored the possibility of using machine learning techniques to reconstruct the 3D structure of a scene and jointly predict the camera pose. In this work, we present a framework that achieves state-of-the-art performance on single image depth prediction for both indoor and outdoor scenes. The depth prediction system is then extended to predict optical flow and ultimately the camera pose and trained end-to-end. Our framework outperforms previous deep-learning based motion prediction approaches, and we also demonstrate that the state-of-the-art metric depths can be further improved using the knowledge of pose.", "title": "" },
{ "docid": "neg:1840283_12", "text": "AIM\nTetracycline-stained tooth structure is difficult to bleach using nightguard tray methods. The possible benefits of in-office light-accelerated bleaching systems based on the photo-Fenton reaction are of interest as possible adjunctive treatments. This study was a proof of concept for possible benefits of this approach, using dentine slabs from human tooth roots stained in a reproducible manner with the tetracycline antibiotic demeclocycline hydrochloride.\n\n\nMATERIALS AND METHODS\nColor changes over time in tetracycline-stained roots from single rooted teeth treated using gel (Zoom! WhiteSpeed(®)) alone, blue LED light alone, or gel plus light in combination were tracked using standardized digital photography. Controls received no treatment. Changes in color channel data were tracked over time, for each treatment group (N = 20 per group).\n\n\nRESULTS\nDentin was lighter after bleaching, with significant improvements in the dentin color for the blue channel (yellow shade) followed by the green channel and luminosity. The greatest changes occurred with gel activated by light (p < 0.0001), which was superior to effects seen with gel alone. Use of the light alone did not significantly alter shade.\n\n\nCONCLUSION\nThis proof of concept study demonstrates that bleaching using the photo-Fenton chemistry is capable of lightening tetracycline-stained dentine. 
Further investigation of the use of this method for treating tetracycline-stained teeth in clinical settings appears warranted.\n\n\nCLINICAL SIGNIFICANCE\nBecause tetracycline staining may respond to bleaching treatments based on the photo-Fenton reaction, systems, such as Zoom! WhiteSpeed, may have benefits as adjuncts to home bleaching for patients with tetracycline-staining.", "title": "" }, { "docid": "neg:1840283_13", "text": "Introduction Various forms of social media are used by many mothers to maintain social ties and manage the stress associated with their parenting roles and responsibilities. ‘Mommy blogging’ as a specific type of social media usage is a common and growing phenomenon, but little is known about mothers’ blogging-related experiences and how these may contribute to their wellbeing. This exploratory study investigated the blogging-related motivations and goals of Australian mothers. Methods An online survey was emailed to members of an Australian online parenting community. The survey included open-ended questions that invited respondents to discuss their motivations and goals for blogging. A thematic analysis using a grounded approach was used to analyze the qualitative data obtained from 235 mothers. Results Five primary motivations for blogging were identified: developing connections with others, experiencing heightened levels of mental stimulation, achieving self-validation, contributing to the welfare of others, and extending skills and abilities. Discussion These motivations are discussed in terms of their various properties and dimensions to illustrate how these mothers appear to use blogging to enhance their psychological wellbeing.", "title": "" }, { "docid": "neg:1840283_14", "text": "We describe a computer system that provides a real-time musical accompaniment for a live soloist in a piece of non-improvised music for soloist and accompaniment. A Bayesian network is developed that represents the joint distribution on the times at which the solo and accompaniment notes are played, relating the two parts through a layer of hidden variables. The network is first constructed using the rhythmic information contained in the musical score. The network is then trained to capture the musical interpretations of the soloist and accompanist in an off-line rehearsal phase. During live accompaniment the learned distribution of the network is combined with a real-time analysis of the soloist's acoustic signal, performed with a hidden Markov model, to generate a musically principled accompaniment that respects all available sources of knowledge. A live demonstration will be provided.", "title": "" }, { "docid": "neg:1840283_15", "text": "The aim of this paper is to explore how well the task of text vs. nontext distinction can be solved in online handwritten documents using only offline information. Two systems are introduced. The first system generates a document segmentation first. For this purpose, four methods originally developed for machine printed documents are compared: x-y cut, morphological closing, Voronoi segmentation, and whitespace analysis. A state-of-the art classifier then distinguishes between text and non-text zones. The second system follows a bottom-up approach that classifies connected components. Experiments are performed on a new dataset of online handwritten documents containing different content types in arbitrary arrangements. 
The best system assigns 94.3% of the pixels to the correct class.", "title": "" }, { "docid": "neg:1840283_16", "text": "Recently, there has been a growing interest in end-to-end speech recognition that directly transcribes speech to text without any predefined alignments. In this paper, we explore the use of attention-based encoder-decoder model for Mandarin speech recognition on a voice search task. Previous attempts have shown that applying attention-based encoder-decoder to Mandarin speech recognition was quite difficult due to the logographic orthography of Mandarin, the large vocabulary and the conditional dependency of the attention model. In this paper, we use character embedding to deal with the large vocabulary. Several tricks are used for effective model training, including L2 regularization, Gaussian weight noise and frame skipping. We compare two attention mechanisms and use attention smoothing to cover long context in the attention model. Taken together, these tricks allow us to finally achieve a character error rate (CER) of 3.58% and a sentence error rate (SER) of 7.43% on the MiTV voice search dataset. While together with a trigram language model, CER and SER reach 2.81% and 5.77%, respectively.", "title": "" }, { "docid": "neg:1840283_17", "text": "3D object detection and pose estimation from a single image are two inherently ambiguous problems. Oftentimes, objects appear similar from different viewpoints due to shape symmetries, occlusion and repetitive textures. This ambiguity in both detection and pose estimation means that an object instance can be perfectly described by several different poses and even classes. In this work we propose to explicitly deal with this uncertainty. For each object instance we predict multiple pose and class outcomes to estimate the specific pose distribution generated by symmetries and repetitive textures. The distribution collapses to a single outcome when the visual appearance uniquely identifies just one valid pose. We show the benefits of our approach which provides not only a better explanation for pose ambiguity, but also a higher accuracy in terms of pose estimation.", "title": "" }, { "docid": "neg:1840283_18", "text": "In recent years, online reviews have become the most important resource of customers’ opinions. These reviews are used increasingly by individuals and organizations to make purchase and business decisions. Unfortunately, driven by the desire for profit or publicity, fraudsters have produced deceptive (spam) reviews. The fraudsters’ activities mislead potential customers and organizations reshaping their businesses and prevent opinion-mining techniques from reaching accurate conclusions. The present research focuses on systematically analyzing and categorizingmodels that detect review spam. Next, the study proceeds to assess them in terms of accuracy and results. We find that studies can be categorized into three groups that focus on methods to detect spam reviews, individual spammers and group spam. Different detection techniques have different strengths and weaknesses and thus favor different detection contexts. 2014 Published by Elsevier Ltd.", "title": "" }, { "docid": "neg:1840283_19", "text": "We assume that a high-dimensional datum, like an image, is a compositional expression of a set of properties, with a complicated non-linear relationship between the datum and its properties. 
This paper proposes a factorial mixture prior for capturing latent properties, thereby adding structured compositionality to deep generative models. The prior treats a latent vector as belonging to Cartesian product of subspaces, each of which is quantized separately with a Gaussian mixture model. Some mixture components can be set to represent properties as observed random variables whenever labeled properties are present. Through a combination of stochastic variational inference and gradient descent, a method for learning how to infer discrete properties in an unsupervised or semi-supervised way is outlined and empirically evaluated.", "title": "" } ]
1840284
Moving average reversion strategy for on-line portfolio selection
[ { "docid": "pos:1840284_0", "text": "Online portfolio selection is a fundamental problem in computational finance, which has been extensively studied across several research communities, including finance, statistics, artificial intelligence, machine learning, and data mining. This article aims to provide a comprehensive survey and a structural understanding of online portfolio selection techniques published in the literature. From an online machine learning perspective, we first formulate online portfolio selection as a sequential decision problem, and then we survey a variety of state-of-the-art approaches, which are grouped into several major categories, including benchmarks, Follow-the-Winner approaches, Follow-the-Loser approaches, Pattern-Matching--based approaches, and Meta-Learning Algorithms. In addition to the problem formulation and related algorithms, we also discuss the relationship of these algorithms with the capital growth theory so as to better understand the similarities and differences of their underlying trading ideas. This article aims to provide a timely and comprehensive survey for both machine learning and data mining researchers in academia and quantitative portfolio managers in the financial industry to help them understand the state of the art and facilitate their research and practical applications. We also discuss some open issues and evaluate some emerging new trends for future research.", "title": "" } ]
[ { "docid": "neg:1840284_0", "text": "Textual-based password authentication scheme tends to be more vulnerable to attacks such as shouldersurfing and hidden camera. To overcome the vulnerabilities of traditional methods, visual or graphical password schemes have been developed as possible alternative solutions to text-based password schemes. Because simply adopting graphical password authentication also has some drawbacks, schemes using graphic and text have been developed. In this paper, we propose a hybrid password authentication scheme based on shape and text. It uses shapes of strokes on the grid as the origin passwords and allows users to login with text passwords via traditional input devices. The method provides strong resistant to hidden-camera and shoulder-surfing. Moreover, the scheme has high scalability and flexibility to enhance the authentication process security. The analysis of the security level of this approach is also discussed.", "title": "" }, { "docid": "neg:1840284_1", "text": "Cloud computing has become the buzzword in the industry today. Though, it is not an entirely new concept but in today’s digital age, it has become ubiquitous due to the proliferation of Internet, broadband, mobile devices, better bandwidth and mobility requirements for end-users (be it consumers, SMEs or enterprises). In this paper, the focus is on the perceived inclination of micro and small businesses (SMEs or SMBs) toward cloud computing and the benefits reaped by them. This paper presents five factors nfrastructure-as-a-Service (IaaS) mall and medium enterprises (SMEs’) mall and medium businesses (SMBs’) influencing the cloud usage by this business community, whose needs and business requirements are very different from large enterprises. Firstly, ease of use and convenience is the biggest favorable factor followed by security and privacy and then comes the cost reduction. The fourth factor reliability is ignored as SMEs do not consider cloud as reliable. Lastly but not the least, SMEs do not want to use cloud for sharing and collaboration and prefer their old conventional methods for sharing and collaborating with their stakeholders.", "title": "" }, { "docid": "neg:1840284_2", "text": "It is well known that the convergence rate of the expectation-maximization (EM) algorithm can be faster than those of convention first-order iterative algorithms when the overlap in the given mixture is small. But this argument has not been mathematically proved yet. This article studies this problem asymptotically in the setting of gaussian mixtures under the theoretical framework of Xu and Jordan (1996). It has been proved that the asymptotic convergence rate of the EM algorithm for gaussian mixtures locally around the true solution is o(e0.5()), where > 0 is an arbitrarily small number, o(x) means that it is a higher-order infinitesimal as x 0, and e() is a measure of the average overlap of gaussians in the mixture. In other words, the large sample local convergence rate for the EM algorithm tends to be asymptotically superlinear when e() tends to zero.", "title": "" }, { "docid": "neg:1840284_3", "text": "AIMS AND OBJECTIVES\nThis integrative review of the literature addresses undergraduate nursing students' attitudes towards and use of research and evidence-based practice, and factors influencing this. 
Current use of research and evidence within practice, and the influences and perceptions of students in using these tools in the clinical setting are explored.\n\n\nBACKGROUND\nEvidence-based practice is an increasingly critical aspect of quality health care delivery, with nurses requiring skills in sourcing relevant information to guide the care they provide. Yet, barriers to engaging in evidence-based practice remain. To increase nurses' use of evidence-based practice within healthcare settings, the concepts and skills required must be introduced early in their career. To date, however, there is little evidence to show if and how this inclusion makes a difference.\n\n\nDESIGN\nIntegrative literature review.\n\n\nMETHODS\nProQuest, Summon, Science Direct, Ovid, CIAP, Google scholar and SAGE databases were searched, and Snowball search strategies used. One hundred and eighty-one articles were reviewed. Articles were then discarded for irrelevance. Nine articles discussed student attitudes and utilisation of research and evidence-based practice.\n\n\nRESULTS\nFactors surrounding the attitudes and use of research and evidence-based practice were identified, and included the students' capability beliefs, the students' attitudes, and the attitudes and support capabilities of wards/preceptors.\n\n\nCONCLUSIONS\nUndergraduate nursing students are generally positive toward using research for evidence-based practice, but experience a lack of support and opportunity. These students face cultural and attitudinal disadvantage, and lack confidence to practice independently. Further research and collaboration between educational facilities and clinical settings may improve utilisation.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nThis paper adds further discussion to the topic from the perspective of and including influences surrounding undergraduate students and new graduate nurses.", "title": "" }, { "docid": "neg:1840284_4", "text": "Computing containment relations between massive collections of sets is a fundamental operation in data management, for example in graph analytics and data mining applications. Motivated by recent hardware trends, in this paper we present two novel solutions for computing set-containment joins over massive sets: the Patricia Trie-based Signature Join (PTSJ) and PRETTI+, a Patricia trie enhanced extension of the state-of-the-art PRETTI join. The compact trie structure not only enables efficient use of main-memory, but also significantly boosts the performance of both approaches. By carefully analyzing the algorithms and conducting extensive experiments with various synthetic and real-world datasets, we show that, in many practical cases, our algorithms are an order of magnitude faster than the state-of-the-art.", "title": "" }, { "docid": "neg:1840284_5", "text": "It has been established that incorporating word cluster features derived from large unlabeled corpora can significantly improve prediction of linguistic structure. While previous work has focused primarily on English, we extend these results to other languages along two dimensions. First, we show that these results hold true for a number of languages across families. Second, and more interestingly, we provide an algorithm for inducing cross-lingual clusters and we show that features derived from these clusters significantly improve the accuracy of cross-lingual structure prediction. 
Specifically, we show that by augmenting direct-transfer systems with cross-lingual cluster features, the relative error of delexicalized dependency parsers, trained on English treebanks and transferred to foreign languages, can be reduced by up to 13%. When applying the same method to direct transfer of named-entity recognizers, we observe relative improvements of up to 26%.", "title": "" },
{ "docid": "neg:1840284_6", "text": "We present a matrix factorization model inspired by challenges we encountered while working on the Xbox movies recommendation system. The item catalog in a recommender system is typically equipped with meta-data features in the form of labels. However, only part of these features are informative or useful with regard to collaborative filtering. By incorporating a novel sparsity prior on feature parameters, the model automatically discerns and utilizes informative features while simultaneously pruning non-informative features.\n The model is designed for binary feedback, which is common in many real-world systems where numeric rating data is scarce or non-existent. However, the overall framework is applicable to any likelihood function. Model parameters are estimated with a Variational Bayes inference algorithm, which is robust to over-fitting and does not require cross-validation and fine tuning of regularization coefficients. The efficacy of our method is illustrated on a sample from the Xbox movies dataset as well as on the publicly available MovieLens dataset. In both cases, the proposed solution provides superior predictive accuracy, especially for long-tail items. We then demonstrate the feature selection capabilities and compare against the common case of simple Gaussian priors. Finally, we show that even without features, our model performs better than a baseline model trained with the popular stochastic gradient descent approach.", "title": "" },
{ "docid": "neg:1840284_7", "text": "A simple, fast method is presented for the interpolation of texture coordinates and shading parameters for polygons viewed in perspective. The method has application in scan conversion algorithms like z-buffer and painter's algorithms that perform screen space interpolation of shading parameters such as texture coordinates, colors, and normal vectors. Some previous methods perform linear interpolation in screen space, but this is rotationally variant, and in the case of texture mapping, causes a disturbing \"rubber sheet\" effect. To correctly compute the nonlinear, projective transformation between screen space and parameter space, we use rational linear interpolation across the polygon, performing several divisions at each pixel. We present simpler formulas for setting up these interpolation computations, reducing the setup cost per polygon to nil and reducing the cost per vertex to a handful of divisions. Additional keywords: incremental, perspective, projective, affine.", "title": "" },
{ "docid": "neg:1840284_8", "text": "Cloud computing systems host most of today's commercial business applications yielding it high revenue which makes it a target of cyber attacks. This emphasizes the need for a digital forensic mechanism for the cloud environment. Conventional digital forensics cannot be directly presented as a cloud forensic solution due to the multi tenancy and virtualization of resources prevalent in cloud. While we do cloud forensics, the data to be inspected are cloud component logs, virtual machine disk images, volatile memory dumps, console logs and network captures. 
In this paper, we have come up with a remote evidence collection and pre-processing framework using Struts and Hadoop distributed file system. Collection of VM disk images, logs etc., are initiated through a pull model when triggered by the investigator, whereas cloud node periodically pushes network captures to HDFS. Pre-processing steps such as clustering and correlation of logs and VM disk images are carried out through Mahout and Weka to implement cross drive analysis.", "title": "" }, { "docid": "neg:1840284_9", "text": "In recent years, there has been a growing interest in the wireless sensor networks (WSN) for a variety of applications such as the localization and real time positioning. Different approaches based on artificial intelligence are applied to solve common issues in WSN and improve network performance. This paper addresses a survey on machine learning techniques for localization in WSNs using Received Signal Strength Indicator.", "title": "" }, { "docid": "neg:1840284_10", "text": "Driven by the challenges of rapid urbanization, cities are determined to implement advanced socio-technological changes and transform into smarter cities. The success of such transformation, however, greatly relies on a thorough understanding of the city's states of spatiotemporal flux. The ability to understand such fluctuations in context and in terms of interdependencies that exist among various entities across time and space is crucial, if cities are to maintain their smart growth. Here, we introduce a Smart City Digital Twin paradigm that can enable increased visibility into cities' human-infrastructure-technology interactions, in which spatiotemporal fluctuations of the city are integrated into an analytics platform at the real-time intersection of reality-virtuality. Through learning and exchange of spatiotemporal information with the city, enabled through virtualization and the connectivity offered by Internet of Things (IoT), this Digital Twin of the city becomes smarter over time, able to provide predictive insights into the city's smarter performance and growth.", "title": "" }, { "docid": "neg:1840284_11", "text": "How can prior knowledge on the transformation invariances of a domain be incorporated into the architecture of a neural network? We propose Equivariant Transformers (ETs), a family of differentiable image-to-image mappings that improve the robustness of models towards pre-defined continuous transformation groups. Through the use of specially-derived canonical coordinate systems, ETs incorporate functions that are equivariant by construction with respect to these transformations. We show empirically that ETs can be flexibly composed to improve model robustness towards more complicated transformation groups in several parameters. On a real-world image classification task, ETs improve the sample efficiency of ResNet classifiers, achieving relative improvements in error rate of up to 15% in the limited data regime while increasing model parameter count by less than 1%.", "title": "" }, { "docid": "neg:1840284_12", "text": "Multiplication is the basic building block for several DSP processors, Image processing and many other. Over the years the computational complexities of algorithms used in Digital Signal Processors (DSPs) have gradually increased. This requires a parallel array multiplier to achieve high execution speed or to meet the performance demands. A typical implementation of such an array multiplier is Braun design. 
Braun multiplier is a type of parallel array multiplier. The architecture of Braun multiplier mainly consists of some Carry Save Adders, array of AND gates and one Ripple Carry Adder. In this research work, a new design of Braun Multiplier is proposed and this proposed design of multiplier uses a very fast parallel prefix adder ( Kogge Stone Adder) in place of Ripple Carry Adder. The architecture of standard Braun Multiplier is modified in this work for reducing the delay due to Ripple Carry Adder and performing faster multiplication of two binary numbers. This research also presents a comparative study of FPGA implementation on Spartan2 and Spartartan2E for new multiplier design and standard braun multiplier. The RTL design of proposed new Braun Multiplier and standard braun multiplier is done using Verilog HDL. The simulation is performed using ModelSim. The Xilinx ISE design tool is used for FPGA implementation. Comparative result shows the modified design is effective when compared in terms of delay with the standard design.", "title": "" }, { "docid": "neg:1840284_13", "text": "Our research is aimed at developing a quantitative approach for assessing supply chain resilience to disasters, a topic that has been discussed primarily in a qualitative manner in the literature. For this purpose, we propose a simulation-based framework that incorporates concepts of resilience into the process of supply chain design. In this context, resilience is defined as the ability of a supply chain system to reduce the probabilities of disruptions, to reduce the consequences of those disruptions, and to reduce the time to recover normal performance. The decision framework incorporates three determinants of supply chain resilience (density, complexity, and node criticality) and discusses their relationship to the occurrence of disruptions, to the impacts of those disruptions on the performance of a supply chain system and to the time needed for recovery. Different preliminary strategies for evaluating supply chain resilience to disasters are identified, and directions for future research are discussed.", "title": "" }, { "docid": "neg:1840284_14", "text": "Electro-rheological (ER) fluids are smart fluids which can transform into solid-like phase by applying an electric field. This process is reversible and can be strategically used to build fluidic components for innovative soft robots capable of soft locomotion. In this work, we show the potential applications of ER fluids to build valves that simplify design of fluidic based soft robots. We propose the design and development of a composite ER valve, aimed at controlling the flexibility of soft robots bodies by controlling the ER fluid flow. We present how an ad hoc number of such soft components can be embodied in a simple crawling soft robot (Wormbot); in a locomotion mechanism capable of forward motion through rotation; and, in a tendon driven continuum arm. All these embodiments show how simplification of the hydraulic circuits relies on the simple structure of ER valves. Finally, we address preliminary experiments to characterize the behavior of Wormbot in terms of actuation forces.", "title": "" }, { "docid": "neg:1840284_15", "text": "Objective\nTo conduct a systematic review of deep learning models for electronic health record (EHR) data, and illustrate various deep learning architectures for analyzing different data sources and their target applications. 
We also highlight ongoing research and identify open challenges in building deep learning models of EHRs.\n\n\nDesign/method\nWe searched PubMed and Google Scholar for papers on deep learning studies using EHR data published between January 1, 2010, and January 31, 2018. We summarize them according to these axes: types of analytics tasks, types of deep learning model architectures, special challenges arising from health data and tasks and their potential solutions, as well as evaluation strategies.\n\n\nResults\nWe surveyed and analyzed multiple aspects of the 98 articles we found and identified the following analytics tasks: disease detection/classification, sequential prediction of clinical events, concept embedding, data augmentation, and EHR data privacy. We then studied how deep architectures were applied to these tasks. We also discussed some special challenges arising from modeling EHR data and reviewed a few popular approaches. Finally, we summarized how performance evaluations were conducted for each task.\n\n\nDiscussion\nDespite the early success in using deep learning for health analytics applications, there still exist a number of issues to be addressed. We discuss them in detail including data and label availability, the interpretability and transparency of the model, and ease of deployment.", "title": "" }, { "docid": "neg:1840284_16", "text": "The importance of organizational agility in a competitive environment is nowadays widely recognized and accepted. However, despite this awareness, the availability of tools and methods that support an organization in assessing and improving their organizational agility is scarce. Therefore, this study introduces the Organizational Agility Maturity Model in order to provide an easy-to-use yet powerful assessment tool for organizations in the software and IT service industry. Based on a design science research approach with a comprehensive literature review and an empirical investigation utilizing factor analysis, both scientific rigor as well as practical relevance is ensured. The applicability is further demonstrated by a cluster analysis identifying patterns of organizational agility that fit to the maturity model. The Organizational Agility Maturity Model further contributes to the field by providing a theoretically and empirically grounded structure of organizational agility supporting the efforts of developing a common understanding of the concept.", "title": "" }, { "docid": "neg:1840284_17", "text": "In this paper, we propose a framework for formation stabilization of multiple autonomous vehicles in a distributed fashion. Each vehicle is assumed to have simple dynamics, i.e. a double-integrator, with a directed (or an undirected) information flow over the formation graph of the vehicles. Our goal is to find a distributed control law (with an efficient computational cost) for each vehicle that makes use of limited information regarding the state of other vehicles. Here, the key idea in formation stabilization is the use of natural potential functions obtained from structural constraints of a desired formation in a way that leads to a collision-free, distributed, and bounded state feedback law for each vehicle.", "title": "" }, { "docid": "neg:1840284_18", "text": "The on-line or automatic visual inspection of PCB is basically a very first examination before its electronic testing. This inspection consists of mainly missing or wrongly placed components in the PCB. 
If an electronic component is merely missing, the PCB itself is not damaged. But if a component that can be placed in only one orientation has been soldered the other way around, that component will be damaged and there is a chance that other components may also get damaged. To avoid this, an automatic visual inspection is needed that can detect missing or wrongly placed electronic components. In this paper, an automatic machine vision system is proposed that inspects PCBs for any component missing with respect to a standard reference PCB. The system primarily consists of two parts: 1) the learning process, where the system is trained on the standard PCB, and 2) the inspection process, where the PCB under test is checked for any missing component against the standard one. The proposed system can be deployed on a manufacturing line at a much more affordable price compared to other commercial inspection systems.", "title": "" } ]
1840285
Stacked convolutional auto-encoders for steganalysis of digital images
[ { "docid": "pos:1840285_0", "text": "Today, the most accurate steganalysis methods for digital media are built as supervised classifiers on feature vectors extracted from the media. The tool of choice for the machine learning seems to be the support vector machine (SVM). In this paper, we propose an alternative and well-known machine learning tool-ensemble classifiers implemented as random forests-and argue that they are ideally suited for steganalysis. Ensemble classifiers scale much more favorably w.r.t. the number of training examples and the feature dimensionality with performance comparable to the much more complex SVMs. The significantly lower training complexity opens up the possibility for the steganalyst to work with rich (high-dimensional) cover models and train on larger training sets-two key elements that appear necessary to reliably detect modern steganographic algorithms. Ensemble classification is portrayed here as a powerful developer tool that allows fast construction of steganography detectors with markedly improved detection accuracy across a wide range of embedding methods. The power of the proposed framework is demonstrated on three steganographic methods that hide messages in JPEG images.", "title": "" }, { "docid": "pos:1840285_1", "text": "Recent findings [HOT06] have made possible the learning of deep layered hierarchical representations of data mimicking the brains working. It is hoped that this paradigm will unlock some of the power of the brain and lead to advances towards true AI. In this thesis I implement and evaluate state-of-the-art deep learning models and using these as building blocks I investigate the hypothesis that predicting the time-to-time sensory input is a good learning objective. I introduce the Predictive Encoder (PE) and show that a simple non-regularized learning rule, minimizing prediction error on natural video patches leads to receptive fields similar to those found in Macaque monkey visual area V1. I scale this model to video of natural scenes by introducing the Convolutional Predictive Encoder (CPE) and show similar results. Both models can be used in deep architectures as a deep learning module.", "title": "" } ]
[ { "docid": "neg:1840285_0", "text": "A total of eight appendices (Appendix 1 through Appendix 8) and an associated reference for these appendices have been placed here. In addition, there is currently a search engine located at to assist users in identifying BPR techniques and tools.", "title": "" }, { "docid": "neg:1840285_1", "text": "We consider the hypothesis testing problem of detecting a shift between the means of two multivariate normal distributions in the high-dimensional setting, allowing for the data dimension p to exceed the sample size n. Our contribution is a new test statistic for the two-sample test of means that integrates a random projection with the classical Hotelling T 2 statistic. Working within a high-dimensional framework that allows (p, n) → ∞, we first derive an asymptotic power function for our test, and then provide sufficient conditions for it to achieve greater power than other state-of-the-art tests. Using ROC curves generated from simulated data, we demonstrate superior performance against competing tests in the parameter regimes anticipated by our theoretical results. Lastly, we illustrate an advantage of our procedure with comparisons on a high-dimensional gene expression dataset involving the discrimination of different types of cancer.", "title": "" }, { "docid": "neg:1840285_2", "text": "This study investigates the influence of online news and clickbait headlines on online users’ emotional arousal and behavior. An experiment was conducted to examine the level of arousal in three online news headline groups—news headlines, clickbait headlines, and control headlines. Arousal was measured by two different measurement approaches—pupillary response recorded by an eye-tracking device and selfassessment manikin (SAM) reported in a survey. Overall, the findings suggest that certain clickbait headlines can evoke users’ arousal which subsequently drives intention to read news stories. Arousal scores assessed by the pupillary response and SAM are consistent when the level of emotional arousal is high.", "title": "" }, { "docid": "neg:1840285_3", "text": "We present a 3D face reconstruction system that takes as input either one single view or several different views. Given a facial image, we first classify the facial pose into one of five predefined poses, then detect two anchor points that are then used to detect a set of predefined facial landmarks. Based on these initial steps, for a single view we apply a warping process using a generic 3D face model to build a 3D face. For multiple views, we apply sparse bundle adjustment to reconstruct 3D landmarks which are used to deform the generic 3D face model. Experimental results on the Color FERET and CMU multi-PIE databases confirm our framework is effective in creating realistic 3D face models that can be used in many computer vision applications, such as 3D face recognition at a distance.", "title": "" }, { "docid": "neg:1840285_4", "text": "This paper proposes a self-adaption Kalman observer (SAKO) used in a permanent-magnet synchronous motor (PMSM) servo system. The proposed SAKO can make up measurement noise of the absolute encoder with limited resolution ratio and avoid differentiating process and filter delay of the traditional speed measuring methods. To be different from the traditional Kalman observer, the proposed observer updates the gain matrix by calculating the measurement noise at the current time. 
The variable gain matrix is used to estimate and correct the observed position, speed, and load torque to solve the problem that the motor speed calculated by the traditional methods is prone to large speed error and time delay when PMSM runs at low speeds. The state variables observed by the proposed observer are used as the speed feedback signals and compensation signal of the load torque disturbance in PMSM servo system. The simulations and experiments prove that the SAKO can observe speed and load torque precisely and timely and that the feedforward and feedback control system of PMSM can improve the speed tracking ability.", "title": "" }, { "docid": "neg:1840285_5", "text": "Name ambiguity stems from the fact that many people or objects share identical names in the real world. Such name ambiguity decreases the performance of document retrieval, Web search, information integration, and may cause confusion in other applications. Due to the same name spellings and lack of information, it is a nontrivial task to distinguish them accurately. In this article, we focus on investigating the problem in digital libraries to distinguish publications written by authors with identical names. We present an effective framework named GHOST (abbreviation for GrapHical framewOrk for name diSambiguaTion), to solve the problem systematically. We devise a novel similarity metric, and utilize only one type of attribute (i.e., coauthorship) in GHOST. Given the similarity matrix, intermediate results are grouped into clusters with a recently introduced powerful clustering algorithm called Affinity Propagation. In addition, as a complementary technique, user feedback can be used to enhance the performance. We evaluated the framework on the real DBLP and PubMed datasets, and the experimental results show that GHOST can achieve both high precision and recall.", "title": "" }, { "docid": "neg:1840285_6", "text": "Many deals that look good on paper never materialize into value-creating endeavors. Often, the problem begins at the negotiating table. In fact, the very person everyone thinks is pivotal to a deal's success--the negotiator--is often the one who undermines it. That's because most negotiators have a deal maker mind-set: They see the signed contract as the final destination rather than the start of a cooperative venture. What's worse, most companies reward negotiators on the basis of the number and size of the deals they're signing, giving them no incentive to change. The author asserts that organizations and negotiators must transition from a deal maker mentality--which involves squeezing your counterpart for everything you can get--to an implementation mind-set--which sets the stage for a healthy working relationship long after the ink has dried. Achieving an implementation mind-set demands five new approaches. First, start with the end in mind: Negotiation teams should carry out a \"benefit of hindsight\" exercise to imagine what sorts of problems they'll have encountered 12 months down the road. Second, help your counterpart prepare. Surprise confers advantage only because the other side has no time to think through all the implications of a proposal. If they agree to something they can't deliver, it will affect you both. Third, treat alignment as a shared responsibility. After all, if the other side's interests aren't aligned, it's your problem, too. Fourth, send one unified message. Negotiators should brief implementation teams on both sides together so everyone has the same information. 
And fifth, manage the negotiation like a business exercise: Combine disciplined negotiation preparation with post-negotiation reviews. Above all, companies must remember that the best deals don't end at the negotiating table--they begin there.", "title": "" }, { "docid": "neg:1840285_7", "text": "This paper presents a survey of topological spatial logics, taking as its point of departure the interpretation of the modal logic S4 due to McKinsey and Tarski. We consider the effect of extending this logic with the means to represent topological connectedness, focusing principally on the issue of computational complexity. In particular, we draw attention to the special problems which arise when the logics are interpreted not over arbitrary topological spaces, but over (low-dimensional) Euclidean spaces.", "title": "" }, { "docid": "neg:1840285_8", "text": "legend N2D N1D 2LPEG N2D vs. 2LPEG N1D vs. 2LPEG EFFICACY Primary analysis set, n = 275 Primary analysis set, n = 275 Primary analysis set, n = 272 Primary endpoint: Patients with successful overall bowel cleansing efficacy (HCS) [n] 253 (92.0%) 245 (89.1%) 238 (87.5%) -4.00%* [0.055] -6.91%* [0.328] Supportive secondary endpoint: Patients with successful overall bowel cleansing efficacy (BBPS) [n] 249 (90.5%) 243 (88.4%) 232 (85.3%) n.a. n.a. Primary endpoint: Excellent plus Good cleansing rate in colon ascendens (primary analysis set) [n] 87 (31.6%) 93 (33.8%) 41 (15.1%) 8.11%* [<0.001] 10.32%* [<0.001] Key secondary endpoint: Adenoma detection rate, colon ascendens 11.6% 11.6% 8.1% -4.80%; 12.00%** [0.106] -4.80%; 12.00%** [0.106] Key secondary endpoint: Adenoma detection rate, overall colon 26.6% 27.6% 26.8% -8.47%; 8.02%** [0.569] -7.65%; 9.11%** [0.455] Key secondary endpoint: Polyp detection rate, colon ascendens 23.3% 18.6% 16.2% -1.41%; 15.47%** [0.024] -6.12%; 10.82%** [0.268] Key secondary endpoint: Polyp detection rate, overall colon 44.0% 45.1% 44.5% -8.85%; 8.00%** [0.579] -7.78%; 9.09%** [0.478] Compliance rates (min 75% of both doses taken) [n] 235 (85.5%) 233 (84.7%) 245 (90.1%) n.a. n.a. SAFETY Safety set, n = 262 Safety set, n = 269 Safety set, n = 263 All treatment-emergent adverse events [n] 77 89 53 n.a. n.a. Patients with any related treatment-emergent adverse event [n] 30 (11.5%) 40 (14.9%) 20 (7.6%) n.a. n.a. * = 97.5% 1-sided CI; ** = 95% 2-sided CI; n.a. = not applicable. United European Gastroenterology Journal 4(5S) A219", "title": "" }, { "docid": "neg:1840285_9", "text": "Adversarial examples are intentionally crafted data with the purpose of deceiving neural networks into misclassification. When we talk about strategies to create such examples, we usually refer to perturbation-based methods that fabricate adversarial examples by applying invisible perturbations onto normal data. The resulting data preserve their visual appearance to human observers, yet can be totally unrecognizable to DNN models, which in turn leads to completely misleading predictions. In this paper, however, we consider crafting adversarial examples from existing data as a limitation to example diversity. We propose a non-perturbation-based framework that generates native adversarial examples from class-conditional generative adversarial networks. As such, the generated data will not resemble any existing data and thus expand example diversity, raising the difficulty in adversarial defense. We then extend this framework to pre-trained conditional GANs, in which we turn an existing generator into an \"adversarial-example generator\". 
We conduct experiments on our approach for the MNIST and CIFAR10 datasets and obtain satisfactory results, showing that this approach can be a potential alternative to previous attack strategies.", "title": "" }, { "docid": "neg:1840285_10", "text": "The finite element method (FEM) is a powerful tool in the analysis of electrical machines; however, the computational cost is high depending on the geometry of the analyzed machine. In synchronous reluctance machines (SyRM) with transversally laminated rotors, the anisotropy of the magnetic circuit is provided by flux barriers which can be of various shapes. Flux barriers of shape based on Zhukovski's curves seem to provide very good electromagnetic properties of the machine. Complex geometry requires a fine mesh which increases computational cost when performing finite element analysis. By using a magnetic equivalent circuit (MEC) it is possible to obtain good accuracy at low cost. This paper presents a magnetic equivalent circuit of a SyRM with the new type of flux barriers. Numerical calculation of the flux barriers' reluctances will also be presented.", "title": "" }, { "docid": "neg:1840285_11", "text": "People with higher cognitive ability (or “IQ”) differ from those with lower cognitive ability in a variety of important and unimportant ways. On average, they live longer, earn more, have larger working memories, faster reaction times and are more susceptible to visual illusions (Jensen, 1998). Despite the diversity of phenomena related to IQ, few have attempted to understand—or even describe—its influences on judgment and decision making. Studies on time preference, risk preference, probability weighting, ambiguity aversion, endowment effects, anchoring and other widely researched topics rarely make any reference to the possible effects of cognitive abilities (or cognitive traits). Decision researchers may neglect cognitive ability because they are more interested in the average effect of some experimental manipulation. On this view, individual differences (in intelligence or anything else) are regarded as a nuisance—as just another source of “unexplained” variance. Second, most studies are conducted on college undergraduates, who are widely perceived as fairly homogenous. Third, characterizing performance differences on cognitive tasks requires terms (“IQ” and “aptitudes” and such) that many object to because of their association with discriminatory policies. In short, researchers may be reluctant to study something they do not find interesting, that is not perceived to vary much within the subject pool conveniently obtained, and that will just get them into trouble anyway. But as Lubinski and Humphreys (1997) note, a neglected aspect does not cease to operate because it is neglected, and there is no good reason for ignoring the possibility that general intelligence or various more specific cognitive abilities are important causal determinants of decision making. To provoke interest in this", "title": "" }, { "docid": "neg:1840285_12", "text": "Within the past few years, organizations in diverse industries have adopted MapReduce-based systems for large-scale data processing. Along with these new users, important new workloads have emerged which feature many small, short, and increasingly interactive jobs in addition to the large, long-running batch jobs for which MapReduce was originally designed. 
As interactive, large-scale query processing is a strength of the RDBMS community, it is important that lessons from that field be carried over and applied where possible in this new domain. However, these new workloads have not yet been described in the literature. We fill this gap with an empirical analysis of MapReduce traces from six separate business-critical deployments inside Facebook and at Cloudera customers in e-commerce, telecommunications, media, and retail. Our key contribution is a characterization of new MapReduce workloads which are driven in part by interactive analysis, and which make heavy use of querylike programming frameworks on top of MapReduce. These workloads display diverse behaviors which invalidate prior assumptions about MapReduce such as uniform data access, regular diurnal patterns, and prevalence of large jobs. A secondary contribution is a first step towards creating a TPC-like data processing benchmark for MapReduce.", "title": "" }, { "docid": "neg:1840285_13", "text": "Generative Adversarial Networks (GANs) are shown to be successful at generating new and realistic samples including 3D object models. Conditional GAN, a variant of GANs, allows generating samples in given conditions. However, objects generated for each condition are different and it does not allow generation of the same object in different conditions. In this paper, we first adapt conditional GAN, which is originally designed for 2D image generation, to the problem of generating 3D models in different rotations. We then propose a new approach to guide the network to generate the same 3D sample in different and controllable rotation angles (sample pairs). Unlike previous studies, the proposed method does not require modification of the standard conditional GAN architecture and it can be integrated into the training step of any conditional GAN. Experimental results and visual comparison of 3D models show that the proposed method is successful at generating model pairs in different conditions.", "title": "" }, { "docid": "neg:1840285_14", "text": "With more than 340~million messages that are posted on Twitter every day, the amount of duplicate content as well as the demand for appropriate duplicate detection mechanisms is increasing tremendously. Yet there exists little research that aims at detecting near-duplicate content on microblogging platforms. We investigate the problem of near-duplicate detection on Twitter and introduce a framework that analyzes the tweets by comparing (i) syntactical characteristics, (ii) semantic similarity, and (iii) contextual information. Our framework provides different duplicate detection strategies that, among others, make use of external Web resources which are referenced from microposts. Machine learning is exploited in order to learn patterns that help identifying duplicate content. We put our duplicate detection framework into practice by integrating it into Twinder, a search engine for Twitter streams. An in-depth analysis shows that it allows Twinder to diversify search results and improve the quality of Twitter search. We conduct extensive experiments in which we (1) evaluate the quality of different strategies for detecting duplicates, (2) analyze the impact of various features on duplicate detection, (3) investigate the quality of strategies that classify to what exact level two microposts can be considered as duplicates and (4) optimize the process of identifying duplicate content on Twitter. 
Our results prove that semantic features which are extracted by our framework can boost the performance of detecting duplicates.", "title": "" }, { "docid": "neg:1840285_15", "text": "It is proposed to use weighted least-norm solution to avoid joint limits for redundant joint manipulators. A comparison is made with the gradient projection method for avoiding joint limits. While the gradient projection method provides the optimal direction for the joint velocity vector within the null space, its magnitude is not unique and is adjusted by a scalar coefficient chosen by trial and error. It is shown in this paper that one fixed value of the scalar coefficient is not suitable even in a small workspace. The proposed manipulation scheme automatically chooses an appropriate magnitude of the self-motion throughout the workspace. This scheme, unlike the gradient projection method, guarantees joint limit avoidance, and also minimizes unnecessary self-motion. It was implemented and tested for real-time control of a seven-degree-offreedom (7-DOF) Robotics Research Corporation (RRC) manipulator.", "title": "" }, { "docid": "neg:1840285_16", "text": "The stability of two-dimensional, linear, discrete systems is examined using the 2-D matrix Lyapunov equation. While the existence of a positive definite solution pair to the 2-D Lyapunov equation is sufficient for stability, the paper proves that such existence is not necessary for stability, disproving a long-standing conjecture.", "title": "" }, { "docid": "neg:1840285_17", "text": "This study investigated the role of self-directed learning (SDL) in problem-based learning (PBL) and examined how SDL relates to self-regulated learning (SRL). First, it is explained how SDL is implemented in PBL environments. Similarities between SDL and SRL are highlighted. However, both concepts differ on important aspects. SDL includes an additional premise of giving students a broader role in the selection and evaluation of learning materials. SDL can encompass SRL, but the opposite does not hold. Further, a review of empirical studies on SDL and SRL in PBL was conducted. Results suggested that SDL and SRL are developmental processes, that the “self” aspect is crucial, and that PBL can foster SDL. It is concluded that conceptual clarity of what SDL entails and guidance for both teachers and students can help PBL to bring forth self-directed learners.", "title": "" }, { "docid": "neg:1840285_18", "text": "Theory of Mind (ToM) is the ability to attribute thoughts, intentions and beliefs to others. This involves component processes, including cognitive perspective taking (cognitive ToM) and understanding emotions (affective ToM). This study assessed the distinction and overlap of neural processes involved in these respective components, and also investigated their development between adolescence and adulthood. While data suggest that ToM develops between adolescence and adulthood, these populations have not been compared on cognitive and affective ToM domains. Using fMRI with 15 adolescent (aged 11-16 years) and 15 adult (aged 24-40 years) males, we assessed neural responses during cartoon vignettes requiring cognitive ToM, affective ToM or physical causality comprehension (control). An additional aim was to explore relationships between fMRI data and self-reported empathy. Both cognitive and affective ToM conditions were associated with neural responses in the classic ToM network across both groups, although only affective ToM recruited medial/ventromedial PFC (mPFC/vmPFC). 
Adolescents additionally activated vmPFC more than did adults during affective ToM. The specificity of the mPFC/vmPFC response during affective ToM supports evidence from lesion studies suggesting that vmPFC may integrate affective information during ToM. Furthermore, the differential neural response in vmPFC between adult and adolescent groups indicates developmental changes in affective ToM processing.", "title": "" } ]
1840286
Disease Prediction from Electronic Health Records Using Generative Adversarial Networks
[ { "docid": "pos:1840286_0", "text": "Secondary use of electronic health records (EHRs) promises to advance clinical research and better inform clinical decision making. Challenges in summarizing and representing patient data prevent widespread practice of predictive modeling using EHRs. Here we present a novel unsupervised deep feature learning method to derive a general-purpose patient representation from EHR data that facilitates clinical predictive modeling. In particular, a three-layer stack of denoising autoencoders was used to capture hierarchical regularities and dependencies in the aggregated EHRs of about 700,000 patients from the Mount Sinai data warehouse. The result is a representation we name \"deep patient\". We evaluated this representation as broadly predictive of health states by assessing the probability of patients to develop various diseases. We performed evaluation using 76,214 test patients comprising 78 diseases from diverse clinical domains and temporal windows. Our results significantly outperformed those achieved using representations based on raw EHR data and alternative feature learning strategies. Prediction performance for severe diabetes, schizophrenia, and various cancers were among the top performing. These findings indicate that deep learning applied to EHRs can derive patient representations that offer improved clinical predictions, and could provide a machine learning framework for augmenting clinical decision systems.", "title": "" }, { "docid": "pos:1840286_1", "text": "Class imbalance is one of the challenges of machine learning and data mining fields. Imbalance data sets degrades the performance of data mining and machine learning techniques as the overall accuracy and decision making be biased to the majority class, which lead to misclassifying the minority class samples or furthermore treated them as noise. This paper proposes a general survey for class imbalance problem solutions and the most significant investigations recently introduced by researchers.", "title": "" } ]
[ { "docid": "neg:1840286_0", "text": "Sleep bruxism (SB) is reported by 8% of the adult population and is mainly associated with rhythmic masticatory muscle activity (RMMA) characterized by repetitive jaw muscle contractions (3 bursts or more at a frequency of 1 Hz). The consequences of SB may include tooth destruction, jaw pain, headaches, or the limitation of mandibular movement, as well as tooth-grinding sounds that disrupt the sleep of bed partners. SB is probably an extreme manifestation of a masticatory muscle activity occurring during the sleep of most normal subjects, since RMMA is observed in 60% of normal sleepers in the absence of grinding sounds. The pathophysiology of SB is becoming clearer, and there is an abundance of evidence outlining the neurophysiology and neurochemistry of rhythmic jaw movements (RJM) in relation to chewing, swallowing, and breathing. The sleep literature provides much evidence describing the mechanisms involved in the reduction of muscle tone, from sleep onset to the atonia that characterizes rapid eye movement (REM) sleep. Several brainstem structures (e.g., reticular pontis oralis, pontis caudalis, parvocellularis) and neurochemicals (e.g., serotonin, dopamine, gamma aminobutyric acid [GABA], noradrenaline) are involved in both the genesis of RJM and the modulation of muscle tone during sleep. It remains unknown why a high percentage of normal subjects present RMMA during sleep and why this activity is three times more frequent and higher in amplitude in SB patients. It is also unclear why RMMA during sleep is characterized by co-activation of both jaw-opening and jaw-closing muscles instead of the alternating jaw-opening and jaw-closing muscle activity pattern typical of chewing. The final section of this review proposes that RMMA during sleep has a role in lubricating the upper alimentary tract and increasing airway patency. The review concludes with an outline of questions for future research.", "title": "" }, { "docid": "neg:1840286_1", "text": "Advanced Persistent Threats (APTs) are a new breed of internet based smart threats, which can go undetected with the existing state of-the-art internet traffic monitoring and protection systems. With the evolution of internet and cloud computing, a new generation of smart APT attacks has also evolved and signature based threat detection systems are proving to be futile and insufficient. One of the essential strategies in detecting APTs is to continuously monitor and analyze various features of a TCP/IP connection, such as the number of transferred packets, the total count of the bytes exchanged, the duration of the TCP/IP connections, and details of the number of packet flows. The current threat detection approaches make extensive use of machine learning algorithms that utilize statistical and behavioral knowledge of the traffic. However, the performance of these algorithms is far from satisfactory in terms of reducing false negatives and false positives simultaneously. Mostly, current algorithms focus on reducing false positives, only. This paper presents a fractal based anomaly classification mechanism, with the goal of reducing both false positives and false negatives, simultaneously. 
A comparison of the proposed fractal based method with a traditional Euclidean based machine learning algorithm (k-NN) shows that the proposed method significantly outperforms the traditional approach by reducing false positive and false negative rates, simultaneously, while improving the overall classification rates.", "title": "" }, { "docid": "neg:1840286_2", "text": "Media images of the female body commonly represent reigning appearance ideals of the era in which they are published. To date, limited documentation of the genital appearance ideals in mainstream media exists. Analysis 1 sought to describe genital appearance ideals (i.e., mons pubis and labia majora visibility, labia minora size and color, and pubic hair style) and general physique ideals (i.e., hip, waist, and bust size, height, weight, and body mass index [BMI]) across time based on 647 Playboy Magazine centerfolds published between 1953 and 2007. Analysis 2 focused exclusively on the genital appearance ideals embodied by models in 185 Playboy photographs published between 2007 and 2008. Taken together, results suggest the perpetuation of a \"Barbie Doll\" ideal characterized by a low BMI, narrow hips, a prominent bust, and hairless, undefined genitalia resembling those of a prepubescent female.", "title": "" }, { "docid": "neg:1840286_3", "text": "Automatic Number Plate Recognition (ANPR) is a real time embedded system which automatically recognizes the license number of vehicles. In this paper, the task of recognizing number plate for Indian conditions is considered, where number plate standards are rarely followed.", "title": "" }, { "docid": "neg:1840286_4", "text": "The ability to record and replay program execution helps significantly in debugging non-deterministic MPI applications by reproducing message-receive orders. However, the large amount of data that traditional record-and-reply techniques record precludes its practical applicability to massively parallel applications. In this paper, we propose a new compression algorithm, Clock Delta Compression (CDC), for scalable record and replay of non-deterministic MPI applications. CDC defines a reference order of message receives based on a totally ordered relation using Lamport clocks, and only records the differences between this reference logical-clock order and an observed order. Our evaluation shows that CDC significantly reduces the record data size. For example, when we apply CDC to Monte Carlo particle transport Benchmark (MCB), which represents common non-deterministic communication patterns, CDC reduces the record size by approximately two orders of magnitude compared to traditional techniques and incurs between 13.1% and 25.5% of runtime overhead.", "title": "" }, { "docid": "neg:1840286_5", "text": "Prior research has provided valuable insights into how and why employees make a decision about the adoption and use of information technologies (ITs) in the workplace. From an organizational point of view, however, the more important issue is how managers make informed decisions about interventions that can lead to greater acceptance and effective utilization of IT. There is limited research in the IT implementation literature that deals with the role of interventions to aid such managerial decision making. Particularly, there is a need to understand how various interventions can influence the known determinants of IT adoption and use. 
To address this gap in the literature, we draw from the vast body of research on the technology acceptance model (TAM), particularly the work on the determinants of perceived usefulness and perceived ease of use, and: (i) develop a comprehensive nomological network (integrated model) of the determinants of individual level (IT) adoption and use; (ii) empirically test the proposed integrated model; and (iii) present a research agenda focused on potential preand postimplementation interventions that can enhance employees’ adoption and use of IT. Our findings and research agenda have important implications for managerial decision making on IT implementation in organizations. Subject Areas: Design Characteristics, Interventions, Management Support, Organizational Support, Peer Support, Technology Acceptance Model (TAM), Technology Adoption, Training, User Acceptance, User Involvement, and User Participation.", "title": "" }, { "docid": "neg:1840286_6", "text": "This paper presents the development and test of a flexible control strategy for an 11-kW wind turbine with a back-to-back power converter capable of working in both stand-alone and grid-connection mode. The stand-alone control is featured with a complex output voltage controller capable of handling nonlinear load and excess or deficit of generated power. Grid-connection mode with current control is also enabled for the case of isolated local grid involving other dispersed power generators such as other wind turbines or diesel generators. A novel automatic mode switch method based on a phase-locked loop controller is developed in order to detect the grid failure or recovery and switch the operation mode accordingly. A flexible digital signal processor (DSP) system that allows user-friendly code development and online tuning is used to implement and test the different control strategies. The back-to-back power conversion configuration is chosen where the generator converter uses a built-in standard flux vector control to control the speed of the turbine shaft while the grid-side converter uses a standard pulse-width modulation active rectifier control strategy implemented in a DSP controller. The design of the longitudinal conversion loss filter and of the involved PI-controllers are described in detail. Test results show the proposed methods works properly.", "title": "" }, { "docid": "neg:1840286_7", "text": "We present a palette-based framework for color composition for visual applications. Color composition is a critical aspect of visual applications in art, design, and visualization. The color wheel is often used to explain pleasing color combinations in geometric terms, and, in digital design, to provide a user interface to visualize and manipulate colors. We abstract relationships between palette colors as a compact set of axes describing harmonic templates over perceptually uniform color wheels. Our framework provides a basis for a variety of color-aware image operations, such as color harmonization and color transfer, and can be applied to videos. To enable our approach, we introduce an extremely scalable and efficient yet simple palette-based image decomposition algorithm. Our approach is based on the geometry of images in RGBXY-space. This new geometric approach is orders of magnitude more efficient than previous work and requires no numerical optimization. We demonstrate a real-time layer decomposition tool. After preprocessing, our algorithm can decompose 6 MP images into layers in 20 milliseconds. 
We also conducted three large-scale, wide-ranging perceptual studies on the perception of harmonic colors and harmonization algorithms.", "title": "" }, { "docid": "neg:1840286_8", "text": "Topic models, such as latent Dirichlet allocation (LDA), can be useful tools for the statistical analysis of document collections and other discrete data. The LDA model assumes that the words of each document arise from a mixture of topics, each of which is a distribution over the vocabulary. A limitation of LDA is the inability to model topic correlation even though, for example, a document about genetics is more likely to also be about disease than X-ray astronomy. This limitation stems from the use of the Dirichlet distribution to model the variability among the topic proportions. In this paper we develop the correlated topic model (CTM), where the topic proportions exhibit correlation via the logistic normal distribution [J. Roy. Statist. Soc. Ser. B 44 (1982) 139–177]. We derive a fast variational inference algorithm for approximate posterior inference in this model, which is complicated by the fact that the logistic normal is not conjugate to the multinomial. We apply the CTM to the articles from Science published from 1990–1999, a data set that comprises 57M words. The CTM gives a better fit of the data than LDA, and we demonstrate its use as an exploratory tool of large document collections.", "title": "" }, { "docid": "neg:1840286_9", "text": "An economy based on the exchange of capital, assets and services between individuals has grown significantly, spurred by proliferation of internet-based platforms that allow people to share underutilized resources and trade with reasonably low transaction costs. The movement toward this economy of “sharing” translates into market efficiencies that bear new products, reframe established services, have positive environmental effects, and may generate overall economic growth. This emerging paradigm, entitled the collaborative economy, is disruptive to the conventional company-driven economic paradigm as evidenced by the large number of peer-to-peer based services that have captured impressive market shares sectors ranging from transportation and hospitality to banking and risk capital. The panel explores economic, social, and technological implications of the collaborative economy, how digital technologies enable it, and how the massive sociotechnical systems embodied in these new peer platforms may evolve in response to the market and social forces that drive this emerging ecosystem.", "title": "" }, { "docid": "neg:1840286_10", "text": "Vehicular Ad hoc Networks (VANETs) are the promising approach to provide safety and other applications to the drivers as well as passengers. It becomes a key component of the intelligent transport system. A lot of works have been done towards it but security in VANET got less attention. In this article, we have discussed about the VANET and its technical and security challenges. We have also discussed some major attacks and solutions that can be implemented against these attacks. We have compared the solution using different parameters. Lastly we have discussed the mechanisms that are used in the solutions.", "title": "" }, { "docid": "neg:1840286_11", "text": "Decades of research suggest that similarity in demographics, values, activities, and attitudes predicts higher marital satisfaction. 
The present study examined the relationship between similarity in Big Five personality factors and initial levels and 12-year trajectories of marital satisfaction in long-term couples, who were in their 40s and 60s at the beginning of the study. Across the entire sample, greater overall personality similarity predicted more negative slopes in marital satisfaction trajectories. In addition, spousal similarity on Conscientiousness and Extraversion more strongly predicted negative marital satisfaction outcomes among the midlife sample than among the older sample. Results are discussed in terms of the different life tasks faced by young, midlife, and older adults, and the implications of these tasks for the \"ingredients\" of marital satisfaction.", "title": "" }, { "docid": "neg:1840286_12", "text": "Issue tracking systems store valuable data for testing hypotheses concerning maintenance, building statistical prediction models and recently investigating developers \"affectiveness\". In particular, the Jira Issue Tracking System is a proprietary tracking system that has gained a tremendous popularity in the last years and offers unique features like the project management system and the Jira agile kanban board. This paper presents a dataset extracted from the Jira ITS of four popular open source ecosystems (as well as the tools and infrastructure used for extraction) the Apache Software Foundation, Spring, JBoss and CodeHaus communities. Our dataset hosts more than 1K projects, containing more than 700K issue reports and more than 2 million issue comments. Using this data, we have been able to deeply study the communication process among developers, and how this aspect affects the development process. Furthermore, comments posted by developers contain not only technical information, but also valuable information about sentiments and emotions. Since sentiment analysis and human aspects in software engineering are gaining more and more importance in the last years, with this repository we would like to encourage further studies in this direction.", "title": "" }, { "docid": "neg:1840286_13", "text": "Folded-plate structures provide an efficient design using thin laminated veneer lumber panels. Inspired by Japanese furniture joinery, the multiple tab-and-slot joint was developed for the multi-assembly of timber panels with non-parallel edges without adhesive or metal joints. Because the global analysis of our origami structures reveals that the rotational stiffness at ridges affects the global behaviour, we propose an experimental and numerical study of this linear interlocking connection. Its geometry is governed by three angles that orient the contact faces. Nine combinations of these angles were tested and the rotational slip was measured with two different bending set-ups: closing or opening the fold formed by two panels. The non-linear behaviour was conjointly reproduced numerically using the finite element method and continuum damage mechanics.", "title": "" }, { "docid": "neg:1840286_14", "text": "Digital image editing is usually an iterative process; users repetitively perform short sequences of operations, as well as undo and redo using history navigation tools. In our collected data, undo, redo and navigation constitute about 9 percent of the total commands and consume a significant amount of user time. 
Unfortunately, such activities also tend to be tedious and frustrating, especially for complex projects.\n We address this crucial issue with adaptive history, a UI mechanism that groups relevant operations together to reduce user workloads. Such grouping can occur at various history granularities. We present two that have been found to be most useful. On a fine level, we group repeating command patterns together to facilitate smart undo. On a coarse level, we segment the command history into chunks for semantic navigation. The main advantages of our approach are that it is intuitive to use and easy to integrate into any existing tools with text-based history lists. Unlike prior methods that are predominantly rule based, our approach is data driven, and thus adapts better to common editing tasks which exhibit sufficient diversity and complexity that may defy predetermined rules or procedures.\n A user study showed that our system performs quantitatively better than two other baselines, and the participants also gave positive qualitative feedback on the system features.", "title": "" }, { "docid": "neg:1840286_15", "text": "Many event monitoring systems rely on counting known keywords in streaming text data to detect sudden spikes in frequency. But the dynamic and conversational nature of Twitter makes it hard to select known keywords for monitoring. Here we consider a method of automatically finding noun phrases (NPs) as keywords for event monitoring in Twitter. Finding NPs has two aspects: identifying the boundaries for the subsequence of words which represent the NP, and classifying the NP to a specific broad category such as politics, sports, etc. To classify an NP, we define the feature vector for the NP using not just the words but also the author's behavior and social activities. Our results show that we can classify many NPs by using a sample of training data from a knowledge-base.", "title": "" }, { "docid": "neg:1840286_16", "text": "Extraction–transformation–loading (ETL) tools are pieces of software responsible for the extraction of data from several sources, its cleansing, customization, reformatting, integration, and insertion into a data warehouse. Building the ETL process is potentially one of the biggest tasks of building a warehouse; it is complex and time consuming, and consumes most of a data warehouse project's implementation efforts, costs, and resources. Building a data warehouse requires focusing closely on understanding three main areas: the source area, the destination area, and the mapping area (ETL processes). The source area has standard models such as the entity relationship diagram, and the destination area has standard models such as the star schema, but the mapping area has no standard model so far. In spite of the importance of ETL processes, little research has been done in this area due to its complexity. There is a clear lack of a standard model that can be used to represent the ETL scenarios. In this paper we will try to navigate through the efforts done to conceptualize", "title": "" }, { "docid": "neg:1840286_17", "text": "We demonstrate a multimodal dialogue system using reinforcement learning for in-car scenarios, developed at Edinburgh University and Cambridge University for the TALK project. This prototype is the first “Information State Update” (ISU) dialogue system to exhibit reinforcement learning of dialogue strategies, and also has a fragmentary clarification feature. 
This paper describes the main components and functionality of the system, as well as the purposes and future use of the system, and surveys the research issues involved in its construction. Evaluation of this system (i.e. comparing the baseline system with handcoded vs. learnt dialogue policies) is ongoing, and the demonstration will show both.", "title": "" }, { "docid": "neg:1840286_18", "text": "Smart cities play an increasingly important role for the sustainable economic development of a determined area. Smart cities are considered a key element for generating wealth, knowledge and diversity, both economically and socially. A Smart City is the engine to reach the sustainability of its infrastructure and facilitate the sustainable development of its industry, buildings and citizens. The first goal to reach that sustainability is reduce the energy consumption and the levels of greenhouse gases (GHG). For that purpose, it is required scalability, extensibility and integration of new resources in order to reach a higher awareness about the energy consumption, distribution and generation, which allows a suitable modeling which can enable new countermeasure and action plans to mitigate the current excessive power consumption effects. Smart Cities should offer efficient support for global communications and access to the services and information. It is required to enable a homogenous and seamless machine to machine (M2M) communication in the different solutions and use cases. This work presents how to reach an interoperable Smart Lighting solution over the emerging M2M protocols such as CoAP built over REST architecture. This follows up the guidelines defined by the IP for Smart Objects Alliance (IPSO Alliance) in order to implement and interoperable semantic level for the street lighting, and describes the integration of the communications and logic over the existing street lighting infrastructure.", "title": "" } ]
1840287
Non-Negative Matrix Factorization Revisited: Uniqueness and Algorithm for Symmetric Decomposition
[ { "docid": "pos:1840287_0", "text": "Nonnegative matrix factorization (NMF) and its extensions such as Nonnegative Tensor Factorization (NTF) have b ecome prominent techniques for blind sources separation (BSS), analys is of image databases, data mining and other information retrieval and clustering applications. In this paper we propose a family of e fficient algorithms for NMF/NTF, as well as sparse nonnegative coding and representatio n, that has many potential applications in computational neur oscience, multisensory processing, compressed sensing and multidimensio nal data analysis. We have developed a class of optimized local algorithm s which are referred to as Hierarchical Alternating Least Squares (HAL S) algorithms. For these purposes, we have performed sequential constrain ed minimization on a set of squared Euclidean distances. We then extend t his approach to robust cost functions using the Alpha and Beta divergence s and derive flexible update rules. Our algorithms are locally stable and work well for NMF-based blind source separation (BSS) not only for the ove r-determined case but also for an under-determined (over-complete) case (i.e., for a system which has less sensors than sources) if data are su fficiently sparse. The NMF learning rules are extended and generalized for N-th order nonnegative tensor factorization (NTF). Moreover, these algorit hms can be tuned to different noise statistics by adjusting a single parameter. Ext ensive experimental results confirm the accuracy and computational p erformance of the developed algorithms, especially, with usage of multilayer hierarchical NMF approach [3]. key words: Nonnegative matrix factorization (NMF), nonnegative tensor factorizations (NTF), nonnegative PARAFAC, model reduction, feature extraction, compression, denoising, multiplicative local learning (adaptive) algorithms, Alpha and Beta divergences.", "title": "" } ]
[ { "docid": "neg:1840287_0", "text": "Latent Dirichlet Allocation (LDA) mining thematic structure of documents plays an important role in nature language processing and machine learning areas. However, the probability distribution from LDA only describes the statistical relationship of occurrences in the corpus and usually in practice, probability is not the best choice for feature representations. Recently, embedding methods have been proposed to represent words and documents by learning essential concepts and representations, such as Word2Vec and Doc2Vec. The embedded representations have shown more effectiveness than LDA-style representations in many tasks. In this paper, we propose the Topic2Vec approach which can learn topic representations in the same semantic vector space with words, as an alternative to probability distribution. The experimental results show that Topic2Vec achieves interesting and meaningful results.", "title": "" }, { "docid": "neg:1840287_1", "text": "We introduce an algorithm for tracking deformable objects from a sequence of point clouds. The proposed tracking algorithm is based on a probabilistic generative model that incorporates observations of the point cloud and the physical properties of the tracked object and its environment. We propose a modified expectation maximization algorithm to perform maximum a posteriori estimation to update the state estimate at each time step. Our modification makes it practical to perform the inference through calls to a physics simulation engine. This is significant because (i) it allows for the use of highly optimized physics simulation engines for the core computations of our tracking algorithm, and (ii) it makes it possible to naturally, and efficiently, account for physical constraints imposed by collisions, grasping actions, and material properties in the observation updates. Even in the presence of the relatively large occlusions that occur during manipulation tasks, our algorithm is able to robustly track a variety of types of deformable objects, including ones that are one-dimensional, such as ropes; two-dimensional, such as cloth; and three-dimensional, such as sponges. Our implementation can track these objects in real time.", "title": "" }, { "docid": "neg:1840287_2", "text": "The authors review evidence that self-control may consume a limited resource. Exerting self-control may consume self-control strength, reducing the amount of strength available for subsequent self-control efforts. Coping with stress, regulating negative affect, and resisting temptations require self-control, and after such self-control efforts, subsequent attempts at self-control are more likely to fail. Continuous self-control efforts, such as vigilance, also degrade over time. These decrements in self-control are probably not due to negative moods or learned helplessness produced by the initial self-control attempt. These decrements appear to be specific to behaviors that involve self-control; behaviors that do not require self-control neither consume nor require self-control strength. It is concluded that the executive component of the self--in particular, inhibition--relies on a limited, consumable resource.", "title": "" }, { "docid": "neg:1840287_3", "text": "OBJECTIVES:Biliary cannulation is frequently the most difficult component of endoscopic retrograde cholangiopancreatography (ERCP). Techniques employed to improve safety and efficacy include wire-guided access and the use of sphincterotomes. 
However, a variety of options for these techniques are available and optimum strategies are not defined. We assessed whether the use of endoscopist- vs. assistant-controlled wire guidance and small vs. standard-diameter sphincterotomes improves safety and/or efficacy of bile duct cannulation.METHODS:Patients were randomized using a 2 × 2 factorial design to initial cannulation attempt with endoscopist- vs. assistant-controlled wire systems (1:1 ratio) and small (3.9Fr tip) vs. standard (4.4Fr tip) sphincterotomes (1:1 ratio). The primary efficacy outcome was successful deep bile duct cannulation within 8 attempts. Sample size of 498 was planned to demonstrate a significant increase in cannulation of 10%. Interim analysis was planned after 200 patients–with a stopping rule pre-defined for a significant difference in the composite safety end point (pancreatitis, cholangitis, bleeding, and perforation).RESULTS:The study was stopped after the interim analysis, with 216 patients randomized, due to a significant difference in the safety end point with endoscopist- vs. assistant-controlled wire guidance (3/109 (2.8%) vs. 12/107 (11.2%), P=0.016), primarily due to a lower rate of post-ERCP pancreatitis (3/109 (2.8%) vs. 10/107 (9.3%), P=0.049). The difference in successful biliary cannulation for endoscopist- vs. assistant-controlled wire guidance was −0.5% (95% CI−12.0 to 11.1%) and for small vs. standard sphincerotome −0.9% (95% CI–12.5 to 10.6%).CONCLUSIONS:Use of the endoscopist- rather than assistant-controlled wire guidance for bile duct cannulation reduces complications of ERCP such as pancreatitis.", "title": "" }, { "docid": "neg:1840287_4", "text": "Synthetic aperture radar (SAR) raw signal simulation is a powerful tool for designing new sensors, testing processing algorithms, planning missions, and devising inversion algorithms. In this paper, a spotlight SAR raw signal simulator for distributed targets is presented. The proposed procedure is based on a Fourier domain analysis: a proper analytical reformulation of the spotlight SAR raw signal expression is presented. It is shown that this reformulation allows us to design a very efficient simulation scheme that employs fast Fourier transform codes. Accordingly, the computational load is dramatically reduced with respect to a time-domain simulation and this, for the first time, makes spotlight simulation of extended scenes feasible.", "title": "" }, { "docid": "neg:1840287_5", "text": "To the best of our knowledge, we present the first hardware implementation of isogeny-based cryptography available in the literature. Particularly, we present the first implementation of the supersingular isogeny Diffie-Hellman (SIDH) key exchange, which features quantum-resistance. We optimize this design for speed by creating a high throughput multiplier unit, taking advantage of parallelization of arithmetic in $\\mathbb {F}_{p^{2}}$ , and minimizing pipeline stalls with optimal scheduling. Consequently, our results are also faster than software libraries running affine SIDH even on Intel Haswell processors. For our implementation at 85-bit quantum security and 128-bit classical security, we generate ephemeral public keys in 1.655 million cycles for Alice and 1.490 million cycles for Bob. We generate the shared secret in an additional 1.510 million cycles for Alice and 1.312 million cycles for Bob. On a Virtex-7, these results are approximately 1.5 times faster than known software implementations running the same 512-bit SIDH. 
Our results and observations show that the isogeny-based schemes can be implemented with high efficiency on reconfigurable hardware.", "title": "" }, { "docid": "neg:1840287_6", "text": "The effects of iron substitution on the structural and magnetic properties of the GdCo(12-x)Fe(x)B6 (0 ≤ x ≤ 3) series of compounds have been studied. All of the compounds form in the rhombohedral SrNi12B6-type structure and exhibit ferrimagnetic behaviour below room temperature: T(C) decreases from 158 K for x = 0 to 93 K for x = 3. (155)Gd Mössbauer spectroscopy indicates that the easy magnetization axis changes from axial to basal-plane upon substitution of Fe for Co. This observation has been confirmed using neutron powder diffraction. The axial to basal-plane transition is remarkably sensitive to the Fe content and comparison with earlier (57)Fe-doping studies suggests that the boundary lies below x = 0.1.", "title": "" }, { "docid": "neg:1840287_7", "text": "Mining topics in Twitter is increasingly attracting more attention. However, the shortness and informality of tweets leads to extreme sparse vector representation with a large vocabulary, which makes the conventional topic models (e.g., Latent Dirichlet Allocation) often fail to achieve high quality underlying topics. Luckily, tweets always show up with rich user-generated hash tags as keywords. In this paper, we propose a novel topic model to handle such semi-structured tweets, denoted as Hash tag Graph based Topic Model (HGTM). By utilizing relation information between hash tags in our hash tag graph, HGTM establishes word semantic relations, even if they haven't co-occurred within a specific tweet. In addition, we enhance the dependencies of both multiple words and hash tags via latent variables (topics) modeled by HGTM. We illustrate that the user-contributed hash tags could serve as weakly-supervised information for topic modeling, and hash tag relation could reveal the semantic relation between tweets. Experiments on a real-world twitter data set show that our model provides an effective solution to discover more distinct and coherent topics than the state-of-the-art baselines and has a strong ability to control sparseness and noise in tweets.", "title": "" }, { "docid": "neg:1840287_8", "text": "We present a method based on header paths for efficient and complete extraction of labeled data from tables meant for humans. Although many table configurations yield to the proposed syntactic analysis, some require access to semantic knowledge. Clicking on one or two critical cells per table, through a simple interface, is sufficient to resolve most of these problem tables. Header paths, a purely syntactic representation of visual tables, can be transformed (\"factored\") into existing representations of structured data such as category trees, relational tables, and RDF triples. From a random sample of 200 web tables from ten large statistical web sites, we generated 376 relational tables and 34,110 subject-predicate-object RDF triples.", "title": "" }, { "docid": "neg:1840287_9", "text": "Copyright (©) 1999–2003 R Foundation for Statistical Computing. Permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and this permission notice are preserved on all copies. 
Permission is granted to copy and distribute modified versions of this manual under the conditions for verbatim copying, provided that the entire resulting derived work is distributed under the terms of a permission notice identical to this one. Permission is granted to copy and distribute translations of this manual into another language, under the above conditions for modified versions, except that this permission notice may be stated in a translation approved by the R Development Core Team.", "title": "" }, { "docid": "neg:1840287_10", "text": "Software engineering discipline contains several prediction approaches such as test effort prediction, correction cost prediction, fault prediction, reusability prediction, security prediction, effort prediction, and quality prediction. However, most of these prediction approaches are still in preliminary phase and more research should be conducted to reach robust models. Software fault prediction is the most popular research area in these prediction approaches and recently several research centers started new projects on this area. In this study, we investigated 90 software fault prediction papers published between year 1990 and year 2009 and then we categorized these papers according to the publication year. This paper surveys the software engineering literature on software fault prediction and both machine learning based and statistical based approaches are included in this survey. Papers explained in this article reflect the outline of what was published so far, but naturally this is not a complete review of all the papers published so far. This paper will help researchers to investigate the previous studies from metrics, methods, datasets, performance evaluation metrics, and experimental results perspectives in an easy and effective manner. Furthermore, current trends are introduced and discussed. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "neg:1840287_11", "text": "BACKGROUND\nPromotion and provision of low-cost technologies that enable improved water, sanitation, and hygiene (WASH) practices are seen as viable solutions for reducing high rates of morbidity and mortality due to enteric illnesses in low-income countries. A number of theoretical models, explanatory frameworks, and decision-making models have emerged which attempt to guide behaviour change interventions related to WASH. The design and evaluation of such interventions would benefit from a synthesis of this body of theory informing WASH behaviour change and maintenance.\n\n\nMETHODS\nWe completed a systematic review of existing models and frameworks through a search of related articles available in PubMed and in the grey literature. Information on the organization of behavioural determinants was extracted from the references that fulfilled the selection criteria and synthesized. 
Results from this synthesis were combined with other relevant literature and with feedback from concurrent formative and pilot research conducted in the context of two cluster-randomized trials on the efficacy of WASH behaviour change interventions to inform the development of a framework to guide the development and evaluation of WASH interventions: the Integrated Behavioural Model for Water, Sanitation, and Hygiene (IBM-WASH).\n\n\nRESULTS\nWe identified 15 WASH-specific theoretical models, behaviour change frameworks, or programmatic models, of which 9 addressed our review questions. Existing models under-represented the potential role of technology in influencing behavioural outcomes, focused on individual-level behavioural determinants, and had largely ignored the role of the physical and natural environment. IBM-WASH attempts to correct this by acknowledging three dimensions (Contextual Factors, Psychosocial Factors, and Technology Factors) that operate on five levels (structural, community, household, individual, and habitual).\n\n\nCONCLUSIONS\nA number of WASH-specific models and frameworks exist, yet with some limitations. The IBM-WASH model aims to provide both a conceptual and practical tool for improving our understanding and evaluation of the multi-level, multi-dimensional factors that influence water, sanitation, and hygiene practices in infrastructure-constrained settings. We outline future applications of our proposed model as well as future research priorities needed to advance our understanding of the sustained adoption of water, sanitation, and hygiene technologies and practices.", "title": "" }, { "docid": "neg:1840287_12", "text": "Ulric Neisser (Chair) Gwyneth Boodoo Thomas J. Bouchard, Jr. A. Wade Boykin Nathan Brody Stephen J. Ceci Diane E Halpern John C. Loehlin Robert Perloff Robert J. Sternberg Susana Urbina Emory University Educational Testing Service, Princeton, New Jersey University of Minnesota, Minneapolis Howard University Wesleyan University Cornell University California State University, San Bernardino University of Texas, Austin University of Pittsburgh Yale University University of North Florida", "title": "" }, { "docid": "neg:1840287_13", "text": "We describe a formally well-founded approach to link data and processes conceptually, based on adopting UML class diagrams to represent data, and BPMN to represent the process. The UML class diagram together with a set of additional process variables, called Artifact, form the information model of the process. All activities of the BPMN process refer to such an information model by means of OCL operation contracts. We show that the resulting semantics, while abstract, is fully executable. We also provide an implementation of the executor.", "title": "" }, { "docid": "neg:1840287_14", "text": "The problem of feature selection is a difficult combinatorial task in Machine Learning and of high practical relevance, e.g. in bioinformatics. Genetic Algorithms (GAs) offer a natural way to solve this problem. In this paper we present a special Genetic Algorithm, which especially takes into account the existing bounds on the generalization error for Support Vector Machines (SVMs). 
This new approach is compared to the traditional method of performing cross-validation and to other existing algorithms for feature selection.", "title": "" }, { "docid": "neg:1840287_15", "text": "We have developed a Cu-Cu/adhesive hybrid bonding technique using collective cutting of Cu bumps and adhesives in order to achieve high-density 2.5D/3D integration. As high-density interconnection progresses, bonding electrodes become lower, resulting in a narrow gap between ICs. Therefore, it is difficult to fill adhesive into such a narrow gap after bonding. Thus, we consider hybrid bonding of pre-applied adhesives and Cu-Cu thermocompression bonding to be advantageous, in terms of void-less bonding and reduced bonding stress from the adhesives, together with the low electrical resistance of Cu-Cu solid diffusion bonding. In the present study, we adopted the following process: adhesives were first spin-coated on the wafer with Cu posts and then pre-baked. After that, the pre-applied adhesives and Cu bumps were simultaneously cut by a single-crystal diamond bite. We found that both the adhesive and Cu post surfaces after cutting are highly smooth (less than 10 nm), and the dishing that can occur in a typical CMP process was not seen on the cut Cu post/adhesive surfaces.", "title": "" }, { "docid": "neg:1840287_16", "text": "Recent trends on how video games are played have pushed for the need to revise the game engine architecture. Indeed, game players are more mobile, using smartphones and tablets that lack CPU resources compared to PCs and dedicated consoles. Two emerging solutions, cloud gaming and computing offload, would represent the next steps toward improving game player experience. Consequently, dissecting and analyzing game engine performance would help to better understand how to move in these new directions, which is so far missing in the literature. In this paper, we fill this gap by analyzing and evaluating one of the most popular game engines, namely Unity3D. First, we dissected the Unity3D architecture and modules. A benchmark was then used to evaluate the CPU and GPU performance of the different modules constituting Unity3D, for five representative games.", "title": "" }, { "docid": "neg:1840287_17", "text": "The aim of this review is to document the advantages of exclusive breastfeeding, along with concerns which may hinder the practice of breastfeeding, and to focus on the appropriateness of complementary feeding and the feeding difficulties which infants encounter. Breastfeeding, as recommended by the World Health Organisation, is the most cost-effective way for reducing childhood morbidity such as obesity, hypertension and gastroenteritis as well as mortality. There are several factors that either promote or act as barriers to good infant nutrition. Factors which influence breastfeeding practice in terms of initiation, exclusivity and duration include breast engorgement, sore nipples, milk insufficiency and the availability of various infant formulas. On the other hand, introduction of complementary foods, also known as weaning, is done around 4 to 6 months and mothers should usually start with home-made nutritious food. Difficulties encountered during the weaning process are often refusal to eat followed by vomiting, colic, allergic reactions and diarrhoea. 
keywords: Exclusive breastfeeding, Weaning, Complementary feeding, Feeding difficulties.", "title": "" }, { "docid": "neg:1840287_18", "text": "We propose randomized least-squares value iteration (RLSVI) – a new reinforcement learning algorithm designed to explore and generalize efficiently via linearly parameterized value functions. We explain why versions of least-squares value iteration that use Boltzmann or ε-greedy exploration can be highly inefficient, and we present computational results that demonstrate dramatic efficiency gains enjoyed by RLSVI. Further, we establish an upper bound on the expected regret of RLSVI that demonstrates near-optimality in a tabula rasa learning context. More broadly, our results suggest that randomized value functions offer a promising approach to tackling a critical challenge in reinforcement learning: synthesizing efficient exploration and effective generalization.", "title": "" }, { "docid": "neg:1840287_19", "text": "BACKGROUND\nHigh-resolution MRI has been shown to be capable of identifying plaque constituents, such as the necrotic core and intraplaque hemorrhage, in human carotid atherosclerosis. The purpose of this study was to evaluate differential contrast-weighted images, specifically a multispectral MR technique, to improve the accuracy of identifying the lipid-rich necrotic core and acute intraplaque hemorrhage in vivo.\n\n\nMETHODS AND RESULTS\nEighteen patients scheduled for carotid endarterectomy underwent a preoperative carotid MRI examination in a 1.5-T GE Signa scanner using a protocol that generated 4 contrast weightings (T1, T2, proton density, and 3D time of flight). MR images of the vessel wall were examined for the presence of a lipid-rich necrotic core and/or intraplaque hemorrhage. Ninety cross sections were compared with matched histological sections of the excised specimen in a double-blinded fashion. Overall accuracy (95% CI) of multispectral MRI was 87% (80% to 94%), sensitivity was 85% (78% to 92%), and specificity was 92% (86% to 98%). There was good agreement between MRI and histological findings, with a value of kappa=0.69 (0.53 to 0.85).\n\n\nCONCLUSIONS\nMultispectral MRI can identify the lipid-rich necrotic core in human carotid atherosclerosis in vivo with high sensitivity and specificity. This MRI technique provides a noninvasive tool to study the pathogenesis and natural history of carotid atherosclerosis. Furthermore, it will permit a direct assessment of the effect of pharmacological therapy, such as aggressive lipid lowering, on plaque lipid composition.", "title": "" } ]
1840288
Patch-Based Near-Optimal Image Denoising
[ { "docid": "pos:1840288_0", "text": "In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a pre-specified set of linear transforms, or by adapting the dictionary to a set of training signals. Both these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method – the K-SVD algorithm – generalizing the K-Means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary, and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results on both synthetic tests and in applications on real image data.", "title": "" }, { "docid": "pos:1840288_1", "text": "This paper presents an efficient image denoising scheme by using principal component analysis (PCA) with local pixel grouping (LPG). For a better preservation of image local structures, a pixel and its nearest neighbors are modeled as a vector variable, whose training samples are selected from the local window by using block matching based LPG. Such an LPG procedure guarantees that only the sample blocks with similar contents are used in the local statistics calculation for PCA transform estimation, so that the image local features can be well preserved after coefficient shrinkage in the PCA domain to remove the noise. The LPG-PCA denoising procedure is iterated one more time to further improve the denoising performance, and the noise level is adaptively adjusted in the second stage. Experimental results on benchmark test images demonstrate that the LPG-PCA method achieves very competitive denoising performance, especially in image fine structure preservation, compared with state-of-the-art denoising algorithms. & 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "pos:1840288_2", "text": "We propose a novel image denoising strategy based on an enhanced sparse representation in transform domain. The enhancement of the sparsity is achieved by grouping similar 2D image fragments (e.g., blocks) into 3D data arrays which we call \"groups.\" Collaborative Altering is a special procedure developed to deal with these 3D groups. We realize it using the three successive steps: 3D transformation of a group, shrinkage of the transform spectrum, and inverse 3D transformation. The result is a 3D estimate that consists of the jointly filtered grouped image blocks. 
By attenuating the noise, the collaborative filtering reveals even the finest details shared by grouped blocks and, at the same time, it preserves the essential unique features of each individual block. The filtered blocks are then returned to their original positions. Because these blocks are overlapping, for each pixel, we obtain many different estimates which need to be combined. Aggregation is a particular averaging procedure which is exploited to take advantage of this redundancy. A significant improvement is obtained by a specially developed collaborative Wiener filtering. An algorithm based on this novel denoising strategy and its efficient implementation are presented in full detail; an extension to color-image denoising is also developed. The experimental results demonstrate that this computationally scalable algorithm achieves state-of-the-art denoising performance in terms of both peak signal-to-noise ratio and subjective visual quality.", "title": "" } ]
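The positive passages above share one pattern: group similar patches, transform the group, shrink the small coefficients, and invert the transform. A toy sketch of that pattern for a single group of similar patches follows; the PCA basis and the hard-threshold rule are illustrative assumptions, not the exact transforms or shrinkage used by K-SVD, LPG-PCA, or the collaborative-filtering method.

```python
import numpy as np

def denoise_patch_group(patches, sigma, thresh_mult=2.7):
    """Collaboratively denoise one stack of similar patches.
    patches: (N, p, p) array of N similar p x p patches (e.g. found by block matching).
    Project the group onto its own PCA basis, hard-threshold coefficients that are
    small relative to the noise level sigma, and transform back."""
    n, p, _ = patches.shape
    X = patches.reshape(n, -1)                          # each row is a flattened patch
    mean = X.mean(axis=0, keepdims=True)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)   # PCA axes of this group
    coeffs = Xc @ Vt.T                                  # coefficients in the group basis
    coeffs[np.abs(coeffs) < thresh_mult * sigma] = 0.0  # shrinkage (hard threshold)
    X_hat = coeffs @ Vt + mean                          # back to pixel space
    return X_hat.reshape(n, p, p)
```

In a full denoiser this step would be repeated for every reference patch, and the overlapping estimates aggregated by (weighted) averaging, as the grouping-based passage describes.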
[ { "docid": "neg:1840288_0", "text": "A converter system comprising series-connected converter submodules based on medium-frequency (MF)-DC/DC converters can replace the conventional traction transformer. This reduces mass and losses. The use of multilevel topology permits the connection to the HV catenary. A suitably chosen DC link voltage avoids excessively oversizing the power semiconductors, while providing sufficient redundancy. The medium-frequency switching performance of typical and dedicated 6.5kV IGBTs has been characterized and is discussed here (→ZCS, ZVS).", "title": "" }, { "docid": "neg:1840288_1", "text": "Automated static analysis can identify potential source code anomalies early in the software process that could lead to field failures. However, only a small portion of static analysis alerts may be important to the developer (actionable). The remainder are false positives (unactionable). We propose a process for building false positive mitigation models to classify static analysis alerts as actionable or unactionable using machine learning techniques. For two open source projects, we identify sets of alert characteristics predictive of actionable and unactionable alerts out of 51 candidate characteristics. From these selected characteristics, we evaluate 15 machine learning algorithms, which build models to classify alerts. We were able to obtain 88-97% average accuracy for both projects in classifying alerts using three to 14 alert characteristics. Additionally, the set of selected alert characteristics and best models differed between the two projects, suggesting that false positive mitigation models should be project-specific.", "title": "" }, { "docid": "neg:1840288_2", "text": "We provide a comparative analysis of the existing MITM (Man-In-The-Middle) attacks on Bluetooth. In addition, we propose a novel Bluetooth MITM attack against Bluetooth- enabled printers that support SSP (Secure Simple Pairing). Our attack is based on the fact that the security of the protocol is likely to be limited by the capabilities of the least powerful or the least secure device type. Moreover, we propose improvements to the existing Bluetooth SSP in order to make it more secure.", "title": "" }, { "docid": "neg:1840288_3", "text": "The aim in high-resolution connectomics is to reconstruct complete neuronal connectivity in a tissue. Currently, the only technology capable of resolving the smallest neuronal processes is electron microscopy (EM). Thus, a common approach to network reconstruction is to perform (error-prone) automatic segmentation of EM images, followed by manual proofreading by experts to fix errors. We have developed an algorithm and software library to not only improve the accuracy of the initial automatic segmentation, but also point out the image coordinates where it is likely to have made errors. Our software, called gala (graph-based active learning of agglomeration), improves the state of the art in agglomerative image segmentation. It is implemented in Python and makes extensive use of the scientific Python stack (numpy, scipy, networkx, scikit-learn, scikit-image, and others). We present here the software architecture of the gala library, and discuss several designs that we consider would be generally useful for other segmentation packages. 
We also discuss the current limitations of the gala library and how we intend to address them.", "title": "" }, { "docid": "neg:1840288_4", "text": "The present study examined (1) the impact of a brief substance use intervention on delay discounting and indices of substance reward value (RV), and (2) whether baseline values and posttreatment change in these behavioral economic variables predict substance use outcomes. Participants were 97 heavy drinking college students (58.8% female, 41.2% male) who completed a brief motivational intervention (BMI) and then were randomized to one of two conditions: a supplemental behavioral economic intervention that attempted to increase engagement in substance-free activities associated with delayed rewards (SFAS) or an Education control (EDU). Demand intensity, and Omax, decreased and elasticity significantly increased after treatment, but there was no effect for condition. Both baseline values and change in RV, but not discounting, predicted substance use outcomes at 6-month follow-up. Students with high RV who used marijuana were more likely to reduce their use after the SFAS intervention. These results suggest that brief interventions may reduce substance reward value, and that changes in reward value are associated with subsequent drinking and drug use reductions. High RV marijuana users may benefit from intervention elements that enhance future time orientation and substance-free activity participation.", "title": "" }, { "docid": "neg:1840288_5", "text": "To uncover regulatory mechanisms in Hedgehog (Hh) signaling, we conducted genome-wide screens to identify positive and negative pathway components and validated top hits using multiple signaling and differentiation assays in two different cell types. Most positive regulators identified in our screens, including Rab34, Pdcl, and Tubd1, were involved in ciliary functions, confirming the central role for primary cilia in Hh signaling. Negative regulators identified included Megf8, Mgrn1, and an unannotated gene encoding a tetraspan protein we named Atthog. The function of these negative regulators converged on Smoothened (SMO), an oncoprotein that transduces the Hh signal across the membrane. In the absence of Atthog, SMO was stabilized at the cell surface and concentrated in the ciliary membrane, boosting cell sensitivity to the ligand Sonic Hedgehog (SHH) and consequently altering SHH-guided neural cell-fate decisions. Thus, we uncovered genes that modify the interpretation of morphogen signals by regulating protein-trafficking events in target cells.", "title": "" }, { "docid": "neg:1840288_6", "text": "Software development is a people intensive activity. The abilities possessed by developers are strongly related to process productivity and final product quality. Thus, one of the most important decisions to be made by a software project manager is how to properly staff the project. However, staffing software projects is not a simple task. There are many alternatives to ponder, several developer-to-activity combinations to evaluate, and the manager may have to choose a team from a larger set of available developers, according to the project and organizational needs. Therefore, to perform the staffing activity with ad hoc procedures can be very difficult and can lead the manager to choose a team that is not the best for a given situation. This work presents an optimization-based approach to support staffing a software project. 
The staffing problem is modeled and solved as a constraint satisfaction problem. Our approach takes into account the characteristics of the project activities, the available human resources, and constraints established by the software development organization. According to these needs, the project manager selects a utility function to be maximized or minimized by the optimizer. We propose several utility functions, each addressing values that can be sought by the development organization. A decision support tool was implemented and used in an experimental study executed to evaluate the relevance of the proposed approach. 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "neg:1840288_7", "text": "OBJECTIVE\nEffect of peppermint on exercise performance was previously investigated but equivocal findings exist. This study aimed to investigate the effects of peppermint ingestion on the physiological parameters and exercise performance after 5 min and 1 h.\n\n\nMATERIALS AND METHODS\nThirty healthy male university students were randomly divided into experimental (n=15) and control (n=15) groups. Maximum isometric grip force, vertical and long jumps, spirometric parameters, visual and audio reaction times, blood pressure, heart rate, and breath rate were recorded three times: before, five minutes, and one hour after single dose oral administration of peppermint essential oil (50 µl). Data were analyzed using repeated measures ANOVA.\n\n\nRESULTS\nOur results revealed significant improvement in all of the variables after oral administration of peppermint essential oil. The experimental group, compared with the control group, showed an incremental and significant increase in grip force (36.1%), standing vertical jump (7.0%), and standing long jump (6.4%). Data obtained from the experimental group after five minutes exhibited a significant increase in the forced vital capacity in first second (FVC1) (35.1%), peak inspiratory flow rate (PIF) (66.4%), and peak expiratory flow rate (PEF) (65.1%), whereas after one hour, only PIF showed a significant increase compared with the baseline and control group. At both times, visual and audio reaction times were significantly decreased. Physiological parameters were also significantly improved after five minutes. A considerable enhancement in grip force, spirometry, and other parameters was among the important findings of this study. Conclusion: An improvement in the spirometric measurements (FVC1, PEF, and PIF) might be due to the peppermint effects on the bronchial smooth muscle tonicity with or without affecting the lung surfactant. Yet, no scientific evidence exists regarding isometric force enhancement in this novel study.", "title": "" }, { "docid": "neg:1840288_8", "text": "In this paper, a changeable winding brushless DC (BLDC) motor for the expansion of the speed region is described. The changeable winding BLDC motor is driven by a large number of phase turns at low speeds and by a reduced number of turns at high speeds. For this reason, the section where the winding changes is very important. Ideally, the time at which the windings are to be converted should be the same as the time at which the voltage changes. However, if this timing is not exactly synchronized, a large current is generated in the motor, and the demagnetization of the permanent magnet occurs. In addition, a large torque ripple is produced. 
In this paper, we describe the demagnetization of the permanent magnet in a fault situation when the windings change, and we suggest a design process to solve this problem.", "title": "" }, { "docid": "neg:1840288_9", "text": "In this paper, we present results of an empirical investigation into the social structure of YouTube, addressing friend relations and their correlation with tags applied to uploaded videos. Results indicate that YouTube producers are strongly linked to others producing similar content. Furthermore, there is a socially cohesive core of producers of mixed content, with smaller cohesive groups around Korean music video and anime music videos. Thus, social interaction on YouTube appears to be structured in ways similar to other social networking sites, but with greater semantic coherence around content. These results are explained in terms of the relationship of video producers to the tagging of uploaded content on the site.", "title": "" }, { "docid": "neg:1840288_10", "text": "Understanding search queries is a hard problem as it involves dealing with “word salad” text ubiquitously issued by users. However, if a query resembles a well-formed question, a natural language processing pipeline is able to perform more accurate interpretation, thus reducing downstream compounding errors. Hence, identifying whether or not a query is well formed can enhance query understanding. Here, we introduce a new task of identifying a well-formed natural language question. We construct and release a dataset of 25,100 publicly available questions classified into well-formed and non-wellformed categories and report an accuracy of 70.7% on the test set. We also show that our classifier can be used to improve the performance of neural sequence-to-sequence models for generating questions for reading comprehension.", "title": "" }, { "docid": "neg:1840288_11", "text": "The task of generating natural images from 3D scenes has been a long standing goal in computer graphics. On the other hand, recent developments in deep neural networks allow for trainable models that can produce natural-looking images with little or no knowledge about the scene structure. While the generated images often consist of realistic looking local patterns, the overall structure of the generated images is often inconsistent. In this work we propose a trainable, geometry-aware image generation method that leverages various types of scene information, including geometry and segmentation, to create realistic looking natural images that match the desired scene structure. Our geometrically-consistent image synthesis method is a deep neural network, called Geometry to Image Synthesis (GIS) framework, which retains the advantages of a trainable method, e.g., differentiability and adaptiveness, but, at the same time, makes a step towards the generalizability, control and quality output of modern graphics rendering engines. We utilize the GIS framework to insert vehicles in outdoor driving scenes, as well as to generate novel views of objects from the Linemod dataset. We qualitatively show that our network is able to generalize beyond the training set to novel scene geometries, object shapes and segmentations. 
Furthermore, we quantitatively show that the GIS framework can be used to synthesize large amounts of training data, which proves beneficial for training instance segmentation models.", "title": "" }, { "docid": "neg:1840288_12", "text": "Today's interconnected computer network is complex and is constantly growing in size. As per the OWASP Top 10 list of 2013 [1], the top vulnerability in web applications is listed as injection attacks. SQL injection [2] is the most dangerous of the injection attacks. Most of the available techniques provide an incomplete solution. When attacking using SQL injection, an attacker will typically use spaces, single quotes or double dashes in the input so as to change the intended meaning of the runtime query generated from these inputs. Stored procedure based and second order SQL injection are two types of SQL injection that are difficult to detect and hence difficult to prevent. This work concentrates on Stored procedure based and second order SQL injection.", "title": "" }, { "docid": "neg:1840288_13", "text": "Identifying sparse salient structures from dense pixels is a longstanding problem in visual computing. Solutions to this problem can benefit both image manipulation and understanding. In this paper, we introduce an image transform based on the L1 norm for piecewise image flattening. This transform can effectively preserve and sharpen salient edges and contours while eliminating insignificant details, producing a nearly piecewise constant image with sparse structures. A variant of this image transform can perform edge-preserving smoothing more effectively than existing state-of-the-art algorithms. We further present a new method for complex scene-level intrinsic image decomposition. Our method relies on the above image transform to suppress surface shading variations, and perform probabilistic reflectance clustering on the flattened image instead of the original input image to achieve higher accuracy. Extensive testing on the Intrinsic-Images-in-the-Wild database indicates our method can perform significantly better than existing techniques both visually and numerically. The obtained intrinsic images have been successfully used in two applications, surface retexturing and 3D object compositing in photographs.", "title": "" }, { "docid": "neg:1840288_14", "text": "Disaster relief operations rely on the rapid deployment of wireless network architectures to provide emergency communications. Future emergency networks will typically consist of terrestrial, portable base stations and base stations on-board low altitude platforms (LAPs). The effectiveness of network deployment will depend on strategically chosen station positions. In this paper a method is presented for calculating the optimal proportion of the two station types and their optimal placement. Random scenarios and a real example from Hurricane Katrina are used for evaluation. The results confirm the strength of LAPs in terms of high bandwidth utilisation, achieved by their ability to cover wide areas, their portability and adaptability to height. When LAPs are utilized, the total required number of base stations to cover a desired area is generally lower. For large scale disasters in particular, this leads to shorter response times and the requirement of fewer resources. 
This goal can be achieved more easily if algorithms such as the one presented in this paper are used.", "title": "" }, { "docid": "neg:1840288_15", "text": "This study used eye-tracking technology to assess where helpers look as they are providing assistance to a worker during collaborative physical tasks. Gaze direction was coded into one of six categories: partner's head, partner's hands, task parts and tools, the completed task, and instruction manual. Results indicated that helpers rarely gazed at their partners' faces, but distributed gaze fairly evenly across the other targets. The results have implications for the design of video systems to support collaborative physical tasks.", "title": "" }, { "docid": "neg:1840288_16", "text": "In this review, we collate information about ticks identified in different parts of the Sudan and South Sudan since 1956 in order to identify gaps in tick prevalence and create a map of tick distribution. This will avail basic data for further research on ticks and policies for the control of tick-borne diseases. In this review, we discuss the situation in the Republic of South Sudan as well as Sudan. For this purpose we have divided Sudan into four regions, namely northern Sudan (Northern and River Nile states), central Sudan (Khartoum, Gazera, White Nile, Blue Nile and Sennar states), western Sudan (North and South Kordofan and North, South and West Darfour states) and eastern Sudan (Red Sea, Kassala and Gadarif states).", "title": "" }, { "docid": "neg:1840288_17", "text": "MicroRNAs (miRNAs) have within the past decade emerged as key regulators of metabolic homoeostasis. Major tissues in intermediary metabolism important during development of the metabolic syndrome, such as β-cells, liver, skeletal and heart muscle as well as adipose tissue, have all been shown to be affected by miRNAs. In the pancreatic β-cell, a number of miRNAs are important in maintaining the balance between differentiation and proliferation (miR-200 and miR-29 families) and insulin exocytosis in the differentiated state is controlled by miR-7, miR-375 and miR-335. MiR-33a and MiR-33b play crucial roles in cholesterol and lipid metabolism, whereas miR-103 and miR-107 regulates hepatic insulin sensitivity. In muscle tissue, a defined number of miRNAs (miR-1, miR-133, miR-206) control myofibre type switch and induce myogenic differentiation programmes. Similarly, in adipose tissue, a defined number of miRNAs control white to brown adipocyte conversion or differentiation (miR-365, miR-133, miR-455). The discovery of circulating miRNAs in exosomes emphasizes their importance as both endocrine signalling molecules and potentially disease markers. Their dysregulation in metabolic diseases, such as obesity, type 2 diabetes and atherosclerosis stresses their potential as therapeutic targets. This review emphasizes current ideas and controversies within miRNA research in metabolism.", "title": "" }, { "docid": "neg:1840288_18", "text": "We present the science and technology roadmap for graphene, related two-dimensional crystals, and hybrid systems, targeting an evolution in technology, that might lead to impacts and benefits reaching into most areas of society. This roadmap was developed within the framework of the European Graphene Flagship and outlines the main targets and research areas as best understood at the start of this ambitious project. 
We provide an overview of the key aspects of graphene and related materials (GRMs), ranging from fundamental research challenges to a variety of applications in a large number of sectors, highlighting the steps necessary to take GRMs from a state of raw potential to a point where they might revolutionize multiple industries. We also define an extensive list of acronyms in an effort to standardize the nomenclature in this emerging field.", "title": "" }, { "docid": "neg:1840288_19", "text": "This paper proposes a method to achieve fast and fluid human-robot interaction by estimating the progress of the movement of the human. The method allows the progress, also referred to as the phase of the movement, to be estimated even when observations of the human are partial and occluded; a problem typically found when using motion capture systems in cluttered environments. By leveraging on the framework of Interaction Probabilistic Movement Primitives (ProMPs), phase estimation makes it possible to classify the human action, and to generate a corresponding robot trajectory before the human finishes his/her movement. The method is therefore suited for semiautonomous robots acting as assistants and coworkers. Since observations may be sparse, our method is based on computing the probability of different phase candidates to find the phase that best aligns the Interaction ProMP with the current observations. The method is fundamentally different from approaches based on Dynamic Time Warping (DTW) that must rely on a consistent stream of measurements at runtime. The phase estimation algorithm can be seamlessly integrated into Interaction ProMPs such that robot trajectory coordination, phase estimation, and action recognition can all be achieved in a single probabilistic framework. We evaluated the method using a 7-DoF lightweight robot arm equipped with a 5-finger hand in single and multi-task collaborative experiments. We compare the accuracy achieved by phase estimation with our previous method based on DTW.", "title": "" } ]
1840289
Imaging human EEG dynamics using independent component analysis
[ { "docid": "pos:1840289_0", "text": "A new measure of event-related brain dynamics, the event-related spectral perturbation (ERSP), is introduced to study event-related dynamics of the EEG spectrum induced by, but not phase-locked to, the onset of the auditory stimuli. The ERSP reveals aspects of event-related brain dynamics not contained in the ERP average of the same response epochs. Twenty-eight subjects participated in daily auditory evoked response experiments during a 4 day study of the effects of 24 h free-field exposure to intermittent trains of 89 dB low frequency tones. During evoked response testing, the same tones were presented through headphones in random order at 5 sec intervals. No significant changes in behavioral thresholds occurred during or after free-field exposure. ERSPs induced by target pips presented in some inter-tone intervals were larger than, but shared common features with, ERSPs induced by the tones, most prominently a ridge of augmented EEG amplitude from 11 to 18 Hz, peaking 1-1.5 sec after stimulus onset. Following 3-11 h of free-field exposure, this feature was significantly smaller in tone-induced ERSPs; target-induced ERSPs were not similarly affected. These results, therefore, document systematic effects of exposure to intermittent tones on EEG brain dynamics even in the absence of changes in auditory thresholds.", "title": "" } ]
[ { "docid": "neg:1840289_0", "text": "Clustering is a widely studied data mining problem in the text domains. The problem finds numerous applications in customer segmentation, classification, collaborative filtering, visualization, document organization, and indexing. In this chapter, we will provide a detailed survey of the problem of text clustering. We will study the key challenges of the clustering problem, as it applies to the text domain. We will discuss the key methods used for text clustering, and their relative advantages. We will also discuss a number of recent advances in the area in the context of social network and linked data.", "title": "" }, { "docid": "neg:1840289_1", "text": "Factorization Machines offer good performance and useful embeddings of data. However, they are costly to scale to large amounts of data and large numbers of features. In this paper we describe DiFacto, which uses a refined Factorization Machine model with sparse memory adaptive constraints and frequency adaptive regularization. We show how to distribute DiFacto over multiple machines using the Parameter Server framework by computing distributed subgradients on minibatches asynchronously. We analyze its convergence and demonstrate its efficiency in computational advertising datasets with billions examples and features.", "title": "" }, { "docid": "neg:1840289_2", "text": "Substrate integrated waveguide (SIW) is a new high Q, low loss, low cost, easy processing and integrating planar waveguide structure, which can be widely used in microwave and millimeter-wave integrated circuit. A five-elements resonant slot array antenna at 35GHz has been designed in this paper with a bandwidth of 500MHz (S11<;-15dB), gain of 11.5dB and sidelobe level (SLL) of -23.5dB (using Taylor weighted), which has a small size, low cost and is easy to integrate, etc.", "title": "" }, { "docid": "neg:1840289_3", "text": "Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on miscellaneous benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machine (SVM) and k-nearest neighbors (KNN). 
Additionally, the proposed models yield better performance than other deep learning techniques, such as deep believe networks (DBNs) and CNNs.", "title": "" }, { "docid": "neg:1840289_4", "text": "As organizations start to use data-intensive cluster computing systems like Hadoop and Dryad for more applications, there is a growing need to share clusters between users. However, there is a conflict between fairness in scheduling and data locality (placing tasks on nodes that contain their input data). We illustrate this problem through our experience designing a fair scheduler for a 600-node Hadoop cluster at Facebook. To address the conflict between locality and fairness, we propose a simple algorithm called delay scheduling: when the job that should be scheduled next according to fairness cannot launch a local task, it waits for a small amount of time, letting other jobs launch tasks instead. We find that delay scheduling achieves nearly optimal data locality in a variety of workloads and can increase throughput by up to 2x while preserving fairness. In addition, the simplicity of delay scheduling makes it applicable under a wide variety of scheduling policies beyond fair sharing.", "title": "" }, { "docid": "neg:1840289_5", "text": "The injection of a high-frequency signal in the stator via inverter has been shown to be a viable option to estimate the magnet temperature in permanent-magnet synchronous machines (PMSMs). The variation of the magnet resistance with temperature is reflected in the stator high-frequency resistance, which can be measured from the resulting current when a high-frequency voltage is injected. However, this method is sensitive to d- and q-axis inductance (Ld and Lq) variations, as well as to the machine speed. In addition, it is only suitable for surface PMSMs (SPMSMs) and inadequate for interior PMSMs (IPMSMs). In this paper, the use of a pulsating high-frequency current injection in the d-axis of the machine for temperature estimation purposes is proposed. The proposed method will be shown to be insensitive to the speed, Lq, and Ld variations. Furthermore, it can be used with both SPMSMs and IPMSMs.", "title": "" }, { "docid": "neg:1840289_6", "text": "We propose a Bayesian optimization algorithm for objective functions that are sums or integrals of expensive-to-evaluate functions, allowing noisy evaluations. These objective functions arise in multi-task Bayesian optimization for tuning machine learning hyperparameters, optimization via simulation, and sequential design of experiments with random environmental conditions. Our method is average-case optimal by construction when a single evaluation of the integrand remains within our evaluation budget. Achieving this one-step optimality requires solving a challenging value of information optimization problem, for which we provide a novel efficient discretization-free computational method. We also provide consistency proofs for our method in both continuum and discrete finite domains for objective functions that are sums. 
In numerical experiments comparing against previous state-of-the-art methods, including those that also leverage sum or integral structure, our method performs as well or better across a wide range of problems and offers significant improvements when evaluations are noisy or the integrand varies smoothly in the integrated variables.", "title": "" }, { "docid": "neg:1840289_7", "text": "Despite recent breakthroughs in the applications of deep neural networks, one setting that presents a persistent challenge is that of “one-shot learning.” Traditional gradient-based networks require a lot of data to learn, often through extensive iterative training. When new data is encountered, the models must inefficiently relearn their parameters to adequately incorporate the new information without catastrophic interference. Architectures with augmented memory capacities, such as Neural Turing Machines (NTMs), offer the ability to quickly encode and retrieve new information, and hence can potentially obviate the downsides of conventional models. Here, we demonstrate the ability of a memory-augmented neural network to rapidly assimilate new data, and leverage this data to make accurate predictions after only a few samples. We also introduce a new method for accessing an external memory that focuses on memory content, unlike previous methods that additionally use memory locationbased focusing mechanisms.", "title": "" }, { "docid": "neg:1840289_8", "text": "Allocentric space is mapped by a widespread brain circuit of functionally specialized cell types located in interconnected subregions of the hippocampal-parahippocampal cortices. Little is known about the neural architectures required to express this variety of firing patterns. In rats, we found that one of the cell types, the grid cell, was abundant not only in medial entorhinal cortex (MEC), where it was first reported, but also in pre- and parasubiculum. The proportion of grid cells in pre- and parasubiculum was comparable to deep layers of MEC. The symmetry of the grid pattern and its relationship to the theta rhythm were weaker, especially in presubiculum. Pre- and parasubicular grid cells intermingled with head-direction cells and border cells, as in deep MEC layers. The characterization of a common pool of space-responsive cells in architecturally diverse subdivisions of parahippocampal cortex constrains the range of mechanisms that might give rise to their unique functional discharge phenotypes.", "title": "" }, { "docid": "neg:1840289_9", "text": "BACKGROUND\nNeuropathic pain is one of the most devastating kinds of chronic pain. Neuroinflammation has been shown to contribute to the development of neuropathic pain. We have previously demonstrated that lumbar spinal cord-infiltrating CD4+ T lymphocytes contribute to the maintenance of mechanical hypersensitivity in spinal nerve L5 transection (L5Tx), a murine model of neuropathic pain. Here, we further examined the phenotype of the CD4+ T lymphocytes involved in the maintenance of neuropathic pain-like behavior via intracellular flow cytometric analysis and explored potential interactions between infiltrating CD4+ T lymphocytes and spinal cord glial cells.\n\n\nRESULTS\nWe consistently observed significantly higher numbers of T-Bet+, IFN-γ+, TNF-α+, and GM-CSF+, but not GATA3+ or IL-4+, lumbar spinal cord-infiltrating CD4+ T lymphocytes in the L5Tx group compared to the sham group at day 7 post-L5Tx. 
This suggests that the infiltrating CD4+ T lymphocytes expressed a pro-inflammatory type 1 phenotype (Th1). Despite the observation of CD4+ CD40 ligand (CD154)+ T lymphocytes in the lumbar spinal cord post-L5Tx, CD154 knockout (KO) mice did not display significant changes in L5Tx-induced mechanical hypersensitivity, indicating that T lymphocyte-microglial interaction through the CD154-CD40 pathway is not necessary for L5Tx-induced hypersensitivity. In addition, spinal cord astrocytic activation, represented by glial fibrillary acidic protein (GFAP) expression, was significantly lower in CD4 KO mice compared to wild type (WT) mice at day 14 post-L5Tx, suggesting the involvement of astrocytes in the pronociceptive effects mediated by infiltrating CD4+ T lymphocytes.\n\n\nCONCLUSIONS\nIn all, these data indicate that the maintenance of L5Tx-induced neuropathic pain is mostly mediated by Th1 cells in a CD154-independent manner via a mechanism that could involve multiple Th1 cytokines and astrocytic activation.", "title": "" }, { "docid": "neg:1840289_10", "text": "MOTIVATION\nMetabolomics is a post genomic technology which seeks to provide a comprehensive profile of all the metabolites present in a biological sample. This complements the mRNA profiles provided by microarrays, and the protein profiles provided by proteomics. To test the power of metabolome analysis we selected the problem of discriminating between related genotypes of Arabidopsis. Specifically, the problem tackled was to discriminate between two background genotypes (Col0 and C24) and, more significantly, the offspring produced by the crossbreeding of these two lines, the progeny (whose genotypes would differ only in their maternally inherited mitochondria and chloroplasts).\n\n\nOVERVIEW\nA gas chromatography--mass spectrometry (GCMS) profiling protocol was used to identify 433 metabolites in the samples. The metabolomic profiles were compared using descriptive statistics which indicated that key primary metabolites vary more than other metabolites. We then applied neural networks to discriminate between the genotypes. This showed clearly that the two background lines can be discriminated between each other and their progeny, and indicated that the two progeny lines can also be discriminated. We applied Euclidean hierarchical and Principal Component Analysis (PCA) to help understand the basis of genotype discrimination. PCA indicated that malic acid and citrate are the two most important metabolites for discriminating between the background lines, and glucose and fructose are the two most important metabolites for discriminating between the crosses. These results are consistent with genotype differences in mitochondria and chloroplasts.", "title": "" }, { "docid": "neg:1840289_11", "text": "The quantum of power that a given EHVAC transmission line can safely carry depends on various limits. These limits can be categorized into two types viz. thermal and stability/SIL limits. In case of long lines the capacity is limited by its SIL level only which is much below its thermal capacity due to large inductance. Decrease in line inductance and surge impedance shall increase the SIL and transmission capacity. This paper presents a mathematical model of increasing the SIL level towards thermal limit. Sensitivity of SIL on various configurations of sub-conductors in a bundle, bundle spacing, tower structure, spacing of phase conductors etc. is analyzed and presented. 
Various issues that need attention for application of high surge impedance loading (HSIL) line are also deliberated", "title": "" }, { "docid": "neg:1840289_12", "text": "This paper presents a quantitative performance analysis of a conventional passive cell balancing method and a proposed active cell balancing method for automotive batteries. The proposed active cell balancing method was designed to perform continuous cell balancing during charge and discharge with high balancing current. An experimentally validated model was used to simulate the balancing process of both balancing circuits for a high capacity battery module. The results suggest that the proposed method can improve the power loss and extend the discharge time of a battery module. Hence, a higher energy output can be yielded.", "title": "" }, { "docid": "neg:1840289_13", "text": "In this paper we address the question of how closely everyday human teachers match a theoretically optimal teacher. We present two experiments in which subjects teach a concept to our robot in a supervised fashion. In the first experiment we give subjects no instructions on teaching and observe how they teach naturally as compared to an optimal strategy. We find that people are suboptimal in several dimensions. In the second experiment we try to elicit the optimal teaching strategy. People can teach much faster using the optimal teaching strategy, however certain parts of the strategy are more intuitive than others.", "title": "" }, { "docid": "neg:1840289_14", "text": "Marketing has been criticised from all spheres today since the real worth of all the marketing efforts can hardly be precisely determined. Today consumers are better informed and also misinformed at times due to the bombardment of various pieces of information through a new type of interactive media, i.e., social media (SM). In SM, communication is through dialogue channels wherein consumers pay more attention to SM buzz rather than promotions of marketers. The various forms of SM create a complex set of online social networks (OSN), through which word-of-mouth (WOM) propagates and influence consumer decisions. With the growth of OSN and user generated contents (UGC), WOM metamorphoses to electronic word-of-mouth (eWOM), which spreads in astronomical proportions. Previous works study the effect of external and internal influences in affecting consumer behaviour. However, today the need is to resort to multidisciplinary approaches to find out how SM influence consumers with eWOM and online reviews. This paper reviews the emerging trend of how multiple disciplines viz. Statistics, Data Mining techniques, Network Analysis, etc. are being integrated by marketers today to analyse eWOM and derive actionable intelligence.", "title": "" }, { "docid": "neg:1840289_15", "text": "Object tracking under complex circumstances is a challenging task because of background interference, obstacle occlusion, object deformation, etc. Given such conditions, robustly detecting, locating, and analyzing a target through single-feature representation are difficult tasks. Global features, such as color, are widely used in tracking, but may cause the object to drift under complex circumstances. Local features, such as HOG and SIFT, can precisely represent rigid targets, but these features lack the robustness of an object in motion. An effective method is adaptive fusion of multiple features in representing targets. The process of adaptively fusing different features is the key to robust object tracking. 
This study uses a multi-feature joint descriptor (MFJD) and the distance between joint histograms to measure the similarity between a target and its candidate patches. Color and HOG features are fused as the tracked object of the joint representation. This study also proposes a self-adaptive multi-feature fusion strategy that can adaptively adjust the joint weight of the fused features based on their stability and contrast measure scores. The mean shift process is adopted as the object tracking framework with multi-feature representation. The experimental results demonstrate that the proposed MFJD tracking method effectively handles background clutter, partial occlusion by obstacles, scale changes, and deformations. The novel method performs better than several state-of-the-art methods in real surveillance scenarios.", "title": "" }, { "docid": "neg:1840289_16", "text": "In the competitive electricity structure, demand response programs enable customers to react dynamically to changes in electricity prices. The implementation of such programs may reduce energy costs and increase reliability. To fully harness such benefits, existing load controllers and appliances need around-the-clock price information. Advances in the development and deployment of advanced meter infrastructures (AMIs), building automation systems (BASs), and various dedicated embedded control systems provide the capability to effectively address this requirement. In this paper we introduce a meter gateway architecture (MGA) to serve as a foundation for integrated control of loads by energy aggregators, facility hubs, and intelligent appliances. We discuss the requirements that motivate the architecture, describe its design, and illustrate its application to a small system with an intelligent appliance and a legacy appliance using a prototype implementation of an intelligent hub for the MGA and ZigBee wireless communications.", "title": "" }, { "docid": "neg:1840289_17", "text": "Automatic Face Recognition is one of the most emphasizing dilemmas in diverse of potential relevance like in different surveillance systems, security systems, authentication or verification of individual like criminals etc. Adjoining of dynamic expression in face causes a broad range of discrepancies in recognition systems. Facial Expression not only exposes the sensation or passion of any person but can also be used to judge his/her mental views and psychosomatic aspects. This paper is based on a complete survey of face recognition conducted under varying facial expressions. In order to analyze different techniques, motion-based, model-based and muscles-based approaches have been used in order to handle the facial expression and recognition catastrophe. The analysis has been completed by evaluating various existing algorithms while comparing their results in general. It also expands the scope for other researchers for answering the question of effectively dealing with such problems.", "title": "" }, { "docid": "neg:1840289_18", "text": "Community detection in complex network has become a vital step to understand the structure and dynamics of networks in various fields. However, traditional node clustering and relatively new proposed link clustering methods have inherent drawbacks to discover overlapping communities. Node clustering is inadequate to capture the pervasive overlaps, while link clustering is often criticized due to the high computational cost and ambiguous definition of communities. 
So, overlapping community detection is still a formidable challenge. In this work, we propose a new overlapping community detection algorithm based on network decomposition, called NDOCD. Specifically, NDOCD iteratively splits the network by removing all links in derived link communities, which are identified by utilizing node clustering technique. The network decomposition contributes to reducing the computation time and noise link elimination conduces to improving the quality of obtained communities. Besides, we employ node clustering technique rather than link similarity measure to discover link communities, thus NDOCD avoids an ambiguous definition of community and becomes less time-consuming. We test our approach on both synthetic and real-world networks. Results demonstrate the superior performance of our approach both in computation time and accuracy compared to state-of-the-art algorithms.", "title": "" }, { "docid": "neg:1840289_19", "text": "Due to the importance of PM synchronous machine in many categories like in industrial, mechatronics, automotive, energy storage flywheel, centrifugal compressor, vacuum pump and robotic applications moreover in smart power grid applications, this paper is presented. It reviews the improvement of permanent magnet synchronous machines performance researches. This is done depending on many researchers' papers as samples for many aspects like: modelling, control, optimization and design to present a satisfied literature review", "title": "" } ]
1840290
Compositional Vector Space Models for Knowledge Base Inference
[ { "docid": "pos:1840290_0", "text": "We present a general learning-based approach for phrase-level sentiment analysis that adopts an ordinal sentiment scale and is explicitly compositional in nature. Thus, we can model the compositional effects required for accurate assignment of phrase-level sentiment. For example, combining an adverb (e.g., “very”) with a positive polar adjective (e.g., “good”) produces a phrase (“very good”) with increased polarity over the adjective alone. Inspired by recent work on distributional approaches to compositionality, we model each word as a matrix and combine words using iterated matrix multiplication, which allows for the modeling of both additive and multiplicative semantic effects. Although the multiplication-based matrix-space framework has been shown to be a theoretically elegant way to model composition (Rudolph and Giesbrecht, 2010), training such models has to be done carefully: the optimization is nonconvex and requires a good initial starting point. This paper presents the first such algorithm for learning a matrix-space model for semantic composition. In the context of the phrase-level sentiment analysis task, our experimental results show statistically significant improvements in performance over a bagof-words model.", "title": "" }, { "docid": "pos:1840290_1", "text": "Open Domain: There are nearly an unbounded number of classes, objects and relations Missing Data: Many useful facts are never explicitly stated No Negative Examples: Labeling positive and negative examples for all interesting relations is impractical Learning First-Order Horn Clauses from Web Text Stefan Schoenmackers Oren Etzioni Daniel S. Weld Jesse Davis Turing Center, University of Washington Katholieke Universiteit Leuven", "title": "" } ]
[ { "docid": "neg:1840290_0", "text": "In the era of Social Computing, the role of customer reviews and ratings can be instrumental in predicting the success and sustainability of businesses as customers and even competitors use them to judge the quality of a business. Yelp is one of the most popular websites for users to write such reviews. This rating can be subjective and biased toward user's personality. Business preferences of a user can be decrypted based on his/ her past reviews. In this paper, we deal with (i) uncovering latent topics in Yelp data based on positive and negative reviews using topic modeling to learn which topics are the most frequent among customer reviews, (ii) sentiment analysis of users' reviews to learn how these topics associate to a positive or negative rating which will help businesses improve their offers and services, and (iii) predicting unbiased ratings from user-generated review text alone, using Linear Regression model. We also perform data analysis to get some deeper insights into customer reviews.", "title": "" }, { "docid": "neg:1840290_1", "text": "Longitudinal melanonychia presents in various conditions including neoplastic and reactive disorders. It is much more frequently seen in non-Caucasians than Caucasians. While most cases of nail apparatus melanoma start as longitudinal melanonychia, melanocytic nevi of the nail apparatus also typically accompany longitudinal melanonychia. Identifying the suspicious longitudinal melanonychia is therefore an important task for dermatologists. Dermoscopy provides useful information for making this decision. The most suspicious dermoscopic feature of early nail apparatus melanoma is irregular lines on a brown background. Evaluation of the irregularity may be rather subjective, but through experience, dermatologists can improve their diagnostic skills of longitudinal melanonychia, including benign conditions showing regular lines. Other important dermoscopic features of early nail apparatus melanoma are micro-Hutchinson's sign, a wide pigmented band, and triangular pigmentation on the nail plate. Although there is as yet no solid evidence concerning the frequency of dermoscopic follow up, we recommend checking the suspicious longitudinal melanonychia every 6 months. Moreover, patients with longitudinal melanonychia should be asked to return to the clinic quickly if the lesion shows obvious changes. Diagnosis of amelanotic or hypomelanotic melanoma affecting the nail apparatus is also challenging, but melanoma should be highly suspected if remnants of melanin granules are detected dermoscopically.", "title": "" }, { "docid": "neg:1840290_2", "text": "In this paper, the study and implementation of a high frequency pulse LED driver with self-oscillating circuit is presented. The self-oscillating half-bridge series resonant inverter is adopted in this LED driver and the circuit characteristics of LED with high frequency pulse driving voltage is also discussed. LED module is connected with full bridge diode rectifier but without low pass filter and this LED module is driven with high frequency pulse. In additional, the self-oscillating resonant circuit with saturable core is used to achieve zero voltage switching and to control the LED current. The LED equivalent circuit of resonant circuit and the operating principle of the self-oscillating half-bridge inverter are discussed in detail. Finally, an 18 W high frequency pulse LED driver is implemented to verify the feasibility. 
Experimental results show that the circuit efficiency is over 86.5% when input voltage operating within AC 110 ± 10 Vrms and the maximum circuit efficiency is up to 89.2%.", "title": "" }, { "docid": "neg:1840290_3", "text": "Understanding large-scale document collections in an efficient manner is an important problem. Usually, document data are associated with other information (e.g., an author's gender, age, and location) and their links to other entities (e.g., co-authorship and citation networks). For the analysis of such data, we often have to reveal common as well as discriminative characteristics of documents with respect to their associated information, e.g., male- vs. female-authored documents, old vs. new documents, etc. To address such needs, this paper presents a novel topic modeling method based on joint nonnegative matrix factorization, which simultaneously discovers common as well as discriminative topics given multiple document sets. Our approach is based on a block-coordinate descent framework and is capable of utilizing only the most representative, thus meaningful, keywords in each topic through a novel pseudo-deflation approach. We perform both quantitative and qualitative evaluations using synthetic as well as real-world document data sets such as research paper collections and nonprofit micro-finance data. We show our method has a great potential for providing in-depth analyses by clearly identifying common and discriminative topics among multiple document sets.", "title": "" }, { "docid": "neg:1840290_4", "text": "Machine learning is currently dominated by largely experimental work focused on improvements in a few key tasks. However, the impressive accuracy numbers of the best performing models are questionable because the same test sets have been used to select these models for multiple years now. To understand the danger of overfitting, we measure the accuracy of CIFAR-10 classifiers by creating a new test set of truly unseen images. Although we ensure that the new test set is as close to the original data distribution as possible, we find a large drop in accuracy (4% to 10%) for a broad range of deep learning models. Yet, more recent models with higher original accuracy show a smaller drop and better overall performance, indicating that this drop is likely not due to overfitting based on adaptivity. Instead, we view our results as evidence that current accuracy numbers are brittle and susceptible to even minute natural variations in the data distribution.", "title": "" }, { "docid": "neg:1840290_5", "text": "BACKGROUND\nTo compare simultaneous recordings from an external patch system specifically designed to ensure better P-wave recordings and standard Holter monitor to determine diagnostic efficacy. Holter monitors are a mainstay of clinical practice, but are cumbersome to access and wear and P-wave signal quality is frequently inadequate.\n\n\nMETHODS\nThis study compared the diagnostic efficacy of the P-wave centric electrocardiogram (ECG) patch (Carnation Ambulatory Monitor) to standard 3-channel (leads V1, II, and V5) Holter monitor (Northeast Monitoring, Maynard, MA). Patients were referred to a hospital Holter clinic for standard clinical indications. Each patient wore both devices simultaneously and served as their own control. Holter and Patch reports were read in a blinded fashion by experienced electrophysiologists unaware of the findings in the other corresponding ECG recording. 
All patients, technicians, and physicians completed a questionnaire on comfort and ease of use, and potential complications.\n\n\nRESULTS\nIn all 50 patients, the P-wave centric patch recording system identified rhythms in 23 patients (46%) that altered management, compared to 6 Holter patients (12%), P<.001. The patch ECG intervals PR, QRS and QT correlated well with the Holter ECG intervals having correlation coefficients of 0.93, 0.86, and 0.94, respectively. Finally, 48 patients (96%) preferred wearing the patch monitor.\n\n\nCONCLUSIONS\nA single-channel ambulatory patch ECG monitor, designed specifically to ensure that the P-wave component of the ECG be visible, resulted in a significantly improved rhythm diagnosis and avoided inaccurate diagnoses made by the standard 3-channel Holter monitor.", "title": "" }, { "docid": "neg:1840290_6", "text": "Named entity recognition is a crucial component of biomedical natural language processing, enabling information extraction and ultimately reasoning over and knowledge discovery from text. Much progress has been made in the design of rule-based and supervised tools, but they are often genre and task dependent. As such, adapting them to different genres of text or identifying new types of entities requires major effort in re-annotation or rule development. In this paper, we propose an unsupervised approach to extracting named entities from biomedical text. We describe a stepwise solution to tackle the challenges of entity boundary detection and entity type classification without relying on any handcrafted rules, heuristics, or annotated data. A noun phrase chunker followed by a filter based on inverse document frequency extracts candidate entities from free text. Classification of candidate entities into categories of interest is carried out by leveraging principles from distributional semantics. Experiments show that our system, especially the entity classification step, yields competitive results on two popular biomedical datasets of clinical notes and biological literature, and outperforms a baseline dictionary match approach. Detailed error analysis provides a road map for future work.", "title": "" }, { "docid": "neg:1840290_7", "text": "Computer aided diagnosis (CADx) systems for digitized mammograms solve the problem of classification between benign and malignant tissues while studies have shown that using only a subset of features generated from the mammograms can yield higher classification accuracy. To this end, we propose a mutual information-based Support Vector Machine Recursive Feature Elimination (SVM-RFE) as the classification method with feature selection in this paper. We have conducted extensive experiments on publicly available mammographic data and the obtained results indicate that the proposed method outperforms other SVM and SVM-RFE-based methods.", "title": "" }, { "docid": "neg:1840290_8", "text": "The impact of social networks in customer buying decisions is rapidly increasing, because they are effective in shaping public opinion. This paper helps marketers analyze a social network’s members based on different characteristics as well as choose the best method for identifying influential people among them. Marketers can then use these influential people as seeds for market products/services. 
Considering the importance of opinion leadership in social networks, the authors provide a comprehensive overview of existing literature. Studies show that different titles (such as opinion leaders, influential people, market mavens, and key players) are used to refer to the influential group in social networks. In this paper, all the properties presented for opinion leaders in the form of different titles are classified into three general categories, including structural, relational, and personal characteristics. Furthermore, based on studying opinion leader identification methods, appropriate parameters are extracted in a comprehensive chart to evaluate and compare these methods accurately. based marketing, word-of-mouth marketing has more creditability (Li & Du, 2011), because there is no direct link between the sender and the merchant. As a result, information is considered independent and subjective. In recent years, many researches in word-of-mouth marketing investigate discovering influential nodes in a social network. These influential people are called opinion leaders in the literature. Organizations interested in e-commerce need to identify opinion leaders among their customers, also the place (web site) which they are going online. This is the place they can market their products. DOI: 10.4018/jvcsn.2011010105 44 International Journal of Virtual Communities and Social Networking, 3(1), 43-59, January-March 2011 Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. Social Network Analysis Regarding the importance of interpersonal relationship, studies are looking for formal methods to measures who talks to whom in a community. These methods are known as social network analysis (Scott, 1991; Wasserman & Faust, 1994; Rogers & Kincaid, 1981; Valente & Davis, 1999). Social network analysis includes the study of the interpersonal relationships. It usually is more focused on the network itself, rather than on the attributes of the members (Li & Du, 2011). Valente and Rogers (1995) have described social network analysis from the point of view of interpersonal communication by “formal methods of measuring who talks to whom within a community”. Social network analysis enables researchers to identify people who are more central in the network and so more influential. By using these central people or opinion leaders as seeds diffusion of a new product or service can be accelerated (Katz & Lazarsfeld, 1955; Valente & Davis, 1999). Importance of Social Networks for Marketing The importance of social networks as a marketing tool is increasing, and it includes diverse areas (Even-Dar & Shapirab, 2011). Analysis of interdependencies between customers can improve targeted marketing as well as help organization in acquisition of new customers who are not detectable by traditional techniques. By recent technological developments social networks are not limited in face-to-face and physical relationships. Furthermore, online social networks have become a new medium for word-of-mouth marketing. Although the face-to-face word-of-mouth has a greater impact on consumer purchasing decisions over printed information because of its vividness and credibility, in recent years with the growth of the Internet and virtual communities the written word-of-mouth (word-of-mouse) has been created in the online channels (Mak, 2008). Consider a company that wants to launch a new product. 
This company can benefit from popular social networks like Facebook and Myspace rather than using classical advertising channels. Then, convincing several key persons in each network to adopt the new product, can help a company to exploit an effective diffusion in the network through word-of-mouth. According to Nielsen’s survey of more than 26,000 internet users, 78% of respondents indicated that recommendations from others are the most trusted source when considering a product or service (Nielsen, 2007). Based on another study conducted by Deloitte’s Consumer Products group, almost 62% of consumers who read consumer-written product reviews online declare their purchase decisions have been directly influenced by the user reviews (Deloitte, 2007). Empirical studies have demonstrated that new ideas and practices spread through interpersonal communication (Valente & Rogers, 1995; Valente & Davis, 1999; Valente, 1995). Hawkins et al. (1995) suggest that companies can use four possible courses of action, including marketing research, product sampling, retailing/personal selling and advertising to use their knowledge of opinion leaders to their advantage. The authors of this paper in a similar study have done a review of related literature using social networks for improving marketing response. They discuss the benefits and challenges of utilizing interpersonal relationships in a network as well as opinion leader identification; also, a three step process to show how firms can apply social networks for their marketing activities has been proposed (Jafari Momtaz et al., 2011). While applications of opinion leadership in business and marketing have been widely studied, it generally deals with the development of measurement scale (Burt, 1999), its importance in the social sciences (Flynn et al., 1994), and its application to various areas related to the marketing, such as the health care industry, political science (Burt, 1999) and public communications (Howard et al., 2000; Locock et al., 2001). In this paper, a comprehensive review of studies in the field of opinion leadership and employing social networks to improve the marketing response is done.", "title": "" }, { "docid": "neg:1840290_9", "text": "Drawing on Bronfenbrenner’s ecological theory and prior empirical research, the current study examines the way that blogging and social networking may impact feelings of connection and social support, which in turn could impact maternal well-being (e.g., marital functioning, parenting stress, and depression). One hundred and fifty-seven new mothers reported on their media use and various well-being variables. On average, mothers were 27 years old (SD = 5.15) and infants were 7.90 months old (SD = 5.21). All mothers had access to the Internet in their home. New mothers spent approximately 3 hours on the computer each day, with most of this time spent on the Internet. 
Findings suggested that frequency of blogging predicted feelings of connection to extended family and friends which then predicted perceptions of social support. This in turn predicted maternal well-being, as measured by marital satisfaction, couple conflict, parenting stress, and depression. In sum, blogging may improve new mothers’ well-being, as they feel more connected to the world outside their home through the Internet.", "title": "" }, { "docid": "neg:1840290_10", "text": "Prediction of popularity has profound impact for social media, since it offers opportunities to reveal individual preference and public attention from evolutionary social systems. Previous research, although achieves promising results, neglects one distinctive characteristic of social data, i.e., sequentiality. For example, the popularity of online content is generated over time with sequential post streams of social media. To investigate the sequential prediction of popularity, we propose a novel prediction framework called Deep Temporal Context Networks (DTCN) by incorporating both temporal context and temporal attention into account. Our DTCN contains three main components, from embedding, learning to predicting. With a joint embedding network, we obtain a unified deep representation of multi-modal user-post data in a common embedding space. Then, based on the embedded data sequence over time, temporal context learning attempts to recurrently learn two adaptive temporal contexts for sequential popularity. Finally, a novel temporal attention is designed to predict new popularity (the popularity of a new userpost pair) with temporal coherence across multiple time-scales. Experiments on our released image dataset with about 600K Flickr photos demonstrate that DTCN outperforms state-of-the-art deep prediction algorithms, with an average of 21.51% relative performance improvement in the popularity prediction (Spearman Ranking Correlation).", "title": "" }, { "docid": "neg:1840290_11", "text": "Cognitive NLP systemsi.e., NLP systems that make use of behavioral data augment traditional text-based features with cognitive features extracted from eye-movement patterns, EEG signals, brain-imaging etc.. Such extraction of features is typically manual. We contend that manual extraction of features may not be the best way to tackle text subtleties that characteristically prevail in complex classification tasks like sentiment analysis and sarcasm detection, and that even the extraction and choice of features should be delegated to the learning system. We introduce a framework to automatically extract cognitive features from the eye-movement / gaze data of human readers reading the text and use them as features along with textual features for the tasks of sentiment polarity and sarcasm detection. Our proposed framework is based on Convolutional Neural Network (CNN). The CNN learns features from both gaze and text and uses them to classify the input text. We test our technique on published sentiment and sarcasm labeled datasets, enriched with gaze information, to show that using a combination of automatically learned text and gaze features often yields better classification performance over (i) CNN based systems that rely on text input alone and (ii) existing systems that rely on handcrafted gaze and textual features.", "title": "" }, { "docid": "neg:1840290_12", "text": "Bidirectional path tracing (BDPT) can render highly realistic scenes with complicated lighting scenarios. 
The Light Vertex Cache (LVC) based BDPT method by Davidovic et al. [Davidovič et al. 2014] provided good performance on scenes with simple materials in a progressive rendering scenario. In this paper, we propose a new bidirectional path tracing formulation based on the LVC approach that handles scenes with complex, layered materials efficiently on the GPU. We achieve coherent material evaluation while conserving GPU memory requirements using sorting. We propose a modified method for selecting light vertices using the contribution importance which improves the image quality for a given amount of work. Progressive rendering can empower artists in the production pipeline to iterate and preview their work quickly. We hope the work presented here will enable the use of GPUs in the production pipeline with complex materials and complicated lighting scenarios.", "title": "" }, { "docid": "neg:1840290_13", "text": "Real estate appraisal, which is the process of estimating the price for real estate properties, is crucial for both buyers and sellers as the basis for negotiation and transaction. Traditionally, the repeat sales model has been widely adopted to estimate real estate prices. However, it depends on the design and calculation of a complex economic-related index, which is challenging to estimate accurately. Today, real estate brokers provide easy access to detailed online information on real estate properties to their clients. We are interested in estimating the real estate price from these large amounts of easily accessed data. In particular, we analyze the prediction power of online house pictures, which is one of the key factors for online users to make a potential visiting decision. The development of robust computer vision algorithms makes the analysis of visual content possible. In this paper, we employ a recurrent neural network to predict real estate prices using the state-of-the-art visual features. The experimental results indicate that our model outperforms several other state-of-the-art baseline algorithms in terms of both mean absolute error and mean absolute percentage error.", "title": "" }, { "docid": "neg:1840290_14", "text": "Joint image filters leverage the guidance image as a prior and transfer the structural details from the guidance image to the target image for suppressing noise or enhancing spatial resolution. Existing methods either rely on various explicit filter constructions or hand-designed objective functions, thereby making it difficult to understand, improve, and accelerate these filters in a coherent framework. In this paper, we propose a learning-based approach for constructing joint filters based on Convolutional Neural Networks. In contrast to existing methods that consider only the guidance image, the proposed algorithm can selectively transfer salient structures that are consistent with both guidance and target images. We show that the model trained on a certain type of data, e.g., RGB and depth images, generalizes well to other modalities, e.g., flash/non-Flash and RGB/NIR images. We validate the effectiveness of the proposed joint filter through extensive experimental evaluations with state-of-the-art methods.", "title": "" }, { "docid": "neg:1840290_15", "text": "There have been many claims that the Internet represents a new “frictionless market.” Our research empirically analyzes the characteristics of the Internet as a channel for two categories of homogeneous products — books and CDs. 
Using a data set of over 8,500 price observations collected over a period of 15 months, we compare pricing behavior at 41 Internet and conventional retail outlets. We find that prices on the Internet are 9-16% lower than prices in conventional outlets, depending on whether taxes, shipping and shopping costs are included in the price. Additionally, we find that Internet retailers’ price adjustments over time are up to 100 times smaller than conventional retailers’ price adjustments — presumably reflecting lower menu costs in Internet channels. We also find that levels of price dispersion depend importantly on the measures employed. When we simply compare the prices posted by different Internet retailers we find substantial dispersion. Internet retailer prices differ by an average of 33% for books and 25% for CDs. However, when we weight these prices by proxies for market share, we find dispersion is lower in Internet channels than in conventional channels, reflecting the dominance of certain heavily branded retailers. We conclude that while there is lower friction in many dimensions of Internet competition, branding, awareness, and trust remain important sources of heterogeneity among Internet retailers.", "title": "" }, { "docid": "neg:1840290_16", "text": "Dietary restriction has been shown to have several health benefits including increased insulin sensitivity, stress resistance, reduced morbidity, and increased life span. The mechanism remains unknown, but the need for a long-term reduction in caloric intake to achieve these benefits has been assumed. We report that when C57BL6 mice are maintained on an intermittent fasting (alternate-day fasting) dietary-restriction regimen their overall food intake is not decreased and their body weight is maintained. Nevertheless, intermittent fasting resulted in beneficial effects that met or exceeded those of caloric restriction including reduced serum glucose and insulin levels and increased resistance of neurons in the brain to excitotoxic stress. Intermittent fasting therefore has beneficial effects on glucose regulation and neuronal resistance to injury in these mice that are independent of caloric intake.", "title": "" }, { "docid": "neg:1840290_17", "text": "Despite great progress in neuroscience, there are still fundamental unanswered questions about the brain, including the origin of subjective experience and consciousness. Some answers might rely on new physical mechanisms. Given that biophotons have been discovered in the brain, it is interesting to explore if neurons use photonic communication in addition to the well-studied electro-chemical signals. Such photonic communication in the brain would require waveguides. Here we review recent work (S. Kumar, K. Boone, J. Tuszynski, P. Barclay, and C. Simon, Scientific Reports 6, 36508 (2016)) suggesting that myelinated axons could serve as photonic waveguides. The light transmission in the myelinated axon was modeled, taking into account its realistic imperfections, and experiments were proposed both in vivo and in vitro to test this hypothesis. Potential implications for quantum biology are discussed.", "title": "" }, { "docid": "neg:1840290_18", "text": "The aim of this study was to develop a tool to measure the knowledge of nurses on pressure ulcer prevention. PUKAT 2·0 is a revised and updated version of the Pressure Ulcer Knowledge Assessment Tool (PUKAT) developed in 2010 at Ghent University, Belgium. 
The updated version was developed using state-of-the-art techniques to establish evidence concerning validity and reliability. Face and content validity were determined through a Delphi procedure including both experts from the European Pressure Ulcer Advisory Panel (EPUAP) and the National Pressure Ulcer Advisory Panel (NPUAP) (n = 15). A subsequent psychometric evaluation of 342 nurses and nursing students evaluated the item difficulty, discriminating power and quality of the response alternatives. Furthermore, construct validity was established through a test-retest procedure and the known-groups technique. The content validity was good and the difficulty level moderate. The discernment was found to be excellent: all groups with a (theoretically expected) higher level of expertise had a significantly higher score than the groups with a (theoretically expected) lower level of expertise. The stability of the tool is sufficient (Intraclass Correlation Coefficient = 0·69). The PUKAT 2·0 demonstrated good psychometric properties and can be used and disseminated internationally to assess knowledge about pressure ulcer prevention.", "title": "" }, { "docid": "neg:1840290_19", "text": "Article history: Received 20 February 2013 Received in revised form 30 July 2013 Accepted 11 September 2013 Available online 21 September 2013", "title": "" } ]
1840291
A review of methods for automatic understanding of natural language mathematical problems
[ { "docid": "pos:1840291_0", "text": "Recent experiments in programming natural language question-answering systems are reviewed to summarize the methods that have been developed for syntactic, semantic, and logical analysis of English strings. It is concluded that at least minimally effective techniques have been devised for answering questions from natural language subsets in small scale experimental systems and that a useful paradigm has evolved to guide research efforts in the field. Current approaches to semantic analysis and logical inference are seen to be effective beginnings but of questionable generality with respect either to subtle aspects of meaning or to applications over large subsets of English. Generalizing from current small-scale experiments to language-processing systems based on dictionaries with thousands of entries—with correspondingly large grammars and semantic systems—may entail a new order of complexity and require the invention and development of entirely different approaches to semantic analysis and question answering.", "title": "" } ]
[ { "docid": "neg:1840291_0", "text": "The purpose of this study was to investigate the effects of training muscle groups 1 day per week using a split-body routine (SPLIT) vs. 3 days per week using a total-body routine (TOTAL) on muscular adaptations in well-trained men. Subjects were 20 male volunteers (height = 1.76 ± 0.05 m; body mass = 78.0 ± 10.7 kg; age = 23.5 ± 2.9 years) recruited from a university population. Participants were pair matched according to baseline strength and then randomly assigned to 1 of the 2 experimental groups: a SPLIT, where multiple exercises were performed for a specific muscle group in a session with 2-3 muscle groups trained per session (n = 10) or a TOTAL, where 1 exercise was performed per muscle group in a session with all muscle groups trained in each session (n = 10). Subjects were tested pre- and poststudy for 1 repetition maximum strength in the bench press and squat, and muscle thickness (MT) of forearm flexors, forearm extensors, and vastus lateralis. Results showed significantly greater increases in forearm flexor MT for TOTAL compared with SPLIT. No significant differences were noted in maximal strength measures. The findings suggest a potentially superior hypertrophic benefit to higher weekly resistance training frequencies.", "title": "" }, { "docid": "neg:1840291_1", "text": "Fingerprint matching systems generally use four types of representation schemes: grayscale image, phase image, skeleton image, and minutiae, among which minutiae-based representation is the most widely adopted one. The compactness of minutiae representation has created an impression that the minutiae template does not contain sufficient information to allow the reconstruction of the original grayscale fingerprint image. This belief has now been shown to be false; several algorithms have been proposed that can reconstruct fingerprint images from minutiae templates. These techniques try to either reconstruct the skeleton image, which is then converted into the grayscale image, or reconstruct the grayscale image directly from the minutiae template. However, they have a common drawback: Many spurious minutiae not included in the original minutiae template are generated in the reconstructed image. Moreover, some of these reconstruction techniques can only generate a partial fingerprint. In this paper, a novel fingerprint reconstruction algorithm is proposed to reconstruct the phase image, which is then converted into the grayscale image. The proposed reconstruction algorithm not only gives the whole fingerprint, but the reconstructed fingerprint contains very few spurious minutiae. Specifically, a fingerprint image is represented as a phase image which consists of the continuous phase and the spiral phase (which corresponds to minutiae). An algorithm is proposed to reconstruct the continuous phase from minutiae. The proposed reconstruction algorithm has been evaluated with respect to the success rates of type-I attack (match the reconstructed fingerprint against the original fingerprint) and type-II attack (match the reconstructed fingerprint against different impressions of the original fingerprint) using a commercial fingerprint recognition system. 
Given the reconstructed image from our algorithm, we show that both types of attacks can be successfully launched against a fingerprint recognition system.", "title": "" }, { "docid": "neg:1840291_2", "text": "In this study, we found that the optimum take-off angle for a long jumper may be predicted by combining the equation for the range of a projectile in free flight with the measured relations between take-off speed, take-off height and take-off angle for the athlete. The prediction method was evaluated using video measurements of three experienced male long jumpers who performed maximum-effort jumps over a wide range of take-off angles. To produce low take-off angles the athletes used a long and fast run-up, whereas higher take-off angles were produced using a progressively shorter and slower run-up. For all three athletes, the take-off speed decreased and the take-off height increased as the athlete jumped with a higher take-off angle. The calculated optimum take-off angles were in good agreement with the athletes' competition take-off angles.", "title": "" }, { "docid": "neg:1840291_3", "text": "We consider the problems of learning forward models that map state to high-dimensional images and inverse models that map high-dimensional images to state in robotics. Specifically, we present a perceptual model for generating video frames from state with deep networks, and provide a framework for its use in tracking and prediction tasks. We show that our proposed model greatly outperforms standard deconvolutional methods and GANs for image generation, producing clear, photo-realistic images. We also develop a convolutional neural network model for state estimation and compare the result to an Extended Kalman Filter to estimate robot trajectories. We validate all models on a real robotic system.", "title": "" }, { "docid": "neg:1840291_4", "text": "ABSTRACT: Nonlinear control problem for a missile autopilot is quick adaptation and minimizing the desired acceleration to missile nonlinear model. For this several missile controllers are provided which are on the basis of nonlinear control or design of linear control for the linear missile system. In this paper a linear control of dynamic matrix type is proposed for the linear model of missile. In the first section, an approximate two degrees of freedom missile model, known as Horton model, is introduced. Then, the nonlinear model is converted into observable and controllable model base on the feedback linear rule of input-state mode type. Finally for design of control model, the dynamic matrix flight control, which is one of the linear predictive control design methods on the basis of system step response information, is used. This controller is a recursive method which calculates the development of system input by definition and optimization of a cost function and using system dynamic matrix. So based on the applied inputs and previous output information, the missile acceleration would be calculated. Unlike other controllers, this controller doesn’t require an interaction effect and accurate model. Although, it has predicting and controlling horizon, there isn’t such horizons in non-predictive methods.", "title": "" }, { "docid": "neg:1840291_5", "text": "IT projects have certain features that make them different from other engineering projects. These include increased complexity and higher chances of project failure. 
To increase the chances of an IT project to be perceived as successful by all the parties involved in the project from its conception, development and implementation, it is necessary to identify at the outset of the project what the important factors influencing that success are. Current methodologies and tools used for identifying, classifying and evaluating the indicators of success in IT projects have several limitations that can be overcome by employing the new methodology presented in this paper. This methodology is based on using Fuzzy Cognitive Maps (FCM) for mapping success, modelling Critical Success Factors (CSFs) perceptions and the relations between them. This is an area where FCM has never been applied before. The applicability of the FCM methodology is demonstrated through a case study based on a new project idea, the Mobile Payment System (MPS) Project, related to the fast evolving world of mobile telecommunications.", "title": "" }, { "docid": "neg:1840291_6", "text": "Finding subgraph isomorphisms is an important problem in many applications which deal with data modeled as graphs. While this problem is NP-hard, in recent years, many algorithms have been proposed to solve it in a reasonable time for real datasets using different join orders, pruning rules, and auxiliary neighborhood information. However, since they have not been empirically compared one another in most research work, it is not clear whether the later work outperforms the earlier work. Another problem is that reported comparisons were often done using the original authors’ binaries which were written in different programming environments. In this paper, we address these serious problems by re-implementing five state-of-the-art subgraph isomorphism algorithms in a common code base and by comparing them using many real-world datasets and their query loads. Through our in-depth analysis of experimental results, we report surprising empirical findings.", "title": "" }, { "docid": "neg:1840291_7", "text": "This chapter focuses on the why, what, and how of bodily expression analysis for automatic affect recognition. It first asks the question of ‘why bodily expression?’ and attempts to find answers by reviewing the latest bodily expression perception literature. The chapter then turns its attention to the question of ‘what are the bodily expressions recognized automatically?’ by providing an overview of the automatic bodily expression recognition literature. The chapter then provides representative answers to how bodily expression analysis can aid affect recognition by describing three case studies: (1) data acquisition and annotation of the first publicly available database of affective face-and-body displays (i.e., the FABO database); (2) a representative approach for affective state recognition from face-and-body display by detecting the space-time interest points in video and using Canonical Correlation Analysis (CCA) for fusion, and (3) a representative approach for explicit detection of the temporal phases (segments) of affective states (start/end of the expression and its subdivision into phases such as neutral, onset, apex, and offset) from bodily expressions. The chapter concludes by summarizing the main challenges faced and discussing how we can advance the state of the art in the field.", "title": "" }, { "docid": "neg:1840291_8", "text": "The distinct protein aggregates that are found in Alzheimer's, Parkinson's, Huntington's and prion diseases seem to cause these disorders. 
Small intermediates — soluble oligomers — in the aggregation process can confer synaptic dysfunction, whereas large, insoluble deposits might function as reservoirs of the bioactive oligomers. These emerging concepts are exemplified by Alzheimer's disease, in which amyloid β-protein oligomers adversely affect synaptic structure and plasticity. Findings in other neurodegenerative diseases indicate that a broadly similar process of neuronal dysfunction is induced by diffusible oligomers of misfolded proteins.", "title": "" }, { "docid": "neg:1840291_9", "text": "The toxicity and repellency of the bioactive chemicals of clove (Syzygium aromaticum) powder, eugenol, eugenol acetate, and beta-caryophyllene were evaluated against workers of the red imported fire ant, Solenopsis invicta Buren. Clove powder applied at 3 and 12 mg/cm2 provided 100% ant mortality within 6 h, and repelled 99% within 3 h. Eugenol was the fastest acting compound against red imported fire ant compared with eugenol acetate, beta-caryophyllene, and clove oil. The LT50 values inclined exponentially with the increase in the application rate of the chemical compounds tested. However, repellency did not increase with the increase in the application rate of the chemical compounds tested, but did with the increase in exposure time. Eugenol, eugenol acetate, as well as beta-caryophyllene and clove oil may provide another tool for red imported fire ant integrated pest management, particularly in situations where conventional insecticides are inappropriate.", "title": "" }, { "docid": "neg:1840291_10", "text": "Design of fault tolerant systems is a popular subject in flight control system design. In particular, adaptive control approach has been successful in recovering aircraft in a wide variety of different actuator/sensor failure scenarios. However, if the aircraft goes under a severe actuator failure, control system might not be able to adapt fast enough to changes in the dynamics, which would result in performance degradation or even loss of the aircraft. Inspired by the recent success of deep learning applications, this work builds a hybrid recurren-t/convolutional neural network model to estimate adaptation parameters for aircraft dynamics under actuator/engine faults. The model is trained offline from a database of different failure scenarios. In case of an actuator/engine failure, the model identifies adaptation parameters and feeds this information to the adaptive control system, which results in significantly faster convergence of the controller coefficients. Developed control system is implemented on a nonlinear 6-DOF F-16 aircraft, and the results show that the proposed architecture is especially beneficial in severe failure scenarios.", "title": "" }, { "docid": "neg:1840291_11", "text": "We initiate the formal study of functional encryption by giving precise definitions of the concept and its security. Roughly speaking, functional encryption supports restricted secret keys that enable a key holder to learn a specific function of encrypted data, but learn nothing else about the data. For example, given an encrypted program the secret key may enable the key holder to learn the output of the program on a specific input without learning anything else about the program. We show that defining security for functional encryption is non-trivial. First, we show that a natural game-based definition is inadequate for some functionalities. 
We then present a natural simulation-based definition and show that it (provably) cannot be satisfied in the standard model, but can be satisfied in the random oracle model. We show how to map many existing concepts to our formalization of functional encryption and conclude with several interesting open problems in this young area.", "title": "" }, { "docid": "neg:1840291_12", "text": "This paper discusses the active and reactive power control method for a modular multilevel converter (MMC) based grid-connected PV system. The voltage vector space analysis is performed by using average value models for the feasibility analysis of reactive power compensation (RPC). The proposed double-loop control strategy enables the PV system to handle unidirectional active power flow and bidirectional reactive power flow. Experiments have been performed on a laboratory-scaled modular multilevel PV inverter. The experimental results verify the correctness and feasibility of the proposed strategy.", "title": "" }, { "docid": "neg:1840291_13", "text": "We propose a category-independent method to produce a bag of regions and rank them, such that top-ranked regions are likely to be good segmentations of different objects. Our key objectives are completeness and diversity: Every object should have at least one good proposed region, and a diverse set should be top-ranked. Our approach is to generate a set of segmentations by performing graph cuts based on a seed region and a learned affinity function. Then, the regions are ranked using structured learning based on various cues. Our experiments on the Berkeley Segmentation Data Set and Pascal VOC 2011 demonstrate our ability to find most objects within a small bag of proposed regions.", "title": "" }, { "docid": "neg:1840291_14", "text": "This paper presents an algorithm for the detection of micro-crack defects in the multicrystalline solar cells. This detection goal is very challenging due to the presence of various types of image anomalies like dislocation clusters, grain boundaries, and other artifacts due to the spurious discontinuities in the gray levels. In this work, an algorithm featuring an improved anisotropic diffusion filter and advanced image segmentation technique is proposed. The methods and procedures are assessed using 600 electroluminescence images, comprising 313 intact and 287 defected samples. Results indicate that the methods and procedures can accurately detect micro-crack in solar cells with sensitivity, specificity, and accuracy averaging at 97%, 80%, and 88%, respectively.", "title": "" }, { "docid": "neg:1840291_15", "text": "A single-tube 5' nuclease multiplex PCR assay was developed on the ABI 7700 Sequence Detection System (TaqMan) for the detection of Neisseria meningitidis, Haemophilus influenzae, and Streptococcus pneumoniae from clinical samples of cerebrospinal fluid (CSF), plasma, serum, and whole blood. Capsular transport (ctrA), capsulation (bexA), and pneumolysin (ply) gene targets specific for N. meningitidis, H. influenzae, and S. pneumoniae, respectively, were selected.
Using sequence-specific fluorescent-dye-labeled probes and continuous real-time monitoring, accumulation of amplified product was measured. Sensitivity was assessed using clinical samples (CSF, serum, plasma, and whole blood) from culture-confirmed cases for the three organisms. The respective sensitivities (as percentages) for N. meningitidis, H. influenzae, and S. pneumoniae were 88.4, 100, and 91.8. The primer sets were 100% specific for the selected culture isolates. The ctrA primers amplified meningococcal serogroups A, B, C, 29E, W135, X, Y, and Z; the ply primers amplified pneumococcal serotypes 1, 2, 3, 4, 5, 6, 7, 8, 9, 10A, 11A, 12, 14, 15B, 17F, 18C, 19, 20, 22, 23, 24, 31, and 33; and the bexA primers amplified H. influenzae types b and c. Coamplification of two target genes without a loss of sensitivity was demonstrated. The multiplex assay was then used to test a large number (n = 4,113) of culture-negative samples for the three pathogens. Cases of meningococcal, H. influenzae, and pneumococcal disease that had not previously been confirmed by culture were identified with this assay. The ctrA primer set used in the multiplex PCR was found to be more sensitive (P < 0.0001) than the ctrA primers that had been used for meningococcal PCR testing at that time.", "title": "" }, { "docid": "neg:1840291_16", "text": "The success of deep neural networks (DNNs) is heavily dependent on the availability of labeled data. However, obtaining labeled data is a big challenge in many real-world problems. In such scenarios, a DNN model can leverage labeled and unlabeled data from a related domain, but it has to deal with the shift in data distributions between the source and the target domains. In this paper, we study the problem of classifying social media posts during a crisis event (e.g., Earthquake). For that, we use labeled and unlabeled data from past similar events (e.g., Flood) and unlabeled data for the current event. We propose a novel model that performs adversarial learning based domain adaptation to deal with distribution drifts and graph based semi-supervised learning to leverage unlabeled data within a single unified deep learning framework. Our experiments with two real-world crisis datasets collected from Twitter demonstrate significant improvements over several baselines.", "title": "" }, { "docid": "neg:1840291_17", "text": "A backstepping approach is proposed in this paper to cope with the failure of a quadrotor propeller. The presented methodology supposes to turn off also the motor which is opposite to the broken one. In this way, a birotor configuration with fixed propellers is achieved. The birotor is controlled to follow a planned emergency landing trajectory. Theory shows that the birotor can reach any point in the Cartesian space losing the possibility to control the yaw angle. Simulation tests are employed to validate the proposed controller design.", "title": "" }, { "docid": "neg:1840291_18", "text": "General as well as the MSW management in Thailand is reviewed in this paper. Topics include the MSW generation, sources, composition, and trends. The review, then, moves to sustainable solutions for MSW management, sustainable alternative approaches with an emphasis on an integrated MSW management. Information of waste in Thailand is also given at the beginning of this paper for better understanding of later contents. It is clear that no one single method of MSW disposal can deal with all materials in an environmentally sustainable way. 
As such, a suitable approach in MSW management should be an integrated approach that could deliver both environmental and economic sustainability. With increasing environmental concerns, the integrated MSW management system has a potential to maximize the useable waste materials as well as produce energy as a by-product. In Thailand, the compositions of waste (86%) are mainly organic waste, paper, plastic, glass, and metal. As a result, the waste in Thailand is suitable for an integrated MSW management. Currently, the Thai national waste management policy starts to encourage the local administrations to gather into clusters to establish central MSW disposal facilities with suitable technologies and reducing the disposal cost based on the amount of MSW generated. Keywords— MSW, management, sustainable, Thailand", "title": "" }, { "docid": "neg:1840291_19", "text": "This paper describes the hardware and software design of the kidsize humanoid robot systems of the Darmstadt Dribblers in 2007. The robots are used as a vehicle for research in control of locomotion and behavior of autonomous humanoid robots and robot teams with many degrees of freedom and many actuated joints. The Humanoid League of RoboCup provides an ideal testbed for such aspects of dynamics in motion and autonomous behavior as the problem of generating and maintaining statically or dynamically stable bipedal locomotion is predominant for all types of vision guided motions during a soccer game. A modular software architecture as well as further technologies have been developed for efficient and effective implementation and test of modules for sensing, planning, behavior, and actions of humanoid robots.", "title": "" } ]
1840292
Sequence Discriminative Training for Offline Handwriting Recognition by an Interpolated CTC and Lattice-Free MMI Objective Function
[ { "docid": "pos:1840292_0", "text": "Recurrent neural networks (RNNs) with Long Short-Term memory cells currently hold the best known results in unconstrained handwriting recognition. We show that their performance can be greatly improved using dropout - a recently proposed regularization method for deep architectures. While previous works showed that dropout gave superior performance in the context of convolutional networks, it had never been applied to RNNs. In our approach, dropout is carefully used in the network so that it does not affect the recurrent connections, hence the power of RNNs in modeling sequences is preserved. Extensive experiments on a broad range of handwritten databases confirm the effectiveness of dropout on deep architectures even when the network mainly consists of recurrent and shared connections.", "title": "" }, { "docid": "pos:1840292_1", "text": "We describe Microsoft's conversational speech recognition system, in which we combine recent developments in neural-network-based acoustic and language modeling to advance the state of the art on the Switchboard recognition task. Inspired by machine learning ensemble techniques, the system uses a range of convolutional and recurrent neural networks. I-vector modeling and lattice-free MMI training provide significant gains for all acoustic model architectures. Language model rescoring with multiple forward and backward running RNNLMs, and word posterior-based system combination provide a 20% boost. The best single system uses a ResNet architecture acoustic model with RNNLM rescoring, and achieves a word error rate of 6.9% on the NIST 2000 Switchboard task. The combined system has an error rate of 6.2%, representing an improvement over previously reported results on this benchmark task.", "title": "" } ]
[ { "docid": "neg:1840292_0", "text": "Spectral graph partitioning provides a powerful approach to image segmentation. We introduce an alternate idea that finds partitions with a small isoperimetric constant, requiring solution to a linear system rather than an eigenvector problem. This approach produces the high quality segmentations of spectral methods, but with improved speed and stability.", "title": "" }, { "docid": "neg:1840292_1", "text": "Rolling upgrade consists of upgrading progressively the servers of a distributed system to reduce service downtime.Upgrading a subset of servers requires a well-engineered cluster membership protocol to maintain, in the meantime, the availability of the system state. Existing cluster membership reconfigurations, like CoreOS etcd, rely on a primary not only for reconfiguration but also for storing information. At any moment, there can be at most one primary, whose replacement induces disruption. We propose Rollup, a non-disruptive rolling upgrade protocol with a fast consensus-based reconfiguration. Rollup relies on a candidate leader only for the reconfiguration and scalable biquorums for service requests. While Rollup implements a non-disruptive cluster membership protocol, it does not offer a full-fledged coordination service. We analyzed Rollup theoretically and experimentally on an isolated network of 26 physical machines and an Amazon EC2 cluster of 59 virtual machines. Our results show an 8-fold speedup compared to a rolling upgrade based on a primary for reconfiguration.", "title": "" }, { "docid": "neg:1840292_2", "text": "Leadership is one of the most discussed and important topics in the social sciences especially in organizational theory and management. Generally leadership is the process of influencing group activities towards the achievement of goals. A lot of researches have been conducted in this area .some researchers investigated individual characteristics such as demographics, skills and abilities, and personality traits, predict leadership effectiveness. Different theories, leadership styles and models have been propounded to provide explanations on the leadership phenomenon and to help leaders influence their followers towards achieving organizational goals. Today with the change in organizations and business environment the leadership styles and theories have been changed. In this paper, we review the new leadership theories and styles that are new emerging and are according to the need of the organizations. Leadership styles and theories have been investigated to get the deep understanding of the new trends and theories of the leadership to help the managers and organizations choose appropriate style of leadership. key words: new emerging styles, new theories, leadership, organization", "title": "" }, { "docid": "neg:1840292_3", "text": "In this paper, an implementation of an extended target tracking filter using measurements from high-resolution automotive Radio Detection and Ranging (RADAR) is proposed. Our algorithm uses the Cartesian point measurements from the target's contour as well as the Doppler range rate provided by the RADAR to track a target vehicle's position, orientation, and translational and rotational velocities. We also apply a Gaussian Process (GP) to model the vehicle's shape. To cope with the nonlinear measurement equation, we implement an Extended Kalman Filter (EKF) and provide the necessary derivatives for the Doppler measurement. 
We then evaluate the effectiveness of incorporating the Doppler rate on simulations and on 2 sets of real data.", "title": "" }, { "docid": "neg:1840292_4", "text": "We propose a novel adversarial multi-task learning scheme, aiming at actively curtailing the inter-talker feature variability while maximizing its senone discriminability so as to enhance the performance of a deep neural network (DNN) based ASR system. We call the scheme speaker-invariant training (SIT). In SIT, a DNN acoustic model and a speaker classifier network are jointly optimized to minimize the senone (tied triphone state) classification loss, and simultaneously mini-maximize the speaker classification loss. A speaker-invariant and senone-discriminative deep feature is learned through this adversarial multi-task learning. With SIT, a canonical DNN acoustic model with significantly reduced variance in its output probabilities is learned with no explicit speaker-independent (SI) transformations or speaker-specific representations used in training or testing. Evaluated on the CHiME-3 dataset, the SIT achieves 4.99% relative word error rate (WER) improvement over the conventional SI acoustic model. With additional unsupervised speaker adaptation, the speaker-adapted (SA) SIT model achieves 4.86% relative WER gain over the SA SI acoustic model.", "title": "" }, { "docid": "neg:1840292_5", "text": "Video affective content analysis has been an active research area in recent decades, since emotion is an important component in the classification and retrieval of videos. Video affective content analysis can be divided into two approaches: direct and implicit. Direct approaches infer the affective content of videos directly from related audiovisual features. Implicit approaches, on the other hand, detect affective content from videos based on an automatic analysis of a user's spontaneous response while consuming the videos. This paper first proposes a general framework for video affective content analysis, which includes video content, emotional descriptors, and users' spontaneous nonverbal responses, as well as the relationships between the three. Then, we survey current research in both direct and implicit video affective content analysis, with a focus on direct video affective content analysis. Lastly, we identify several challenges in this field and put forward recommendations for future research.", "title": "" }, { "docid": "neg:1840292_6", "text": "In this paper, we study a novel approach for named entity recognition (NER) and mention detection in natural language processing. Instead of treating NER as a sequence labelling problem, we propose a new local detection approach, which rely on the recent fixed-size ordinally forgetting encoding (FOFE) method to fully encode each sentence fragment and its left/right contexts into a fixed-size representation. Afterwards, a simple feedforward neural network is used to reject or predict entity label for each individual fragment. The proposed method has been evaluated in several popular NER and mention detection tasks, including the CoNLL 2003 NER task and TAC-KBP2015 and TAC-KBP2016 Trilingual Entity Discovery and Linking (EDL) tasks. Our methods have yielded pretty strong performance in all of these examined tasks. 
This local detection approach has shown many advantages over the traditional sequence labelling methods.", "title": "" }, { "docid": "neg:1840292_7", "text": "Total Quality Management (TQM) has become, according to one source, 'as pervasive a part of business thinking as quarterly financial results,' and yet TQM's role as a strategic resource remains virtually unexamined in strategic management research. Drawing on the resource approach and other theoretical perspectives, this article examines TQM as a potential source of sustainable competitive advantage, reviews existing empirical evidence, and reports findings from a new empirical study of TQM's performance consequences. The findings suggest that most features generally associated with TQM—such as quality training, process improvement, and benchmarking—do not generally produce advantage, but that certain tacit, behavioral, imperfectly imitable features—such as open culture, employee empowerment, and executive commitment—can produce advantage. The author concludes that these tacit resources, and not TQM tools and techniques, drive TQM success, and that organizations that acquire them can outperform competitors with or without the accompanying TQM ideology.", "title": "" }, { "docid": "neg:1840292_8", "text": "In this paper, a progressive learning technique for multi-class classification is proposed. This newly developed learning technique is independent of the number of class constraints and it can learn new classes while still retaining the knowledge of previous classes. Whenever a new class (non-native to the knowledge learnt thus far) is encountered, the neural network structure gets remodeled automatically by facilitating new neurons and interconnections, and the parameters are calculated in such a way that it retains the knowledge learnt thus far. This technique is suitable for realworld applications where the number of classes is often unknown and online learning from real-time data is required. The consistency and the complexity of the progressive learning technique are analyzed. Several standard datasets are used to evaluate the performance of the developed technique. A comparative study shows that the developed technique is superior. Key Words—Classification, machine learning, multi-class, sequential learning, progressive learning.", "title": "" }, { "docid": "neg:1840292_9", "text": "We present the new HippoCampus micro underwater vehicle, first introduced in [1]. It is designed for monitoring confined fluid volumes. These tightly constrained settings demand agile vehicle dynamics. Moreover, we adapt a robust attitude control scheme for aerial drones to the underwater domain. We demonstrate the performance of the controller with a challenging maneuver. A submerged Furuta pendulum is stabilized by HippoCampus after a swing-up. The experimental results reveal the robustness of the control method, as the system quickly recovers from strong physical disturbances, which are applied to the system.", "title": "" }, { "docid": "neg:1840292_10", "text": "This paper presents a proportional integral derivative (PID) controller with a derivative filter coefficient to control a twin rotor multiple input multiple output system (TRMS), which is a nonlinear system with two degrees of freedom and cross couplings. The mathematical modeling of TRMS is done using MATLAB/Simulink. The simulation results are compared with the results of conventional PID controller. 
The results of the proposed PID controller with derivative filter show better transient and steady-state response than the conventional PID controller.", "title": "" }, { "docid": "neg:1840292_11", "text": "A wide variety of non-photorealistic rendering techniques make use of random variation in the placement or appearance of primitives. In order to avoid the \"shower-door\" effect, this random variation should move with the objects in the scene. Here we present coherent noise tailored to this purpose. We compute the coherent noise with a specialized filter that uses the depth and velocity fields of a source sequence. The computation is fast and suitable for interactive applications like games.", "title": "" }, { "docid": "neg:1840292_12", "text": "Resonant converters which use a small DC bus capacitor to achieve high power factor are desirable for low cost Inductive Power Transfer (IPT) applications but produce amplitude modulated waveforms which are then present on any coupled load. The modulated coupled voltage produces pulse currents which could be used for battery charging purposes. In order to understand the effects of such pulse charging, two Lithium Iron Phosphate (LiFePO4) batteries underwent 2000 cycles of charge and discharge cycling utilizing both pulse and DC charging profiles. The cycling results show that such pulse charging is comparable to conventional DC charging and may be suitable for low cost battery charging applications without impacting battery life.", "title": "" }, { "docid": "neg:1840292_13", "text": "Curriculum learning (CL) or self-paced learning (SPL) represents a recently proposed learning regime inspired by the learning process of humans and animals that gradually proceeds from easy to more complex samples in training. The two methods share a similar conceptual learning paradigm, but differ in specific learning schemes. In CL, the curriculum is predetermined by prior knowledge, and remains fixed thereafter. Therefore, this type of method heavily relies on the quality of prior knowledge while ignoring feedback about the learner. In SPL, the curriculum is dynamically determined to adjust to the learning pace of the learner. However, SPL is unable to deal with prior knowledge, rendering it prone to overfitting. In this paper, we discover the missing link between CL and SPL, and propose a unified framework named self-paced curriculum learning (SPCL). SPCL is formulated as a concise optimization problem that takes into account both prior knowledge known before training and the learning progress during training. In comparison to human education, SPCL is analogous to an “instructor-student-collaborative” learning mode, as opposed to “instructor-driven” in CL or “student-driven” in SPL. Empirically, we show the advantage of SPCL on two tasks. Curriculum learning (Bengio et al. 2009) and self-paced learning (Kumar, Packer, and Koller 2010) have been attracting increasing attention in the field of machine learning and artificial intelligence. Both learning paradigms are inspired by the learning principle underlying the cognitive process of humans and animals, which generally starts with learning easier aspects of a task, and then gradually takes more complex examples into consideration. The intuition can be explained by analogy to human education, in which a pupil is supposed to understand elementary algebra before he or she can learn more advanced algebra topics.
This learning paradigm has been empirically demonstrated to be instrumental in avoiding bad local minima and in achieving a better generalization result (Khan, Zhu, and Mutlu 2011; Basu and Christensen 2013; Tang et al. 2012). A curriculum determines a sequence of training samples which essentially corresponds to a list of samples ranked in ascending order of learning difficulty. A major disparity between curriculum learning (CL) and self-paced learning (SPL) lies in the derivation of the curriculum. In CL, the curriculum is assumed to be given by an oracle beforehand, and remains fixed thereafter. In SPL, the curriculum is dynamically generated by the learner itself, according to what the learner has already learned. The advantage of CL includes the flexibility to incorporate prior knowledge from various sources. Its drawback stems from the fact that the curriculum design is determined independently of the subsequent learning, which may result in inconsistency between the fixed curriculum and the dynamically learned models. From the optimization perspective, since the learning proceeds iteratively, there is no guarantee that the predetermined curriculum can even lead to a converged solution. SPL, on the other hand, formulates the learning problem as a concise biconvex problem, where the curriculum design is embedded and jointly learned with model parameters. Therefore, the learned model is consistent. However, SPL is limited in incorporating prior knowledge into learning, rendering it prone to overfitting. Ignoring prior knowledge is less reasonable when reliable prior information is available. Since both methods have their advantages, it is difficult to judge which one is better in practice. In this paper, we discover the missing link between CL and SPL. We formally propose a unified framework called Self-paced Curriculum Learning (SPCL). SPCL represents a general learning paradigm that combines the merits from both CL and SPL. On one hand, it inherits and further generalizes the theory of SPL. On the other hand, SPCL addresses the drawback of SPL by introducing a flexible way to incorporate prior knowledge. This paper also discusses concrete implementations within the proposed framework, which can be useful for solving various problems. This paper offers a compelling insight into the relationship between the existing CL and SPL methods. Their relation can be intuitively explained in the context of human education, in which SPCL represents an “instructor-student collaborative” learning paradigm, as opposed to “instructor-driven” in CL or “student-driven” in SPL. In SPCL, instructors provide prior knowledge on a weak learning sequence of samples, while leaving students the freedom to decide the actual curriculum according to their learning pace. Since an optimal curriculum for the instructor may not necessarily be optimal for all students, we hypothesize that given reasonable prior knowledge, the curriculum devised by instructors and students together can be expected to be better than the curriculum designed by either party alone. Empirically, we substantiate this hypothesis by demonstrating that the proposed method outperforms both CL and SPL on two tasks. The rest of the paper is organized as follows. We first briefly introduce the background knowledge on CL and SPL. Then we propose the model and the algorithm of SPCL.
After that, we discuss concrete implementations of SPCL. The experimental results and conclusions are presented in the last two sections. Background Knowledge", "title": "" }, { "docid": "neg:1840292_14", "text": "In this study, we tried to find a solution for inpainting problem using deep convolutional autoencoders. A new training approach has been proposed as an alternative to the Generative Adversarial Networks. The neural network that designed for inpainting takes an image, which the certain part of its center is extracted, as an input then it attempts to fill the blank region. During the training phase, a distinct deep convolutional neural network is used and it is called Advisor Network. We show that the features extracted from intermediate layers of the Advisor Network, which is trained on a different dataset for classification, improves the performance of the autoencoder.", "title": "" }, { "docid": "neg:1840292_15", "text": "BACKGROUND\nMango is a highly perishable seasonal fruit and large quantities are wasted during the peak season as a result of poor postharvest handling procedures. Processing surplus mango fruits into flour to be used as a functional ingredient appears to be a good preservation method to ensure its extended consumption.\n\n\nRESULTS\nIn the present study, the chemical composition, bioactive/antioxidant compounds and functional properties of green and ripe mango (Mangifera indica var. Chokanan) peel and pulp flours were evaluated. Compared to commercial wheat flour, mango flours were significantly low in moisture and protein, but were high in crude fiber, fat and ash content. Mango flour showed a balance between soluble and insoluble dietary fiber proportions, with total dietary fiber content ranging from 3.2 to 5.94 g kg⁻¹. Mango flours exhibited high values for bioactive/antioxidant compounds compared to wheat flour. The water absorption capacity and oil absorption capacity of mango flours ranged from 0.36 to 0.87 g kg⁻¹ and from 0.18 to 0.22 g kg⁻¹, respectively.\n\n\nCONCLUSION\nResults of this study showed mango peel flour to be a rich source of dietary fiber with good antioxidant and functional properties, which could be a useful ingredient for new functional food formulations.", "title": "" }, { "docid": "neg:1840292_16", "text": "The danger of SQL injections has been known for more than a decade but injection attacks have led the OWASP top 10 for years and still are one of the major reasons for devastating attacks on web sites. As about 24% percent of the top 10 million web sites are built upon the content management system WordPress, it's no surprise that content management systems in general and WordPress in particular are frequently targeted. To understand how the underlying security bugs can be discovered and exploited by attackers, 199 publicly disclosed SQL injection exploits for WordPress and its plugins have been analyzed. The steps an attacker would take to uncover and utilize these bugs are followed in order to gain access to the underlying database through automated, dynamic vulnerability scanning with well-known, freely available tools. Previous studies have shown that the majority of the security bugs are caused by the same programming errors as 10 years ago and state that the complexity of finding and exploiting them has not increased significantly. Furthermore, they claim that although the complexity has not increased, automated tools still do not detect the majority of bugs. 
The results of this paper show that tools for automated, dynamic vulnerability scanning only play a subordinate role for developing exploits. The reason for this is that only a small percentage of attack vectors can be found during the detection phase. So even if the complexity of exploiting an attack vector has not increased, this attack vector has to be found in the first place, which is the major challenge for this kind of tool. Therefore, from today's perspective, a combination with manual and/or static analysis is essential when testing for security vulnerabilities.", "title": "" }, { "docid": "neg:1840292_17", "text": "We describe Algorithm 2 in detail. Algorithm 2 takes as input the sample set S, the query sequence F, the query sensitivity ∆, the threshold τ, and the stop parameter s. Algorithm 2 outputs the result of each comparison with the threshold. In Algorithm 2, each noisy query output is compared with a noisy threshold at line 4, and the result of the comparison is output. Let ⊤ mean that fk(S) > τ. Algorithm 2 terminates if it outputs ⊤ s times.", "title": "" }, { "docid": "neg:1840292_18", "text": "Dragon's blood is one of the renowned traditional medicines used in different cultures of the world. It has several therapeutic uses: haemostatic, antidiarrhetic, antiulcer, antimicrobial, antiviral, wound healing, antitumor, anti-inflammatory, antioxidant, etc. Besides these medicinal applications, it is used as a coloring material and varnish, and also has applications in folk magic. These red saps and resins are derived from a number of disparate taxa. Despite its wide uses, little research has been done on its true source, quality control and clinical applications. In this review, we have tried to overview the different sources of Dragon's blood, its source-wise chemical constituents and therapeutic uses. In addition, a brief attempt has been made to review the techniques used for its quality control and safety.", "title": "" }, { "docid": "neg:1840292_19", "text": "When automatic plagiarism detection is carried out considering a reference corpus, a suspicious text is compared to a set of original documents in order to relate the plagiarised text fragments to their potential source. One of the biggest difficulties in this task is to locate plagiarised fragments that have been modified (by rewording, insertion or deletion, for example) from the source text. The definition of proper text chunks as comparison units of the suspicious and original texts is crucial for the success of this kind of application. Our experiments with the METER corpus show that the best results are obtained when considering low-level word n-gram comparisons (n = {2, 3}).", "title": "" } ]
1840293
Animated narrative visualization for video clickstream data
[ { "docid": "pos:1840293_0", "text": "Web clickstream data are routinely collected to study how users browse the web or use a service. It is clear that the ability to recognize and summarize user behavior patterns from such data is valuable to e-commerce companies. In this paper, we introduce a visual analytics system to explore the various user behavior patterns reflected by distinct clickstream clusters. In a practical analysis scenario, the system first presents an overview of clickstream clusters using a Self-Organizing Map with Markov chain models. Then the analyst can interactively explore the clusters through an intuitive user interface. He can either obtain summarization of a selected group of data or further refine the clustering result. We evaluated our system using two different datasets from eBay. Analysts who were working on the same data have confirmed the system's effectiveness in extracting user behavior patterns from complex datasets and enhancing their ability to reason.", "title": "" } ]
[ { "docid": "neg:1840293_0", "text": "Wikipedia infoboxes are a valuable source of structured knowledge for global knowledge sharing. However, infobox information is very incomplete and imbalanced among the Wikipedias in different languages. It is a promising but challenging problem to utilize the rich structured knowledge from a source language Wikipedia to help complete the missing infoboxes for a target language. In this paper, we formulate the problem of cross-lingual knowledge extraction from multilingual Wikipedia sources, and present a novel framework, called WikiCiKE, to solve this problem. An instancebased transfer learning method is utilized to overcome the problems of topic drift and translation errors. Our experimental results demonstrate that WikiCiKE outperforms the monolingual knowledge extraction method and the translation-based method.", "title": "" }, { "docid": "neg:1840293_1", "text": "As a first step towards agents learning to communicate about their visual environment, we propose a system that, given visual representations of a referent (CAT) and a context (SOFA), identifies their discriminative attributes, i.e., properties that distinguish them (has_tail). Moreover, although supervision is only provided in terms of discriminativeness of attributes for pairs, the model learns to assign plausible attributes to specific objects (SOFA-has_cushion). Finally, we present a preliminary experiment confirming the referential success of the predicted discriminative attributes.", "title": "" }, { "docid": "neg:1840293_2", "text": "Ultra-wideband radar is an excellent tool for nondestructive examination of walls and highway structures. Therefore often steep edged narrow pulses with rise-, fall-times in the range of 100 ps are used. For digitizing of the reflected pulses a down conversion has to be accomplished. A new low cost sampling down converter with a sampling phase detector for use in ultra-wideband radar applications is presented.", "title": "" }, { "docid": "neg:1840293_3", "text": "This article reports on a helical spring-like piezoresistive graphene strain sensor formed within a microfluidic channel. The helical spring has a tubular hollow structure and is made of a thin graphene layer coated on the inner wall of the channel using an in situ microfluidic casting method. The helical shape allows the sensor to flexibly respond to both tensile and compressive strains in a wide dynamic detection range from 24 compressive strain to 20 tensile strain. Fabrication of the sensor involves embedding a helical thin metal wire with a plastic wrap into a precursor solution of an elastomeric polymer, forming a helical microfluidic channel by removing the wire from cured elastomer, followed by microfluidic casting of a graphene thin layer directly inside the helical channel. The wide dynamic range, in conjunction with mechanical flexibility and stretchability of the sensor, will enable practical wearable strain sensor applications where large strains are often involved.", "title": "" }, { "docid": "neg:1840293_4", "text": "We propose a Dynamic-Spatial-Attention (DSA) Recurrent Neural Network (RNN) for anticipating accidents in dashcam videos (Fig. 1). Our DSA-RNN learns to (1) distribute soft-attention to candidate objects dynamically to gather subtle cues and (2) model the temporal dependencies of all cues to robustly anticipate an accident. 
Anticipating accidents is much less addressed than anticipating events such as changing a lane, making a turn, etc., since accidents are rarely observed and can happen suddenly in many different ways. To overcome these challenges, we (1) utilize a state-of-the-art object detector [3] to detect candidate objects, and (2) incorporate full-frame and object-based appearance and motion features in our model. We also harvest a diverse dataset of 678 dashcam accident videos on the web (Fig. 3). The dataset is unique, since various accidents (e.g., a motorbike hits a car, a car hits another car, etc.) occur in all videos. We manually mark the time-location of accidents and use them as supervision to train and evaluate our method. We show that our method anticipates accidents about 2 seconds before they occur with 80% recall and 56.14% precision. Most importantly, it achieves the highest mean average precision (74.35%) outperforming other baselines without attention or RNN.", "title": "" }, { "docid": "neg:1840293_5", "text": "According to the most common definition, idioms are linguistic expressions whose overall meaning cannot be predicted from the meanings of the constituent parts. Although we agree with the traditional view that there is no complete predictability, we suggest that there is a great deal of systematic conceptual motivation for the meaning of most idioms. Since most idioms are based on conceptual metaphors and metonymies, systematic motivation arises from sets of 'conceptual mappings or correspondences' that obtain between a source and a target domain in the sense of Lakoff and Kövecses (1987). We distinguish among three aspects of idiomatic meaning. First, the general meaning of idioms appears to be determined by the particular 'source domains' that apply to a particular target domain. Second, more specific aspects of idiomatic meaning are provided by the 'ontological mapping' that applies to a given idiomatic expression. Third, connotative aspects of idiomatic meaning can be accounted for by 'epistemic correspondences'. Finally, we also present an informal experimental study, the results of which show that the cognitive semantic view can facilitate the learning of idioms for non-native speakers.", "title": "" }, { "docid": "neg:1840293_6", "text": "Today, artificial neural networks (ANNs) are widely used in a variety of applications, including speech recognition, face detection, disease diagnosis, etc. As an emerging branch of ANNs, Long Short-Term Memory (LSTM) is a recurrent neural network (RNN) that contains complex computational logic. To achieve high accuracy, researchers always build large-scale LSTM networks which are time-consuming and power-consuming. In this paper, we present a hardware accelerator for the LSTM neural network layer based on the FPGA Zedboard and use pipeline methods to parallelize the forward computing process. We also implement a sparse LSTM hidden layer, which consumes fewer storage resources than the dense network. Our accelerator is power-efficient and has a higher speed than the ARM Cortex-A9 processor.", "title": "" }, { "docid": "neg:1840293_7", "text": "-risks, and balance formalization and portfolio management, improvement.", "title": "" }, { "docid": "neg:1840293_8", "text": "
4", "title": "" }, { "docid": "neg:1840293_9", "text": "Inflammation clearly occurs in pathologically vulnerable regions of the Alzheimer's disease (AD) brain, and it does so with the full complexity of local peripheral inflammatory responses. In the periphery, degenerating tissue and the deposition of highly insoluble abnormal materials are classical stimulants of inflammation. Likewise, in the AD brain damaged neurons and neurites and highly insoluble amyloid beta peptide deposits and neurofibrillary tangles provide obvious stimuli for inflammation. Because these stimuli are discrete, microlocalized, and present from early preclinical to terminal stages of AD, local upregulation of complement, cytokines, acute phase reactants, and other inflammatory mediators is also discrete, microlocalized, and chronic. Cumulated over many years, direct and bystander damage from AD inflammatory mechanisms is likely to significantly exacerbate the very pathogenic processes that gave rise to it. Thus, animal models and clinical studies, although still in their infancy, strongly suggest that AD inflammation significantly contributes to AD pathogenesis. By better understanding AD inflammatory and immunoregulatory processes, it should be possible to develop anti-inflammatory approaches that may not cure AD but will likely help slow the progression or delay the onset of this devastating disorder.", "title": "" }, { "docid": "neg:1840293_10", "text": "This article familiarizes counseling psychologists with qualitative research methods in psychology developed in the tradition of European phenomenology. A brief history includes some of Edmund Husserl’s basic methods and concepts, the adoption of existential-phenomenology among psychologists, and the development and formalization of qualitative research procedures in North America. The choice points and alternatives in phenomenological research in psychology are delineated. The approach is illustrated by a study of a recovery program for persons repeatedly hospitalized for chronic mental illness. Phenomenological research is compared with other qualitative methods, and some of its benefits for counseling psychology are identified.", "title": "" }, { "docid": "neg:1840293_11", "text": "Hand pose estimation from single depth images is an essential topic in computer vision and human computer interaction. Despite recent advancements in this area promoted by convolutional neural networks, accurate hand pose estimation is still a challenging problem. In this paper we propose a novel approach named as Pose guided structured Region Ensemble Network (Pose-REN) to boost the performance of hand pose estimation. Under the guidance of an initially estimated pose, the proposed method extracts regions from the feature maps of convolutional neural network and generates more optimal and representative features for hand pose estimation. The extracted feature regions are then integrated hierarchically according to the topology of hand joints by tree-structured fully connections to regress the refined hand pose. The final hand pose is obtained by an iterative cascaded method. Comprehensive experiments on public hand pose datasets demonstrate that our proposed method outperforms state-of-the-art algorithms.", "title": "" }, { "docid": "neg:1840293_12", "text": "Scholarly citations from one publication to another, expressed as reference lists within academic articles, are core elements of scholarly communication. 
Unfortunately, they usually can be accessed en masse only by paying significant subscription fees to commercial organizations, while those few services that do make them available for free impose strict limitations on their reuse. In this paper we provide an overview of the OpenCitations Project (http://opencitations.net) undertaken to remedy this situation, and of its main product, the OpenCitations Corpus, which is an open repository of accurate bibliographic citation data harvested from the scholarly literature, made available in RDF under a Creative Commons public domain dedication. RASH version: https://w3id.org/oc/paper/occ-lisc2016.html", "title": "" }, { "docid": "neg:1840293_13", "text": "The task of discriminating one object from another is almost trivial for a human being. However, this task is computationally taxing for most modern machine learning methods, whereas we perform this task with ease given very few examples for learning. It has been proposed that the quick grasp of a concept may come from the shared knowledge between the new example and examples previously learned. We believe that the key to one-shot learning is the sharing of common parts, as each part holds immense amounts of information on how a visual concept is constructed. We propose an unsupervised method for learning a compact dictionary of image patches representing meaningful components of an object. Using those patches as features, we build a compositional model that outperforms a number of popular algorithms on a one-shot learning task. We demonstrate the effectiveness of this approach on hand-written digits and show that this model generalizes to multiple datasets.", "title": "" }, { "docid": "neg:1840293_14", "text": "Self-paced learning and hard example mining re-weight training instances to improve learning accuracy. This paper presents two improved alternatives based on lightweight estimates of sample uncertainty in stochastic gradient descent (SGD): the variance in predicted probability of the correct class across iterations of minibatch SGD, and the proximity of the correct class probability to the decision threshold. Extensive experimental results on six datasets show that our methods reliably improve accuracy in various network architectures, including additional gains on top of other popular training techniques, such as residual learning, momentum, ADAM, batch normalization, dropout, and distillation.", "title": "" }, { "docid": "neg:1840293_15", "text": "The human hand moves in complex and high-dimensional ways, making estimation of 3D hand pose configurations from images alone a challenging task. In this work we propose a method to learn a statistical hand model represented by a cross-modal trained latent space via a generative deep neural network. We derive an objective function from the variational lower bound of the VAE framework and jointly optimize the resulting cross-modal KL-divergence and the posterior reconstruction objective, naturally admitting a training regime that leads to a coherent latent space across multiple modalities such as RGB images, 2D keypoint detections or 3D hand configurations. Additionally, it grants a straightforward way of using semi-supervision. This latent space can be directly used to estimate 3D hand poses from RGB images, outperforming the state-of-the-art in different settings. Furthermore, we show that our proposed method can be used without changes on depth images and performs comparably to specialized methods.
Finally, the model is fully generative and can synthesize consistent pairs of hand configurations across modalities. We evaluate our method on both RGB and depth datasets and analyze the latent space qualitatively.", "title": "" }, { "docid": "neg:1840293_16", "text": "Intrusion detection (ID) is an important component of infrastructure protection mechanisms. Intrusion detection systems (IDSs) need to be accurate, adaptive, and extensible. Given these requirements and the complexities of today's network environments, we need a more systematic and automated IDS development process rather than the pure knowledge encoding and engineering approaches. This article describes a novel framework, MADAM ID, for Mining Audit Data for Automated Models for Intrusion Detection. This framework uses data mining algorithms to compute activity patterns from system audit data and extracts predictive features from the patterns. It then applies machine learning algorithms to the audit records that are processed according to the feature definitions to generate intrusion detection rules. Results from the 1998 DARPA Intrusion Detection Evaluation showed that our ID model was one of the best performing of all the participating systems. We also briefly discuss our experience in converting the detection models produced by off-line data mining programs to real-time modules of existing IDSs.", "title": "" }, { "docid": "neg:1840293_17", "text": "This paper outlines an innovative software development that utilizes Quality of Service (QoS) and parallel technologies in Cisco Catalyst Switches to increase the analytical performance of a Network Intrusion Detection and Protection System (NIDPS) when deployed in high-speed networks. We have designed a real network to present experiments that use a Snort NIDPS. Our experiments demonstrate the weaknesses of NIDPSes, such as inability to process multiple packets and propensity to drop packets in heavy traffic and high-speed networks without analysing them. We tested Snort’s analysis performance, gauging the number of packets sent, analysed, dropped, filtered, injected, and outstanding. We suggest using QoS configuration technologies in a Cisco Catalyst 3560 Series Switch and parallel Snorts to improve NIDPS performance and to reduce the number of dropped packets. Our results show that our novel configuration improves performance.", "title": "" } ]
1840294
Predicting Motivations of Actions by Leveraging Text
[ { "docid": "pos:1840294_0", "text": "Discriminative training approaches like structural SVMs have shown much promise for building highly complex and accurate models in areas like natural language processing, protein structure prediction, and information retrieval. However, current training algorithms are computationally expensive or intractable on large datasets. To overcome this bottleneck, this paper explores how cutting-plane methods can provide fast training not only for classification SVMs, but also for structural SVMs. We show that for an equivalent “1-slack” reformulation of the linear SVM training problem, our cutting-plane method has time complexity linear in the number of training examples. In particular, the number of iterations does not depend on the number of training examples, and it is linear in the desired precision and the regularization parameter. Furthermore, we present an extensive empirical evaluation of the method applied to binary classification, multi-class classification, HMM sequence tagging, and CFG parsing. The experiments show that the cutting-plane algorithm is broadly applicable and fast in practice. On large datasets, it is typically several orders of magnitude faster than conventional training methods derived from decomposition methods like SVM-light, or conventional cutting-plane methods. Implementations of our methods are available at www.joachims.org .", "title": "" } ]
[ { "docid": "neg:1840294_0", "text": "The treatment of external genitalia trauma is diverse according to the nature of trauma and injured anatomic site. The classification of trauma is important to establish a strategy of treatment; however, to date there has been less effort to make a classification for trauma of external genitalia. The classification of external trauma in male could be established by the nature of injury mechanism or anatomic site: accidental versus self-mutilation injury and penis versus penis plus scrotum or perineum. Accidental injury covers large portion of external genitalia trauma because of high prevalence and severity of this disease. The aim of this study is to summarize the mechanism and treatment of the traumatic injury of penis. This study is the first review describing the issue.", "title": "" }, { "docid": "neg:1840294_1", "text": "Description: The field of data mining lies at the confluence of predictive analytics, statistical analysis, and business intelligence. Due to the ever–increasing complexity and size of data sets and the wide range of applications in computer science, business, and health care, the process of discovering knowledge in data is more relevant than ever before. This book provides the tools needed to thrive in today s big data world. The author demonstrates how to leverage a company s existing databases to increase profits and market share, and carefully explains the most current data science methods and techniques. The reader will learn data mining by doing data mining . By adding chapters on data modelling preparation, imputation of missing data, and multivariate statistical analysis, Discovering Knowledge in Data, Second Edition remains the eminent reference on data mining .", "title": "" }, { "docid": "neg:1840294_2", "text": "Fisher Kernels and Deep Learning were two developments with significant impact on large-scale object categorization in the last years. Both approaches were shown to achieve state-of-the-art results on large-scale object categorization datasets, such as ImageNet. Conceptually, however, they are perceived as very different and it is not uncommon for heated debates to spring up when advocates of both paradigms meet at conferences or workshops. In this work, we emphasize the similarities between both architectures rather than their differences and we argue that such a unified view allows us to transfer ideas from one domain to the other. As a concrete example we introduce a method for learning a support vector machine classifier with Fisher kernel at the same time as a task-specific data representation. We reinterpret the setting as a multi-layer feed forward network. Its final layer is the classifier, parameterized by a weight vector, and the two previous layers compute Fisher vectors, parameterized by the coefficients of a Gaussian mixture model. We introduce a gradient descent based learning algorithm that, in contrast to other feature learning techniques, is not just derived from intuition or biological analogy, but has a theoretical justification in the framework of statistical learning theory. 
Our experiments show that the new training procedure leads to significant improvements in classification accuracy while preserving the modularity and geometric interpretability of a support vector machine setup.", "title": "" }, { "docid": "neg:1840294_3", "text": "A decade earlier, work on modeling and analyzing social network, was primarily focused on manually collected datasets where the friendship links were sparse but relatively noise free (i.e. all links represented strong physical relation). With the popularity of online social networks, the notion of “friendship” changed dramatically. The data collection, now although automated, contains dense friendship links but the links contain noisier information (i.e. some weaker relationships). The aim of this study is to identify these weaker links and suggest how these links (identification) play a vital role in improving social media design elements such as privacy control, detection of auto-bots, friend introductions, information prioritization and so on. The binary metric used so far for modeling links in social network (i.e. friends or not) is of little importance as it groups all our relatives, close friends and acquaintances in the same category. Therefore a popular notion of tie-strength has been incorporated for modeling links. In this paper, a predictive model is presented that helps evaluate tie-strength for each link in network based on transactional features (e.g. communication, file transfer, photos). The model predicts tie strength with 76.4% efficiency. This work also suggests that important link properties manifest similarly across different social media sites.", "title": "" }, { "docid": "neg:1840294_4", "text": "The symmetric travelling salesman problem is a real world combinatorial optimization problem and a well researched domain. When solving combinatorial optimization problems such as the travelling salesman problem a low-level construction heuristic is usually used to create an initial solution, rather than randomly creating a solution, which is further optimized using techniques such as tabu search, simulated annealing and genetic algorithms, amongst others. These heuristics are usually manually derived by humans and this is a time consuming process requiring many man hours. The research presented in this paper forms part of a larger initiative aimed at automating the process of deriving construction heuristics for combinatorial optimization problems.\n The study investigates genetic programming to induce low-level construction heuristics for the symmetric travelling salesman problem. While this has been examined for other combinatorial optimization problems, to the authors' knowledge this is the first attempt at evolving low-level construction heuristics for the travelling salesman problem. In this study a generational genetic programming algorithm randomly creates an initial population of low-level construction heuristics which is iteratively refined over a set number of generations by the processes of fitness evaluation, selection of parents and application of genetic operators.\n The approach is tested on 23 problem instances, of varying problem characteristics, from the TSPLIB and VLSI benchmark sets. 
The evolved heuristics were found to perform better than the human derived heuristic, namely, the nearest neighbourhood heuristic, generally used to create initial solutions for the travelling salesman problem.", "title": "" }, { "docid": "neg:1840294_5", "text": "Ankylosing spondylitis (AS) is a chronic systemic inflammatory disease that affects mainly the axial skeleton and causes significant pain and disability. Aquatic (water-based) exercise may have a beneficial effect in various musculoskeletal conditions. The aim of this study was to compare the effectiveness of aquatic exercise interventions with land-based exercises (home-based exercise) in the treatment of AS. Patients with AS were randomly assigned to receive either home-based exercise or aquatic exercise treatment protocol. Home-based exercise program was demonstrated by a physiotherapist on one occasion and then, exercise manual booklet was given to all patients in this group. Aquatic exercise program consisted of 20 sessions, 5× per week for 4 weeks in a swimming pool at 32–33 °C. All the patients in both groups were assessed for pain, spinal mobility, disease activity, disability, and quality of life. Evaluations were performed before treatment (week 0) and after treatment (week 4 and week 12). The baseline and mean values of the percentage changes calculated for both groups were compared using independent sample t test. Paired t test was used for comparison of pre- and posttreatment values within groups. A total of 69 patients with AS were included in this study. We observed significant improvements for all parameters [pain score (VAS) visual analog scale, lumbar flexion/extension, modified Schober test, chest expansion, bath AS functional index, bath AS metrology index, bath AS disease activity index, and short form-36 (SF-36)] in both groups after treatment at week 4 and week 12 (p < 0.05). Comparison of the percentage changes of parameters both at week 4 and week 12 relative to pretreatment values showed that improvement in VAS (p < 0.001) and bodily pain (p < 0.001), general health (p < 0.001), vitality (p < 0.001), social functioning (p < 0.001), role limitations due to emotional problems (p < 0.001), and general mental health (p < 0.001) subparts of SF-36 were better in aquatic exercise group. It is concluded that a water-based exercises produced better improvement in pain score and quality of life of the patients with AS compared with home-based exercise.", "title": "" }, { "docid": "neg:1840294_6", "text": "My research has centered around understanding the colorful appearance of physical and digital paintings and images. My work focuses on decomposing images or videos into more editable data structures called layers, to enable efficient image or video re-editing. Given a time-lapse painting video, we can recover translucent layer strokes from every frame pairs by maximizing translucency of layers for its maximum re-usability, under either digital color compositing model or a physically inspired nonlinear color layering model, after which, we apply a spatial-temporal clustering on strokes to obtain semantic layers for further editing, such as global recoloring and local recoloring, spatial-temporal gradient recoloring and so on. With a single image input, we use the convex shape geometry intuition of color points distribution in RGB space, to help extract a small size palette from a image and then solve an optimization to extract translucent RGBA layers, under digital alpha compositing model. 
The translucent layers are suitable for global and local image recoloring and new object insertion as layers efficiently. Alternatively, we can apply an alternating least square optimization to extract multi-spectral physical pigment parameters from a single digitized physical painting image, under a physically inspired nonlinear color mixing model, with help of some multi-spectral pigment parameters priors. With these multi-spectral pigment parameters and their mixing layers, we demonstrate tonal adjustments, selection masking, recoloring, physical pigment understanding, palette summarization and edge enhancement. Our recent ongoing work introduces an extremely scalable and efficient yet simple palette-based image decomposition algorithm to extract additive mixing layers from single image. Our approach is based on the geometry of images in RGBXY-space. This new geometric approach is orders of magnitude more efficient than previous work and requires no numerical optimization. We demonstrate a real-time layer updating GUI. We also present a palette-based framework for color composition for visual applications, such as image and video harmonization, color transfer and so on.", "title": "" }, { "docid": "neg:1840294_7", "text": "American students rank well below international peers in the disciplines of science, technology, engineering, and mathematics (STEM). Early exposure to STEM-related concepts is critical to later academic achievement. Given the rise of tablet-computer use in early childhood education settings, interactive technology might be one particularly fruitful way of supplementing early STEM education. Using a between-subjects experimental design, we sought to determine whether preschoolers could learn a fundamental math concept (i.e., measurement with non-standard units) from educational technology, and whether interactivity is a crucial component of learning from that technology. Participants who either played an interactive tablet-based game or viewed a non-interactive video demonstrated greater transfer of knowledge than those assigned to a control condition. Interestingly, interactivity contributed to better performance on near transfer tasks, while participants in the non-interactive condition performed better on far transfer tasks. Our findings suggest that, while preschool-aged children can learn early STEM skills from educational technology, interactivity may only further support learning in certain", "title": "" }, { "docid": "neg:1840294_8", "text": "In this paper, we propose a new deep network that learns multi-level deep representations for image emotion classification (MldrNet). Image emotion can be recognized through image semantics, image aesthetics and low-level visual features from both global and local views. Existing image emotion classification works using hand-crafted features or deep features mainly focus on either low-level visual features or semantic-level image representations without taking all factors into consideration. Our proposed MldrNet unifies deep representations of three levels, i.e. image semantics, image aesthetics and low-level visual features through multiple instance learning (MIL) in order to effectively cope with noisy labeled data, such as images collected from the Internet. Extensive experiments on both Internet images and abstract paintings demonstrate the proposed method outperforms the state-of-the-art methods using deep features or hand-crafted features. 
The proposed approach also outperforms the state-of-the-art methods with at least 6% performance improvement in terms of overall classification accuracy.", "title": "" }, { "docid": "neg:1840294_9", "text": "Text categorization is the task of automatically assigning unlabeled text documents to some predefined category labels by means of an induction algorithm. Since the data in text categorization are high-dimensional, feature selection is broadly used in text categorization systems for reducing the dimensionality. In the literature, there are some widely known metrics such as information gain and document frequency thresholding. Recently, a generative graphical model called latent dirichlet allocation (LDA) that can be used to model and discover the underlying topic structures of textual data, was proposed. In this paper, we use the hidden topic analysis of LDA for feature selection and compare it with the classical feature selection metrics in text categorization. For the experiments, we use SVM as the classifier and tf∗idf weighting for weighting the terms. We observed that almost in all metrics, information gain performs best at all keyword numbers while the LDA-based metrics perform similar to chi-square and document frequency thresholding.", "title": "" }, { "docid": "neg:1840294_10", "text": "Melatonin, hormone of the pineal gland, is concerned with biological timing. It is secreted at night in all species and in ourselves is thereby associated with sleep, lowered core body temperature, and other night time events. The period of melatonin secretion has been described as 'biological night'. Its main function in mammals is to 'transduce' information about the length of the night, for the organisation of daylength dependent changes, such as reproductive competence. Exogenous melatonin has acute sleepiness-inducing and temperature-lowering effects during 'biological daytime', and when suitably timed (it is most effective around dusk and dawn) it will shift the phase of the human circadian clock (sleep, endogenous melatonin, core body temperature, cortisol) to earlier (advance phase shift) or later (delay phase shift) times. The shifts induced are sufficient to synchronise to 24 h most blind subjects suffering from non-24 h sleep-wake disorder, with consequent benefits for sleep. Successful use of melatonin's chronobiotic properties has been reported in other sleep disorders associated with abnormal timing of the circadian system: jetlag, shiftwork, delayed sleep phase syndrome, some sleep problems of the elderly. No long-term safety data exist, and the optimum dose and formulation for any application remains to be clarified.", "title": "" }, { "docid": "neg:1840294_11", "text": "Fetus-in-fetu (FIF) is a rare congenital condition in which a fetiform mass is detected in the host abdomen and also in other sites such as the intracranium, thorax, head, and neck. This condition has been rarely reported in the literature. Herein, we report the case of a fetus presenting with abdominal cystic mass and ascites and prenatally diagnosed as meconium pseudocyst. Explorative laparotomy revealed an irregular fetiform mass in the retroperitoneum within a fluid-filled cyst. The mass contained intestinal tract, liver, pancreas, and finger. Fetal abdominal cystic mass has been identified in a broad spectrum of diseases. However, as in our case, FIF is often overlooked during differential diagnosis. 
FIF should also be differentiated from other conditions associated with fetal abdominal masses.", "title": "" }, { "docid": "neg:1840294_12", "text": "Infrared (IR) guided missiles remain a threat to both military and civilian aircraft, and as such, the development of effective countermeasures against this threat remains vital. A simulation has been developed to assess the effectiveness of a jammer signal against a conical-scan seeker by testing critical jammer parameters. The critical parameters of a jammer signal are the jam-to-signal (J/S) ratio, the jammer frequency and the jammer duty cycle. It was found that the most effective jammer signal is one with a modulated envelope.", "title": "" }, { "docid": "neg:1840294_13", "text": "Conventional methods of 3D object generative modeling learn volumetric predictions using deep networks with 3D convolutional operations, which are direct analogies to classical 2D ones. However, these methods are computationally wasteful in attempt to predict 3D shapes, where information is rich only on the surfaces. In this paper, we propose a novel 3D generative modeling framework to efficiently generate object shapes in the form of dense point clouds. We use 2D convolutional operations to predict the 3D structure from multiple viewpoints and jointly apply geometric reasoning with 2D projection optimization. We introduce the pseudo-renderer, a differentiable module to approximate the true rendering operation, to synthesize novel depth maps for optimization. Experimental results for single-image 3D object reconstruction tasks show that we outperforms state-of-the-art methods in terms of shape similarity and prediction density.", "title": "" }, { "docid": "neg:1840294_14", "text": "Transcript-based annotation and pedigree analysis are two basic steps in the computational analysis of whole-exome sequencing experiments in genetic diagnostics and disease-gene discovery projects. Here, we present Jannovar, a stand-alone Java application as well as a Java library designed to be used in larger software frameworks for exome and genome analysis. Jannovar uses an interval tree to identify all transcripts affected by a given variant, and provides Human Genome Variation Society-compliant annotations both for variants affecting coding sequences and splice junctions as well as untranslated regions and noncoding RNA transcripts. Jannovar can also perform family-based pedigree analysis with Variant Call Format (VCF) files with data from members of a family segregating a Mendelian disorder. Using a desktop computer, Jannovar requires a few seconds to annotate a typical VCF file with exome data. Jannovar is freely available under the BSD2 license. Source code as well as the Java application and library file can be downloaded from http://compbio.charite.de (with tutorial) and https://github.com/charite/jannovar.", "title": "" }, { "docid": "neg:1840294_15", "text": "Declarative models play an important role in most software design activities, by allowing designs to be constructed that selectively abstract over complex implementation details. In the user interface setting, Model-Based User Interface Development Environments (MB-UIDEs) provide a context within which declarative models can be constructed and related, as part of the interface design process. However, such declarative models are not usually directly executable, and may be difficult to relate to existing software components. 
It is therefore important that MB-UIDEs both fit in well with existing software architectures and standards, and provide an effective route from declarative interface specification to running user interfaces. This paper describes how user interface software is generated from declarative descriptions in the Teallach MB-UIDE. Distinctive features of Teallach include its open architecture, which connects directly to existing applications and widget sets, and the generation of executable interface applications in Java. This paper focuses on how Java programs, organized using the model-view-controller pattern (MVC), are generated from the task, domain and presentation models of Teallach.", "title": "" }, { "docid": "neg:1840294_16", "text": "As most regular readers of this TRANSACTIONS know, the development of digital signal processing techniques for applications involving image or picture data has been an increasingly active research area for the past decade. Collectively, t h s work is normally characterized under the generic heading “digital image processing.” Interestingly, the two books under review here share this heading as their title. Both are quite ambitious undertakings in that they attempt to integrate contributions from many disciplines (classical systems theory, digital signal processing, computer science, statistical communications, etc.) into unified, comprehensive presentations. In this regard it can be said that both are to some extent successful, although in quite different ways. Why the unusual step of a joint review? A brief overview of the two books reveals that they share not only a common title, but also similar objectives/purposes, intended audiences, structural organizations, and lists of topics considered. A more careful study reveals that substantial differences do exist, however, in the style and depth of subject treatment (as reflected in the difference in their lengths). Given their almost simultaneous publication, it seems appropriate to discuss these similarities/differences in a common setting. After much forethought (and two drafts), the reviewer decided to structure this review by describing the general topical material in their (joint) major sections, with supplementary comments directed toward the individual texts. It is hoped that this will provide the reader with a brief survey of the books’ contents and some flavor of their contrasting approaches. To avoid the identity problems of the joint title, each book will be subsequently referred to using the respective authors’ names: Gonzalez/Wintz and Pratt. Subjects will be correlated with chapter number(s) and approximate l ngth of coverage.", "title": "" }, { "docid": "neg:1840294_17", "text": "The diaphragm is the primary muscle involved in breathing and other non-primarily respiratory functions such as the maintenance of correct posture and lumbar and sacroiliac movement. It intervenes to facilitate cleaning of the upper airways through coughing, facilitates the evacuation of the intestines, and promotes the redistribution of the body's blood. The diaphragm also has the ability to affect the perception of pain and the emotional state of the patient, functions that are the subject of this article. The aim of this article is to gather for the first time, within a single text, information on the nonrespiratory functions of the diaphragm muscle and its analgesic and emotional response functions. 
It also aims to highlight and reflect on the fact that when the diaphragm is treated manually, a daily occurrence for manual operators, it is not just an area of musculature that is treated but the entire body, including the psyche. This reflection allows for a multidisciplinary approach to the diaphragm and the collaboration of various medical and nonmedical practitioners, with the ultimate goal of regaining or improving the patient's physical and mental well-being.", "title": "" }, { "docid": "neg:1840294_18", "text": "This paper describes a method for vision-based unmanned aerial vehicle (UAV) motion estimation from multiple planar homographies. The paper also describes the determination of the relative displacement between different UAVs employing techniques for blob feature extraction and matching. It then presents and shows experimental results of the application of the proposed technique to multi-UAV detection of forest fires", "title": "" }, { "docid": "neg:1840294_19", "text": "We propose a 600 GHz data transmission of high definition television using the combination of a photonic emission using an uni-travelling carrier photodiode and an electronic detection, featuring a very low power at the receiver. Only 10 nW of THz power at 600GHz were sufficient to ensure real-time error-free operation. This combination of photonics at emission and heterodyne detection lead to achieve THz wireless links with a safe level of electromagnetic exposure.", "title": "" } ]
1840295
WHUIRGroup at TREC 2016 Clinical Decision Support Task
[ { "docid": "pos:1840295_0", "text": "The goal of TREC 2015 Clinical Decision Support Track was to retrieve biomedical articles relevant for answering three kinds of generic clinical questions, namely diagnosis, test, and treatment. In order to achieve this purpose, we investigated three approaches to improve the retrieval of relevant articles: modifying queries, improving indexes, and ranking with ensembles. Our final submissions were a combination of several different configurations of these approaches. Our system mainly focused on the summary fields of medical reports. We built two different kinds of indexes – an inverted index on the free text and a second kind of indexes on the Unified Medical Language System (UMLS) concepts within the entire articles that were recognized by MetaMap. We studied the variations of including UMLS concepts at paragraph and sentence level and experimented with different thresholds of MetaMap matching scores to filter UMLS concepts. The query modification process in our system involved automatic query construction, pseudo relevance feedback, and manual inputs from domain experts. Furthermore, we trained a re-ranking sub-system based on the results of TREC 2014 Clinical Decision Support track using Indri’s Learning to Rank package, RankLib. Our experiments showed that the ensemble approach could improve the overall results by boosting the ranking of articles that are near the top of several single ranked lists.", "title": "" } ]
[ { "docid": "neg:1840295_0", "text": "We propose to automatically create capsule wardrobes. Given an inventory of candidate garments and accessories, the algorithm must assemble a minimal set of items that provides maximal mix-and-match outfits. We pose the task as a subset selection problem. To permit efficient subset selection over the space of all outfit combinations, we develop submodular objective functions capturing the key ingredients of visual compatibility, versatility, and user-specific preference. Since adding garments to a capsule only expands its possible outfits, we devise an iterative approach to allow near-optimal submodular function maximization. Finally, we present an unsupervised approach to learn visual compatibility from \"in the wild\" full body outfit photos; the compatibility metric translates well to cleaner catalog photos and improves over existing methods. Our results on thousands of pieces from popular fashion websites show that automatic capsule creation has potential to mimic skilled fashionistas in assembling flexible wardrobes, while being significantly more scalable.", "title": "" }, { "docid": "neg:1840295_1", "text": "Thomson coil actuators (also known as repulsion coil actuators) are well suited for vacuum circuit breakers when fast operation is desired such as for hybrid AC and DC circuit breaker applications. This paper presents investigations on how the actuator drive circuit configurations as well as their discharging pulse patterns affect the magnetic force and therefore the acceleration, as well as the mechanical robustness of these actuators. Comprehensive multi-physics finite-element simulations of the Thomson coil actuated fast mechanical switch are carried out to study the operation transients and how to maximize the actuation speed. Different drive circuits are compared: three single switch circuits are evaluated; the pulse pattern of a typical pulse forming network circuit is studied, concerning both actuation speed and maximum stress; a two stage drive circuit is also investigated. A 630 A, 15 kV / 1 ms prototype employing a vacuum interrupter with 6 mm maximum open gap was developed and tested. The total moving mass accelerated by the actuator is about 1.2 kg. The measured results match well with simulated results in the FEA study.", "title": "" }, { "docid": "neg:1840295_2", "text": "This paper presents the first complete 2.5 V, 77 GHz chipset for Doppler radar and imaging applications fabricated in 0.13 mum SiGe HBT technology. The chipset includes a voltage-controlled oscillator with -101.6 dBc/Hz phase noise at 1 MHz offset, an 25 dB gain low-noise amplifier, a novel low-voltage double-balanced Gilbert-cell mixer with two mm-wave baluns and IF amplifier achieving 12.8 dB noise figure and an OP1dB of +5 dBm, a 99 GHz static frequency divider consuming a record low 75 mW, and a power amplifier with 19 dB gain, +14.4 dBm saturated power, and 15.7% PAE. Monolithic spiral inductors and transformers result in the lowest reported 77 GHz receiver core area of only 0.45 mm times 0.30 mm. Simplified circuit topologies allow 77 GHz operation up to 125degC from 2.5 V/1.8 V supplies. Technology splits of the SiGe HBTs are employed to determine the optimum HBT profile for mm-wave performance.", "title": "" }, { "docid": "neg:1840295_3", "text": "In this paper, we demonstrate our Img2UML system tool. 
This system tool eliminates the gap between pixel-based diagram and engineering model, in that it supports the extraction of the UML class model from images and produces an XMI file of the UML model. In addition to this, Img2UML offers a repository of UML class models of images that have been collected from the Internet. This project has both industrial and academic aims: for industry, this tool proposes a method that enables the updating of software design documentation (that typically contains UML images). For academia, this system unlocks a corpus of UML models that are publicly available, but not easily analyzable for scientific studies.", "title": "" }, { "docid": "neg:1840295_4", "text": "Untethered robots miniaturized to the length scale of millimeter and below attract growing attention for the prospect of transforming many aspects of health care and bioengineering. As the robot size goes down to the order of a single cell, previously inaccessible body sites would become available for high-resolution in situ and in vivo manipulations. This unprecedented direct access would enable an extensive range of minimally invasive medical operations. Here, we provide a comprehensive review of the current advances in biomedical untethered mobile milli/microrobots. We put a special emphasis on the potential impacts of biomedical microrobots in the near future. Finally, we discuss the existing challenges and emerging concepts associated with designing such a miniaturized robot for operation inside a biological environment for biomedical applications.", "title": "" }, { "docid": "neg:1840295_5", "text": "Social media have been adopted by many businesses. More and more companies are using social media tools such as Facebook and Twitter to provide various services and interact with customers. As a result, a large amount of user-generated content is freely available on social media sites. To increase competitive advantage and effectively assess the competitive environment of businesses, companies need to monitor and analyze not only the customer-generated content on their own social media sites, but also the textual information on their competitors' social media sites. In an effort to help companies understand how to perform a social media competitive analysis and transform social media data into knowledge for decision makers and e-marketers, this paper describes an in-depth case study which applies text mining to analyze unstructured text content on Facebook and Twitter sites of the three largest pizza chains: Pizza Hut,", "title": "" }, { "docid": "neg:1840295_6", "text": "Geolocation prediction is vital to geospatial applications like localised search and local event detection. Predominately, social media geolocation models are based on full text data, including common words with no geospatial dimension (e.g. today) and noisy strings (tmrw), potentially hampering prediction and leading to slower/more memory-intensive models. In this paper, we focus on finding location indicative words (LIWs) via feature selection, and establishing whether the reduced feature set boosts geolocation accuracy. Our results show that an information gain ratio-based approach surpasses other methods at LIW selection, outperforming state-of-the-art geolocation prediction methods by 10.6% in accuracy and reducing the mean and median of prediction error distance by 45km and 209km, respectively, on a public dataset. 
We further formulate notions of prediction confidence, and demonstrate that performance is even higher in cases where our model is more confident, striking a trade-off between accuracy and coverage. Finally, the identified LIWs reveal regional language differences, which could be potentially useful for lexicographers.", "title": "" }, { "docid": "neg:1840295_7", "text": "TYAs paper looks at some of the algorithms that can be used for effective detection and tracking of vehicles, in particular for statistical analysis. The main methods for tracking discussed and implemented are blob analysis, optical flow and foreground detection. A further analysis is also done testing two of the techniques using a number of video sequences that include different levels of difficulties.", "title": "" }, { "docid": "neg:1840295_8", "text": "This paper studies the use of tree edit distance for pattern matching of abstract syntax trees of images generated with tree picture grammars. This was done with a view to measuring its effectiveness in determining image similarity, when compared to current state of the art similarity measures used in Content Based Image Retrieval (CBIR). Eight computer based similarity measures were selected for their diverse methodology and effectiveness. The eight visual descriptors and tree edit distance were tested against some of the images from our corpus of thousands of syntactically generated images. The first and second sets of experiments showed that tree edit distance and Spacial Colour Distribution (SpCD) are the most suited for determining similarity of syntactically generated images. A third set of experiments was performed with tree edit distance and SpCD only. Results obtained showed that while both of them performed well in determining similarity of the generated images, the tree edit distance is better able to detect more subtle human observable image differences than SpCD. Also, tree edit distance more closely models the generative sequence of these tree picture grammars.", "title": "" }, { "docid": "neg:1840295_9", "text": "A CMOS low-dropout regulator (LDO) with 3.3 V output voltage and 100 mA output current for system-on-chip applications is presented. The proposed LDO is independent of off-chip capacitor, thus the board space and external pins are reduced. By utilizing dynamic slew-rate enhancement (SRE) circuit and nested Miller compensation (NMC) on LDO structure, the proposed LDO provides high stability during line and load regulation without off-chip load capacitor. The overshot voltage has been limited within 550 mV and settling time is less than 50 mus when load current reducing from 100 mA to 1 mA. By using 30 nA reference current, the quiescent current is 3.3 muA. The experiment results agree with the simulation results. The proposed design is implemented by CSMC 0.5 mum mixed-signal process.", "title": "" }, { "docid": "neg:1840295_10", "text": "As a kernel function in network routers, packet classification requires the incoming packet headers to be checked against a set of predefined rules. There are two trends for packet classification: (1) to examine a large number of packet header fields, and (2) to use software-based solutions on multi-core general purpose processors and virtual machines. Although packet classification has been widely studied, most existing solutions on multi-core systems target the classic 5-field packet classification; it is not easy to scale up their performance with respect to the number of packet header fields. 
In this work, we present a decomposition-based packet classification approach; it supports large rule sets consisting of a large number of packet header fields. In our approach, range-tree and hashing are used to search the fields of the input packet header in parallel. The partial results from all the fields are represented in rule ID sets; they are merged efficiently to produce the final match result. We implement our approach and evaluate its performance with respect to overall throughput and processing latency for rule set size varying from 1 to 32 K. Experimental results on state-of-the-art 16-core platforms show that, an overall throughput of 48 million packets per second and a processing latency of 2,000 ns per packet can be achieved for a 32 K rule set.", "title": "" }, { "docid": "neg:1840295_11", "text": "In this paper, we propose a joint training approach to voice activity detection (VAD) to address the issue of performance degradation due to unseen noise conditions. Two key techniques are integrated into this deep neural network (DNN) based VAD framework. First, a regression DNN is trained to map the noisy to clean speech features similar to DNN-based speech enhancement. Second, the VAD part to discriminate speech against noise backgrounds is also a DNN trained with a large amount of diversified noisy data synthesized by a wide range of additive noise types. By stacking the classification DNN on top of the enhancement DNN, this integrated DNN can be jointly trained to perform VAD. The feature mapping DNN serves as a noise normalization module aiming at explicitly generating the “clean” features which are easier to be correctly recognized by the following classification DNN. Our experiment results demonstrate the proposed noise-universal DNNbased VAD algorithm achieves a good generalization capacity to unseen noises, and the jointly trained DNNs consistently and significantly outperform the conventional classification-based DNN for all the noise types and signal-to-noise levels tested.", "title": "" }, { "docid": "neg:1840295_12", "text": "Distributed artificial intelligence (DAI) is a subfield of artificial intelligence that deals with interactions of intelligent agents. Precisely, DAI attempts to construct intelligent agents that make decisions that allow them to achieve their goals in a world populated by other intelligent agents with their own goals. This paper discusses major concepts used in DAI today. To do this, a taxonomy of DAI is presented, based on the social abilities of an individual agent, the organization of agents, and the dynamics of this organization through time. Social abilities are characterized by the reasoning about other agents and the assessment of a distributed situation. Organization depends on the degree of cooperation and on the paradigm of communication. Finally, the dynamics of organization is characterized by the global coherence of the group and the coordination between agents. A reasonably representative review of recent work done in DAI field is also supplied in order to provide a better appreciation of this vibrant AI field. The paper concludes with important issues in which further research in DAI is needed.", "title": "" }, { "docid": "neg:1840295_13", "text": "This paper will serve as an introduction to the body of work on robust subspace recovery. Robust subspace recovery involves finding an underlying low-dimensional subspace in a data set that is possibly corrupted with outliers. 
While this problem is easy to state, it has been difficult to develop optimal algorithms due to its underlying nonconvexity. This work emphasizes advantages and disadvantages of proposed approaches and unsolved problems in the area.", "title": "" }, { "docid": "neg:1840295_14", "text": "Grounded language learning bridges words like ‘red’ and ‘square’ with robot perception. The vast majority of existing work in this space limits robot perception to vision. In this paper, we build perceptual models that use haptic, auditory, and proprioceptive data acquired through robot exploratory behaviors to go beyond vision. Our system learns to ground natural language words describing objects using supervision from an interactive humanrobot “I Spy” game. In this game, the human and robot take turns describing one object among several, then trying to guess which object the other has described. All supervision labels were gathered from human participants physically present to play this game with a robot. We demonstrate that our multi-modal system for grounding natural language outperforms a traditional, vision-only grounding framework by comparing the two on the “I Spy” task. We also provide a qualitative analysis of the groundings learned in the game, visualizing what words are understood better with multi-modal sensory information as well as identifying learned word meanings that correlate with physical object properties (e.g. ‘small’ negatively correlates with object weight).", "title": "" }, { "docid": "neg:1840295_15", "text": "In this paper we present a filter algorithm for nonlinear programming and prove its global convergence to stationary points. Each iteration is composed of a feasibility phase, which reduces a measure of infeasibility, and an optimality phase, which reduces the objective function in a tangential approximation of the feasible set. These two phases are totally independent, and the only coupling between them is provided by the filter. The method is independent of the internal algorithms used in each iteration, as long as these algorithms satisfy reasonable assumptions on their efficiency. Under standard hypotheses, we show two results: for a filter with minimum size, the algorithm generates a stationary accumulation point; for a slightly larger filter, all accumulation points are stationary.", "title": "" }, { "docid": "neg:1840295_16", "text": "In this paper, we intend to propose a new heuristic optimization method, called animal migration optimization algorithm. This algorithm is inspired by the animal migration behavior, which is a ubiquitous phenomenon that can be found in all major animal groups, such as birds, mammals, fish, reptiles, amphibians, insects, and crustaceans. In our algorithm, there are mainly two processes. In the first process, the algorithm simulates how the groups of animals move from the current position to the new position. During this process, each individual should obey three main rules. In the latter process, the algorithm simulates how some animals leave the group and some join the group during the migration. In order to verify the performance of our approach, 23 benchmark functions are employed. The proposed method has been compared with other well-known heuristic search methods. 
Experimental results indicate that the proposed algorithm performs better than or at least comparable with state-of-the-art approaches from literature when considering the quality of the solution obtained.", "title": "" }, { "docid": "neg:1840295_17", "text": "AIMS\nTo investigated the association between the ABO blood group and gestational diabetes mellitus (GDM).\n\n\nMATERIALS AND METHODS\nA retrospective case-control study was conducted using data from 5424 Japanese pregnancies. GDM screening was performed in the first trimester using a casual blood glucose test and in the second trimester using a 50-g glucose challenge test. If the screening was positive, a 75-g oral glucose tolerance test was performed for a GDM diagnosis, which was defined according to the International Association of Diabetes and Pregnancy Study Groups. Logistic regression was used to obtain the odds ratio (OR) and 95% confidence interval (CI) adjusted for traditional risk factors.\n\n\nRESULTS\nWomen with the A blood group (adjusted OR: 0.34, 95% CI: 0.19-0.63), B (adjusted OR: 0.35, 95% CI: 0.18-0.68), or O (adjusted OR: 0.39, 95% CI: 0.21-0.74) were at decreased risk of GDM compared with those with group AB. Women with the AB group were associated with increased risk of GDM as compared with those with A, B, or O (adjusted OR: 2.73, 95% CI: 1.64-4.57).\n\n\nCONCLUSION\nABO blood groups are associated with GDM, and group AB was a risk factor for GDM in Japanese population.", "title": "" }, { "docid": "neg:1840295_18", "text": "Study Objective: To determine the prevalence of vulvovaginitis, predisposing factors, microbial etiology and therapy in patients treated at the Hospital del Niño DIF, Pachuca, Hidalgo, Mexico. Design. This was an observational and descriptive study from 2006 to 2009. Setting: Hospital del Niño DIF, Pachuca, Hidalgo, Mexico. Participants. Patients from 0 to 16 years, with vulvovaginitis and/or vaginal discharge were included. Interventions: None. Main Outcome Measures: Demographic data, etiology, clinical features, risk factors and therapy were analyzed. Results: Four hundred twenty seven patients with diagnosis of vulvovaginitis were included. The average prevalence to 4 years in the study period was 0.19%. The age group most affected was schoolchildren (225 cases: 52.69%). The main signs and symptoms presented were leucorrhea (99.3%), vaginal hyperemia (32.6%), vulvar itching (32.1%) and erythema (28.8%). Identified risk factors were poor hygiene (15.7%), urinary tract infection (14.7%), intestinal parasites (5.6%) and obesity or overweight (3.3%). The main microorganisms found in vaginal cultures were enterobacteriaceae (Escherichia coli, Klebsiella and Enterococcus faecalis), Staphylococcus spp, and Gardnerella vaginalis. Several inconsistent were found in the drug prescription of the patients. Conclusion: Vulvovaginitis prevalence in Mexican girls is low and this was caused mainly by opportunist microorganisms. The initial treatment of vulvovaginitis must include hygienic measure and an antimicrobial according to the clinical features and microorganism found.", "title": "" }, { "docid": "neg:1840295_19", "text": "In this paper, a review of the authors' work on inkjet-printed flexible antennas, fabricated on paper substrates, is given. This is presented as a system-level solution for ultra-low-cost mass production of UHF radio-frequency identification (RFID) tags and wireless sensor nodes (WSN), in an approach that could be easily extended to other microwave and wireless applications. 
First, we discuss the benefits of using paper as a substrate for high-frequency applications, reporting its very good electrical/dielectric performance up to at least 1 GHz. The RF characteristics of the paper-based substrate are studied by using a microstrip-ring resonator, in order to characterize the dielectric properties (dielectric constant and loss tangent). We then give details about the inkjet-printing technology, including the characterization of the conductive ink, which consists of nano-silver particles. We highlight the importance of this technology as a fast and simple fabrication technique, especially on flexible organic (e.g., LCP) or paper-based substrates. A compact inkjet-printed UHF ldquopassive RFIDrdquo antenna, using the classic T-match approach and designed to match the IC's complex impedance, is presented as a demonstration prototype for this technology. In addition, we briefly touch upon the state-of-the-art area of fully-integrated wireless sensor modules on paper. We show the first-ever two-dimensional sensor integration with an RFID tag module on paper, as well as the possibility of a three-dimensional multilayer paper-based RF/microwave structure.", "title": "" } ]
1840296
Stability of cyberbullying victimization among adolescents: Prevalence and association with bully-victim status and psychosocial adjustment
[ { "docid": "pos:1840296_0", "text": "In this study, a questionnaire (Cyberbullying Questionnaire, CBQ) was developed to assess the prevalence of numerous modalities of cyberbullying (CB) in adolescents. The association of CB with the use of other forms of violence, exposure to violence, acceptance and rejection by peers was also examined. In the study, participants were 1431 adolescents, aged between 12 and17 years (726 girls and 682 boys). The adolescents responded to the CBQ, measures of reactive and proactive aggression, exposure to violence, justification of the use of violence, and perceived social support of peers. Sociometric measures were also used to assess the use of direct and relational aggression and the degree of acceptance and rejection by peers. The results revealed excellent psychometric properties for the CBQ. Of the adolescents, 44.1% responded affirmatively to at least one act of CB. Boys used CB to greater extent than girls. Lastly, CB was significantly associated with the use of proactive aggression, justification of violence, exposure to violence, and less perceived social support of friends. 2010 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "neg:1840296_0", "text": "A promising approach to learn to play board games is to use reinforcement learning algorithms that can learn a game position evaluation function. In this paper we examine and compare three different methods for generating training games: (1) Learning by self-play, (2) Learning by playing against an expert program, and (3) Learning from viewing experts play against themselves. Although the third possibility generates highquality games from the start compared to initial random games generated by self-play, the drawback is that the learning program is never allowed to test moves which it prefers. We compared these three methods using temporal difference methods to learn the game of backgammon. For particular games such as draughts and chess, learning from a large database containing games played by human experts has as a large advantage that during the generation of (useful) training games, no expensive lookahead planning is necessary for move selection. Experimental results in this paper show how useful this method is for learning to play chess and draughts.", "title": "" }, { "docid": "neg:1840296_1", "text": "This paper proposes to use probabilistic model checking to synthesize optimal robot policies in multi-tasking autonomous systems that are subject to human-robot interaction. Given the convincing empirical evidence that human behavior can be related to reinforcement models, we take as input a well-studied Q-table model of the human behavior for flexible scenarios. We first describe an automated procedure to distill a Markov decision process (MDP) for the human in an arbitrary but fixed scenario. The distinctive issue is that – in contrast to existing models – under-specification of the human behavior is included. Probabilistic model checking is used to predict the human’s behavior. Finally, the MDP model is extended with a robot model. Optimal robot policies are synthesized by analyzing the resulting two-player stochastic game. Experimental results with a prototypical implementation using PRISM show promising results.", "title": "" }, { "docid": "neg:1840296_2", "text": "Commodity depth cameras have created many interesting new applications in the research community recently. These applications often require the calibration information between the color and the depth cameras. Traditional checkerboard based calibration schemes fail to work well for the depth camera, since its corner features cannot be reliably detected in the depth image. In this paper, we present a maximum likelihood solution for the joint depth and color calibration based on two principles. First, in the depth image, points on the checker-board shall be co-planar, and the plane is known from color camera calibration. Second, additional point correspondences between the depth and color images may be manually specified or automatically established to help improve calibration accuracy. Uncertainty in depth values has been taken into account systematically. 
The proposed algorithm is reliable and accurate, as demonstrated by extensive experimental results on simulated and real-world examples.", "title": "" }, { "docid": "neg:1840296_3", "text": "*Correspondence: Enrico Di Minin, Finnish Centre of Excellence in Metapopulation Biology, Department of Biosciences, Biocenter 3, University of Helsinki, PO Box 65 (Viikinkaari 1), 00014 Helsinki, Finland; School of Life Sciences, Westville Campus, University of KwaZulu-Natal, PO Box 54001 (University Road), Durban 4000, South Africa enrico.di.minin@helsinki.fi; Tuuli Toivonen, Finnish Centre of Excellence in Metapopulation Biology, Department of Biosciences, Biocenter 3, University of Helsinki, PO Box 65 (Viikinkaari 1), 00014 Helsinki, Finland; Department of Geosciences and Geography, University of Helsinki, PO Box 64 (Gustaf Hällströminkatu 2a), 00014 Helsinki, Finland tuuli.toivonen@helsinki.fi These authors have contributed equally to this work.", "title": "" }, { "docid": "neg:1840296_4", "text": "What does it mean for a machine learning model to be ‘fair’, in terms which can be operationalised? Should fairness consist of ensuring everyone has an equal probability of obtaining some benefit, or should we aim instead to minimise the harms to the least advantaged? Can the relevant ideal be determined by reference to some alternative state of affairs in which a particular social pattern of discrimination does not exist? Various definitions proposed in recent literature make different assumptions about what terms like discrimination and fairness mean and how they can be defined in mathematical terms. Questions of discrimination, egalitarianism and justice are of significant interest to moral and political philosophers, who have expended significant efforts in formalising and defending these central concepts. It is therefore unsurprising that attempts to formalise ‘fairness’ in machine learning contain echoes of these old philosophical debates. This paper draws on existing work in moral and political philosophy in order to elucidate emerging debates about fair machine learning.", "title": "" }, { "docid": "neg:1840296_5", "text": "Optimal control theory is a mature mathematical discipline with numerous applications in both science and engineering. It is emerging as the computational framework of choice for studying the neural control of movement, in much the same way that probabilistic inference is emerging as the computational framework of choice for studying sensory information processing. Despite the growing popularity of optimal control models, however, the elaborate mathematical machinery behind them is rarely exposed and the big picture is hard to grasp without reading a few technical books on the subject. While this chapter cannot replace such books, it aims to provide a self-contained mathematical introduction to optimal control theory that is su¢ ciently broad and yet su¢ ciently detailed when it comes to key concepts. The text is not tailored to the …eld of motor control (apart from the last section, and the overall emphasis on systems with continuous state) so it will hopefully be of interest to a wider audience. Of special interest in the context of this book is the material on the duality of optimal control and probabilistic inference; such duality suggests that neural information processing in sensory and motor areas may be more similar than currently thought. 
The chapter is organized in the following sections:", "title": "" }, { "docid": "neg:1840296_6", "text": "The output voltage ripple is one of the most significant system parameters in switch-mode power supplies. This ripple degrades the performance of application specific integrated circuits (ASICs). The most common way to reduce it is to use additional integrated low drop-out regulators (LDO) on the ASIC. This technique usually suffers from high system efficiency as it is required for portable electronic systems. It also increases the design challenges of on-chip power management circuits and area required for the LDOs. This work presents a low-power fully integrated 0.97mm2 DC-DC Buck converter with a tuned series LDO with 1mV voltage ripple in a 0.25μm BiCMOS process. The converter prodives a power supply rejection ratio of more than 60 dB from 1 to 6MHz and a load current range of 0...400 mA. A peak efficiency of 93.7% has been measured. For high light load efficiency, automatic mode operation is implemented. To decrease the form factor and costs, the external components count has been reduced to a single inductor of 1 μH and two external capacitors of 2 μF each.", "title": "" }, { "docid": "neg:1840296_7", "text": "We propose a general method called truncated gradient to induce sparsity in the weights of online learning algorithms with convex loss functions. This method has several essential properties: 1. The degree of sparsity is continuous a parameter controls the rate of sparsi cation from no sparsi cation to total sparsi cation. 2. The approach is theoretically motivated, and an instance of it can be regarded as an online counterpart of the popular L1-regularization method in the batch setting. We prove that small rates of sparsi cation result in only small additional regret with respect to typical online learning guarantees. 3. The approach works well empirically. We apply the approach to several datasets and nd that for datasets with large numbers of features, substantial sparsity is discoverable.", "title": "" }, { "docid": "neg:1840296_8", "text": "Memory corruption vulnerabilities are the root cause of many modern attacks. Existing defense mechanisms are inadequate; in general, the software-based approaches are not efficient and the hardware-based approaches are not flexible. In this paper, we present hardware-assisted data-flow isolation, or, HDFI, a new fine-grained data isolation mechanism that is broadly applicable and very efficient. HDFI enforces isolation at the machine word granularity by virtually extending each memory unit with an additional tag that is defined by dataflow. This capability allows HDFI to enforce a variety of security models such as the Biba Integrity Model and the Bell -- LaPadula Model. We implemented HDFI by extending the RISC-V instruction set architecture (ISA) and instantiating it on the Xilinx Zynq ZC706 evaluation board. We ran several benchmarks including the SPEC CINT 2000 benchmark suite. Evaluation results show that the performance overhead caused by our modification to the hardware is low (<; 2%). We also developed or ported several security mechanisms to leverage HDFI, including stack protection, standard library enhancement, virtual function table protection, code pointer protection, kernel data protection, and information leak prevention. 
Our results show that HDFI is easy to use, imposes low performance overhead, and allows us to create more elegant and more secure solutions.", "title": "" }, { "docid": "neg:1840296_9", "text": "We study the algorithmics of information structure design --- a.k.a. persuasion or signaling --- in a fundamental special case introduced by Arieli and Babichenko: multiple agents, binary actions, and no inter-agent externalities. Unlike prior work on this model, we allow many states of nature. We assume that the principal's objective is a monotone set function, and study the problem both in the public signal and private signal models, drawing a sharp contrast between the two in terms of both efficacy and computational complexity.\n When private signals are allowed, our results are largely positive and quite general. First, we use linear programming duality and the equivalence of separation and optimization to show polynomial-time equivalence between (exactly) optimal signaling and the problem of maximizing the objective function plus an additive function. This yields an efficient implementation of the optimal scheme when the objective is supermodular or anonymous. Second, we exhibit a (1-1/e)-approximation of the optimal private signaling scheme, modulo an additive loss of ε, when the objective function is submodular. These two results simplify, unify, and generalize results of [Arieli and Babichenko, 2016] and [Babichenko and Barman, 2016], extending them from a binary state of nature to many states (modulo the additive loss in the latter result). Third, we consider the binary-state case with a submodular objective, and simplify and slightly strengthen the result of [Babichenko and Barman, 2016] to obtain a (1-1/e)-approximation via a scheme which (i) signals independently to each receiver and (ii) is \"oblivious\" in that it does not depend on the objective function so long as it is monotone submodular.\n When only a public signal is allowed, our results are negative. First, we show that it is NP-hard to approximate the optimal public scheme, within any constant factor, even when the objective is additive. Second, we show that the optimal private scheme can outperform the optimal public scheme, in terms of maximizing the sender's objective, by a polynomial factor.", "title": "" }, { "docid": "neg:1840296_10", "text": "A 32-KB standard CMOS antifuse one-time programmable (OTP) ROM embedded in a 16-bit microcontroller as its program memory is designed and implemented in 0.18-mum standard CMOS technology. The proposed 32-KB OTP ROM cell array consists of 4.2 mum2 three-transistor (3T) OTP cells where each cell utilizes a thin gate-oxide antifuse, a high-voltage blocking transistor, and an access transistor, which are all compatible with standard CMOS process. In order for high density implementation, the size of the 3T cell has been reduced by 80% in comparison to previous work. The fabricated total chip size, including 32-KB OTP ROM, which can be programmed via external I 2C master device such as universal I2C serial EEPROM programmer, 16-bit microcontroller with 16-KB program SRAM and 8-KB data SRAM, peripheral circuits to interface other system building blocks, and bonding pads, is 9.9 mm2. 
This paper describes the cell, design, and implementation of high-density CMOS OTP ROM, and shows its promising possibilities in embedded applications", "title": "" }, { "docid": "neg:1840296_11", "text": "In the task of Object Recognition, there exists a dichotomy between the categorization of objects and estimating object pose, where the former necessitates a view-invariant representation, while the latter requires a representation capable of capturing pose information over different categories of objects. With the rise of deep architectures, the prime focus has been on object category recognition. Deep learning methods have achieved wide success in this task. In contrast, object pose regression using these approaches has received relatively much less attention. In this paper we show how deep architectures, specifically Convolutional Neural Networks (CNN), can be adapted to the task of simultaneous categorization and pose estimation of objects. We investigate and analyze the layers of various CNN models and extensively compare between them with the goal of discovering how the layers of distributed representations of CNNs represent object pose information and how this contradicts object category representations. We extensively experiment on two recent large and challenging multi-view datasets. Our models achieve better than state-of-the-art performance on both datasets.", "title": "" }, { "docid": "neg:1840296_12", "text": "In order to summarize the status of rescue robotics, this chapter will cover the basic characteristics of disasters and their impact on robotic design, describe the robots actually used in disasters to date, promising robot designs (e.g., snakes, legged locomotion) and concepts (e.g., robot teams or swarms, sensor networks), methods of evaluation in benchmarks for rescue robotics, and conclude with a discussion of the fundamental problems and open issues facing rescue robotics, and their evolution from an interesting idea to widespread adoption. The Chapter will concentrate on the rescue phase, not recovery, with the understanding that capabilities for rescue can be applied to, and extended for, the recovery phase. The use of robots in the prevention and preparedness phases of disaster management are outside the scope of this chapter.", "title": "" }, { "docid": "neg:1840296_13", "text": "This study highlights the changes in lycopene and β-carotene retention in tomato juice subjected to combined pressure-temperature (P-T) treatments ((high-pressure processing (HPP; 500-700 MPa, 30 °C), pressure-assisted thermal processing (PATP; 500-700 MPa, 100 °C), and thermal processing (TP; 0.1 MPa, 100 °C)) for up to 10 min. Processing treatments utilized raw (untreated) and hot break (∼93 °C, 60 s) tomato juice as controls. Changes in bioaccessibility of these carotenoids as a result of processing were also studied. Microscopy was applied to better understand processing-induced microscopic changes. TP did not alter the lycopene content of the tomato juice. HPP and PATP treatments resulted in up to 12% increases in lycopene extractability. all-trans-β-Carotene showed significant degradation (p < 0.05) as a function of pressure, temperature, and time. Its retention in processed samples varied between 60 and 95% of levels originally present in the control. Regardless of the processing conditions used, <0.5% lycopene appeared in the form of micelles (<0.5% bioaccessibility). 
Electron microscopy images showed more prominent lycopene crystals in HPP and PATP processed juice than in thermally processed juice. However, lycopene crystals did appear to be enveloped regardless of the processing conditions used. The processed juice (HPP, PATP, TP) showed significantly higher (p < 0.05) all-trans-β-carotene micellarization as compared to the raw unprocessed juice (control). Interestingly, hot break juice subjected to combined P-T treatments showed 15-30% more all-trans-β-carotene micellarization than the raw juice subjected to combined P-T treatments. This study demonstrates that combined pressure-heat treatments increase lycopene extractability. However, the in vitro bioaccessibility of carotenoids was not significantly different among the treatments (TP, PATP, HPP) investigated.", "title": "" }, { "docid": "neg:1840296_14", "text": "Document clustering is generally the first step for topic identification. Since many clustering methods operate on the similarities between documents, it is important to build representations of these documents which keep their semantics as much as possible and are also suitable for efficient similarity calculation. As we describe in Koopman et al. (Proceedings of ISSI 2015 Istanbul: 15th International Society of Scientometrics and Informetrics Conference, Istanbul, Turkey, 29 June to 3 July, 2015. Bogaziçi University Printhouse. http://www.issi2015.org/files/downloads/all-papers/1042.pdf , 2015), the metadata of articles in the Astro dataset contribute to a semantic matrix, which uses a vector space to capture the semantics of entities derived from these articles and consequently supports the contextual exploration of these entities in LittleAriadne. However, this semantic matrix does not allow to calculate similarities between articles directly. In this paper, we will describe in detail how we build a semantic representation for an article from the entities that are associated with it. Base on such semantic representations of articles, we apply two standard clustering methods, K-Means and the Louvain community detection algorithm, which leads to our two clustering solutions labelled as OCLC-31 (standing for K-Means) and OCLC-Louvain (standing for Louvain). In this paper, we will give the implementation details and a basic comparison with other clustering solutions that are reported in this special issue.", "title": "" }, { "docid": "neg:1840296_15", "text": "Container technology has the potential to considerably simplify the management of the software stack of High Performance Computing (HPC) clusters. However, poor integration with established HPC technologies is still preventing users and administrators to reap the benefits of containers. Message Passing Interface (MPI) is a pervasive technology used to run scientific software, often written in Fortran and C/C++, that presents challenges for effective integration with containers. This work shows how an existing MPI implementation can be extended to improve this integration.", "title": "" }, { "docid": "neg:1840296_16", "text": "B-trees are used by many file systems to represent files and directories. They provide guaranteed logarithmic time key-search, insert, and remove. File systems like WAFL and ZFS use shadowing, or copy-on-write, to implement snapshots, crash recovery, write-batching, and RAID. 
Serious difficulties arise when trying to use b-trees and shadowing in a single system.\n This article is about a set of b-tree algorithms that respects shadowing, achieves good concurrency, and implements cloning (writeable snapshots). Our cloning algorithm is efficient and allows the creation of a large number of clones.\n We believe that using our b-trees would allow shadowing file systems to better scale their on-disk data structures.", "title": "" }, { "docid": "neg:1840296_17", "text": "In ancient times, people exchanged their goods and services to obtain what they needed (such as clothes and tools) from other people. This system of bartering compensated for the lack of currency. People offered goods/services and received in kind other goods/services. Now, despite the existence of multiple currencies and the progress of humanity from the Stone Age to the Byte Age, people still barter but in a different way. Mainly, people use money to pay for the goods they purchase and the services they obtain.", "title": "" }, { "docid": "neg:1840296_18", "text": "In this study, we propose a research model to assess the effect of a mobile health (mHealth) app on exercise motivation and physical activity of individuals based on the design and self-determination theory. The research model is formulated from the perspective of motivation affordance and gamification. We will discuss how the use of specific gamified features of the mHealth app can trigger/afford corresponding users’ exercise motivations, which further enhance users’ participation in physical activity. We propose two hypotheses to test the research model using a field experiment. We adopt a 3-phase longitudinal approach to collect data in three different time zones, in consistence with approach commonly adopted in psychology and physical activity research, so as to reduce the common method bias in testing the two hypotheses.", "title": "" } ]
1840297
A Tutorial on Deep Learning for Music Information Retrieval
[ { "docid": "pos:1840297_0", "text": "It is clear that the learning speed of feedforward neural networks is in general far slower than required and it has been a major bottleneck in their applications for past decades. Two key reasons behind may be: 1) the slow gradient-based learning algorithms are extensively used to train neural networks, and 2) all the parameters of the networks are tuned iteratively by using such learning algorithms. Unlike these traditional implementations, this paper proposes a new learning algorithm called extreme learning machine (ELM) for single-hidden layer feedforward neural networks (SLFNs) which randomly chooses the input weights and analytically determines the output weights of SLFNs. In theory, this algorithm tends to provide the best generalization performance at extremely fast learning speed. The experimental results based on real-world benchmarking function approximation and classification problems including large complex applications show that the new algorithm can produce best generalization performance in some cases and can learn much faster than traditional popular learning algorithms for feedforward neural networks.", "title": "" }, { "docid": "pos:1840297_1", "text": "In this work we investigate the applicability of unsupervised feature learning methods to the task of automatic genre prediction of music pieces. More specifically we evaluate a framework that recently has been successfully used to recognize objects in images. We first extract local patches from the time-frequency transformed audio signal, which are then pre-processed and used for unsupervised learning of an overcomplete dictionary of local features. For learning we either use a bootstrapped k-means clustering approach or select features randomly. We further extract feature responses in a convolutional manner and train a linear SVM for classification. We extensively evaluate the approach on the GTZAN dataset, emphasizing the influence of important design choices such as dimensionality reduction, pooling and patch dimension on the classification accuracy. We show that convolutional extraction of local feature responses is crucial to reach high performance. Furthermore we find that using this approach, simple and fast learning techniques such as k-means or randomly selected features are competitive with previously published results which also learn features from audio signals.", "title": "" } ]
[ { "docid": "neg:1840297_0", "text": "We present a new local strategy to solve incremental learning tasks. Applied to Support Vector Machines based on local kernel, it allows to avoid re-learning of all the parameters by selecting a working subset where the incremental learning is performed. Automatic selection procedure is based on the estimation of generalization error by using theoretical bounds that involve the margin notion. Experimental simulation on three typical datasets of machine learning give promising results.", "title": "" }, { "docid": "neg:1840297_1", "text": "This paper presents a pattern division multiple access (PDMA) concept for cellular future radio access (FRA) towards the 2020s information society. Different from the current LTE radio access scheme (until Release 11), PDMA is a novel non-orthogonal multiple access technology based on the total optimization of multiple user communication system. It considers joint design from both transmitter and receiver. At the receiver, multiple users are detected by successive interference cancellation (SIC) detection method. Numerical results show that the PDMA system based on SIC improve the average sum rate of users over the orthogonal system with affordable complexity.", "title": "" }, { "docid": "neg:1840297_2", "text": "Functional magnetic resonance imaging was used to assess the cortical areas active during the observation of mouth actions performed by humans and by individuals belonging to other species (monkey and dog). Two types of actions were presented: biting and oral communicative actions (speech reading, lip-smacking, barking). As a control, static images of the same actions were shown. Observation of biting, regardless of the species of the individual performing the action, determined two activation foci (one rostral and one caudal) in the inferior parietal lobule and an activation of the pars opercularis of the inferior frontal gyrus and the adjacent ventral premotor cortex. The left rostral parietal focus (possibly BA 40) and the left premotor focus were very similar in all three conditions, while the right side foci were stronger during the observation of actions made by conspecifics. The observation of speech reading activated the left pars opercularis of the inferior frontal gyrus, the observation of lip-smacking activated a small focus in the pars opercularis bilaterally, and the observation of barking did not produce any activation in the frontal lobe. Observation of all types of mouth actions induced activation of extrastriate occipital areas. These results suggest that actions made by other individuals may be recognized through different mechanisms. Actions belonging to the motor repertoire of the observer (e.g., biting and speech reading) are mapped on the observer's motor system. Actions that do not belong to this repertoire (e.g., barking) are essentially recognized based on their visual properties. We propose that when the motor representation of the observed action is activated, the observer gains knowledge of the observed action in a personal perspective, while this perspective is lacking when there is no motor activation.", "title": "" }, { "docid": "neg:1840297_3", "text": "Linguistic research to date has determined many of the principles that govern the structure of the spatial schemas represented by closed-class forms across the world’s languages. 
contributing to this cumulative understanding have, for example, been Gruber 1965, Fillmore 1968, Leech 1969, Clark 1973, Bennett 1975, Herskovits 1982, Jackendoff 1983, Zubin and Svorou 1984, as well as myself, Talmy 1983, 2000a, 2000b). It is now feasible to integrate these principles and to determine the comprehensive system they belong to for spatial structuring in spoken language. The finding here is that this system has three main parts: the componential, the compositional, and the augmentive.", "title": "" }, { "docid": "neg:1840297_4", "text": "Finger veins have been proved to be an effective biometric for personal identification in the recent years. However, finger vein images are easily affected by influences such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All these factors may contribute to inaccurate region of interest (ROI) definition, and so degrade the performance of finger vein identification system. To improve this problem, in this paper, we propose a finger vein ROI localization method that has high effectiveness and robustness against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correct calculated orientation can support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database, MMCBNU_6000, to verify the robustness of the proposed method. The proposed method shows the segmentation accuracy of 100%. Furthermore, the average processing time of the proposed method is 22 ms for an acquired image, which satisfies the criterion of a real-time finger vein identification system.", "title": "" }, { "docid": "neg:1840297_5", "text": "How much trust a user places in a recommender is crucial to the uptake of the recommendations. Although prior work established various factors that build and sustain user trust, their comparative impact has not been studied in depth. This paper presents the results of a crowdsourced study examining the impact of various recommendation interfaces and content selection strategies on user trust. It evaluates the subjective ranking of nine key factors of trust grouped into three dimensions and examines the differences observed with respect to users' personality traits.", "title": "" }, { "docid": "neg:1840297_6", "text": "Wearing comfort of clothing is dependent on air permeability, moisture absorbency and wicking properties of fabric, which are related to the porosity of fabric. In this work, a plug-in is developed using Python script and incorporated in Abaqus/CAE for the prediction of porosity of plain weft knitted fabrics. The Plug-in is able to automatically generate 3D solid and multifilament weft knitted fabric models and accurately determine the porosity of fabrics in two steps. In this work, plain weft knitted fabrics made of monofilament, multifilament and spun yarn made of staple fibers were used to evaluate the effectiveness of the developed plug-in. In the case of staple fiber yarn, intra yarn porosity was considered in the calculation of porosity. The first step is to develop a 3D geometrical model of plain weft knitted fabric and the second step is to calculate the porosity of the fabric by using the geometrical parameter of 3D weft knitted fabric model generated in step one. 
The predicted porosity of plain weft knitted fabric is extracted in the second step and is displayed in the message area. The predicted results obtained from the plug-in have been compared with the experimental results obtained from previously developed models; they agreed well.", "title": "" }, { "docid": "neg:1840297_7", "text": "Soft materials are being adopted in robotics in order to facilitate biomedical applications and in order to achieve simpler and more capable robots. One route to simplification is to design the robot's body using `smart materials' that carry the burden of control and actuation. Metamaterials enable just such rational design of the material properties. Here we present a soft robot that exploits mechanical metamaterials for the intrinsic synchronization of two passive clutches which contact its travel surface. Doing so allows it to move through an enclosed passage with an inchworm motion propelled by a single actuator. Our soft robot consists of two 3D-printed metamaterials that implement auxetic and normal elastic properties. The design, fabrication and characterization of the metamaterials are described. In addition, a working soft robot is presented. Since the synchronization mechanism is a feature of the robot's material body, we believe that the proposed design will enable compliant and robust implementations that scale well with miniaturization.", "title": "" }, { "docid": "neg:1840297_8", "text": "A novel rectifying circuit topology is proposed for converting electromagnetic pulse waves (PWs), that are collected by a wideband antenna, into dc voltage. The typical incident signal considered in this paper consists of 10-ns pulses modulated around 2.4 GHz with a repetition period of 100 ns. The proposed rectifying circuit topology comprises a double-current architecture with inductances that collect the energy during the pulse delivery as well as an output capacitance that maintains the dc output voltage between the pulses. Experimental results show that the efficiency of the rectifier reaches 64% for a mean available incident power of 4 dBm. Similar performances are achieved when a wideband antenna is combined with the rectifier in order to realize a rectenna. By increasing the repetition period of the incident PWs to 400 ns, the rectifier still operates with an efficiency of 52% for a mean available incident pulse power of −8 dBm. Finally, the proposed PW rectenna is tested for a wireless energy transmission application in a low- $Q$ cavity. The time reversal technique is applied to focus PWs around the desired rectenna. Results show that the rectenna is still efficient when noisy PW is handled.", "title": "" }, { "docid": "neg:1840297_9", "text": "Most modern hypervisors offer powerful resource control primitives such as reservations, limits, and shares for individual virtual machines (VMs). These primitives provide a means to dynamic vertical scaling of VMs in order for the virtual applications to meet their respective service level objectives (SLOs). VMware DRS offers an additional resource abstraction of a resource pool (RP) as a logical container representing an aggregate resource allocation for a collection of VMs. In spite of the abundant research on translating application performance goals to resource requirements, the implementation of VM vertical scaling techniques in commercial products remains limited. In addition, no prior research has studied automatic adjustment of resource control settings at the resource pool level. 
In this paper, we present AppRM, a tool that automatically sets resource controls for both virtual machines and resource pools to meet application SLOs. AppRM contains a hierarchy of virtual application managers and resource pool managers. At the application level, AppRM translates performance objectives into the appropriate resource control settings for the individual VMs running that application. At the resource pool level, AppRM ensures that all important applications within the resource pool can meet their performance targets by adjusting controls at the resource pool level. Experimental results under a variety of dynamically changing workloads composed by multi-tiered applications demonstrate the effectiveness of AppRM. In all cases, AppRM is able to deliver application performance satisfaction without manual intervention.", "title": "" }, { "docid": "neg:1840297_10", "text": "The face image is the most accessible biometric modality which is used for highly accurate face recognition systems, while it is vulnerable to many different types of presentation attacks. Face anti-spoofing is a very critical step before feeding the face image to biometric systems. In this paper, we propose a novel two-stream CNN-based approach for face anti-spoofing, by extracting the local features and holistic depth maps from the face images. The local features facilitate CNN to discriminate the spoof patches independent of the spatial face areas. On the other hand, holistic depth map examine whether the input image has a face-like depth. Extensive experiments are conducted on the challenging databases (CASIA-FASD, MSU-USSA, and Replay Attack), with comparison to the state of the art.", "title": "" }, { "docid": "neg:1840297_11", "text": "We propose a method for learning from streaming visual data using a compact, constant size representation of all the data that was seen until a given moment. Specifically, we construct a “coreset” representation of streaming data using a parallelized algorithm, which is an approximation of a set with relation to the squared distances between this set and all other points in its ambient space. We learn an adaptive object appearance model from the coreset tree in constant time and logarithmic space and use it for object tracking by detection. Our method obtains excellent results for object tracking on three standard datasets over more than 100 videos. The ability to summarize data efficiently makes our method ideally suited for tracking in long videos in presence of space and time constraints. We demonstrate this ability by outperforming a variety of algorithms on the TLD dataset with 2685 frames on average. This coreset based learning approach can be applied for both real-time learning of small, varied data and fast learning of big data.", "title": "" }, { "docid": "neg:1840297_12", "text": "We just released an Open Source receiver that is able to decode IEEE 802.11a/g/p Orthogonal Frequency Division Multiplexing (OFDM) frames in software. This is the first Software Defined Radio (SDR) based OFDM receiver supporting channel bandwidths up to 20MHz that is not relying on additional FPGA code. Our receiver comprises all layers from the physical up to decoding the MAC packet and extracting the payload of IEEE 802.11a/g/p frames. In our demonstration, visitors can interact live with the receiver while it is decoding frames that are sent over the air. The impact of moving the antennas and changing the settings are displayed live in time and frequency domain. 
Furthermore, the decoded frames are fed to Wireshark where the WiFi traffic can be further investigated. It is possible to access and visualize the data in every decoding step from the raw samples, the autocorrelation used for frame detection, the subcarriers before and after equalization, up to the decoded MAC packets. The receiver is completely Open Source and represents one step towards experimental research with SDR.", "title": "" }, { "docid": "neg:1840297_13", "text": "H2O2 has been found to be required for the activity of the main microbial enzymes responsible for lignin oxidative cleavage, peroxidases. Along with other small radicals, it is implicated in the early attack of plant biomass by fungi. Among the few extracellular H2O2-generating enzymes known are the glyoxal oxidases (GLOX). GLOX is a copper-containing enzyme, sharing high similarity at the level of active site structure and chemistry with galactose oxidase. Genes encoding GLOX enzymes are widely distributed among wood-degrading fungi especially white-rot degraders, plant pathogenic and symbiotic fungi. GLOX has also been identified in plants. Although widely distributed, only few examples of characterized GLOX exist. The first characterized fungal GLOX was isolated from Phanerochaete chrysosporium. The GLOX from Utilago maydis has a role in filamentous growth and pathogenicity. More recently, two other glyoxal oxidases from the fungus Pycnoporus cinnabarinus were also characterized. In plants, GLOX from Vitis pseudoreticulata was found to be implicated in grapevine defence mechanisms. Fungal GLOX were found to be activated by peroxidases in vitro suggesting a synergistic and regulatory relationship between these enzymes. The substrates oxidized by GLOX are mainly aldehydes generated during lignin and carbohydrates degradation. The reactions catalysed by this enzyme such as the oxidation of toxic molecules and the production of valuable compounds (organic acids) makes GLOX a promising target for biotechnological applications. This aspect on GLOX remains new and needs to be investigated.", "title": "" }, { "docid": "neg:1840297_14", "text": "In this paper, we focus on a recent Web trend called microblogging, and in particular a site called Twitter. The content of such a site is an extraordinarily large number of small textual messages, posted by millions of users, at random or in response to perceived events or situations. We have developed an algorithm that takes a trending phrase or any phrase specified by a user, collects a large number of posts containing the phrase, and provides an automatically created summary of the posts related to the term. We present examples of summaries we produce along with initial evaluation.", "title": "" }, { "docid": "neg:1840297_15", "text": "The motion of a plane can be described by a homography. We study how to parameterize homographies to maximize plane estimation performance. We compare the usual 3 × 3 matrix parameterization with a parameterization that combines 4 fixed points in one of the images with 4 variable points in the other image. We empirically show that this 4pt parameterization is far superior. We also compare both parameterizations with a variety of direct parameterizations. In the case of unknown relative orientation, we compare with a direct parameterization of the plane equation, and the rotation and translation of the camera(s). We show that the direct parameteri-zation is both less accurate and far less robust than the 4-point parameterization. 
We explain the poor performance using a measure of independence of the Jacobian images. In the fully calibrated setting, the direct parameterization just consists of 3 parameters of the plane equation. We show that this parameterization is far more robust than the 4-point parameterization, but only approximately as accurate. In the case of a moving stereo rig we find that the direct parameterization of plane equation, camera rotation and translation performs very well, both in terms of accuracy and robustness. This is in contrast to the corresponding direct parameterization in the case of unknown relative orientation. Finally, we illustrate the use of plane estimation in 2 automotive applications.", "title": "" }, { "docid": "neg:1840297_16", "text": "A physical map has been constructed of the human genome containing 15,086 sequence-tagged sites (STSs), with an average spacing of 199 kilobases. The project involved assembly of a radiation hybrid map of the human genome containing 6193 loci and incorporated a genetic linkage map of the human genome containing 5264 loci. This information was combined with the results of STS-content screening of 10,850 loci against a yeast artificial chromosome library to produce an integrated map, anchored by the radiation hybrid and genetic maps. The map provides radiation hybrid coverage of 99 percent and physical coverage of 94 percent of the human genome. The map also represents an early step in an international project to generate a transcript map of the human genome, with more than 3235 expressed sequences localized. The STSs in the map provide a scaffold for initiating large-scale sequencing of the human genome.", "title": "" }, { "docid": "neg:1840297_17", "text": "Broadband use is booming around the globe as the infrastructure is built to provide high speed Internet and Internet Protocol television (IPTV) services. Driven by fierce competition and the search for increasing average revenue per user (ARPU), operators are evolving so they can deliver services within the home that involve a wide range of technologies, terminals, and appliances, as well as software that is increasingly rich and complex. “It should all work” is the key theme on the end user's mind, yet call centers are confronted with a multitude of consumer problems. The demarcation point between provider network and home network is blurring, in fact, if not yet in the consumer's mind. In this context, operators need to significantly rethink service lifecycle management. This paper explains how home and access support systems cover the most critical part of the network in service delivery. They build upon the inherent operation support features of access multiplexers, network termination devices, and home devices to allow the planning, fulfillment, operation, and assurance of new services.", "title": "" }, { "docid": "neg:1840297_18", "text": "We introduce several probabilistic models for learning the lexicon of a semantic parser. Lexicon learning is the first step of training a semantic parser for a new application domain and the quality of the learned lexicon significantly affects both the accuracy and efficiency of the final semantic parser. Existing work on lexicon learning has focused on heuristic methods that lack convergence guarantees and require significant human input in the form of lexicon templates or annotated logical forms. 
In contrast, our probabilistic models are trained directly from question/answer pairs using EM and our simplest model has a concave objective that guarantees convergence to a global optimum. An experimental evaluation on a set of 4th grade science questions demonstrates that our models improve semantic parser accuracy (35-70% error reduction) and efficiency (4-25x more sentences per second) relative to prior work despite using less human input. Our models also obtain competitive results on GEO880 without any dataset-specific engineering.", "title": "" }, { "docid": "neg:1840297_19", "text": "This paper presents a new nanolubricant for the intermediate gearbox of the Apache aircraft. Historically, the intermediate gearbox has been prone to grease leaking and this naturally occurring fault has negatively impacted the airworthiness of the aircraft. In this study, the incorporation of graphite nanoparticles in mobile aviation gear oil is presented as a nanofluid with excellent thermo-physical properties. Condition-based maintenance practices are demonstrated where four nanoparticle additive oil samples with different concentrations are tested in a full-scale tail rotor drive-train test stand, in addition to a baseline sample for comparison purposes. Different condition monitoring results suggest the capacity of the nanofluids to have significant gearbox performance benefits when compared to the base oil.", "title": "" } ]
1840298
Word2Vec and Doc2Vec in Unsupervised Sentiment Analysis of Clinical Discharge Summaries
[ { "docid": "pos:1840298_0", "text": "In this paper we present a linguistic resource for the lexical representation of affective knowledge. This resource (named WORDNET-AFFECT) was developed starting from WORDNET, through a selection and tagging of a subset of synsets representing the affective", "title": "" }, { "docid": "pos:1840298_1", "text": "Physicians and nurses express their judgments and observations towards a patient’s health status in clinical narratives. Thus, their judgments are explicitly or implicitly included in patient records. To get impressions on the current health situation of a patient or on changes in the status, analysis and retrieval of this subjective content is crucial. In this paper, we approach this question as sentiment analysis problem and analyze the feasibility of assessing these judgments in clinical text by means of general sentiment analysis methods. Specifically, the word usage in clinical narratives and in a general text corpus is compared. The linguistic characteristics of judgments in clinical narratives are collected. Besides, the requirements for sentiment analysis and retrieval from clinical narratives are derived.", "title": "" } ]
[ { "docid": "neg:1840298_0", "text": "UNLABELLED\nA previously characterized rice hull smoke extract (RHSE) was tested for bactericidal activity against Salmonella Typhimurium using the disc-diffusion method. The minimum inhibitory concentration (MIC) value of RHSE was 0.822% (v/v). The in vivo antibacterial activity of RHSE (1.0%, v/v) was also examined in a Salmonella-infected Balb/c mouse model. Mice infected with a sublethal dose of the pathogens were administered intraperitoneally a 1.0% solution of RHSE at four 12-h intervals during the 48-h experimental period. The results showed that RHSE inhibited bacterial growth by 59.4%, 51.4%, 39.6%, and 28.3% compared to 78.7%, 64.6%, 59.2%, and 43.2% inhibition with the medicinal antibiotic vancomycin (20 mg/mL). By contrast, 4 consecutive administrations at 12-h intervals elicited the most effective antibacterial effect of 75.0% and 85.5% growth reduction of the bacteria by RHSE and vancomycin, respectively. The combination of RHSE and vancomycin acted synergistically against the pathogen. The inclusion of RHSE (1.0% v/w) as part of a standard mouse diet fed for 2 wk decreased mortality of 10 mice infected with lethal doses of the Salmonella. Photomicrographs of histological changes in liver tissues show that RHSE also protected the liver against Salmonella-induced pathological necrosis lesions. These beneficial results suggest that the RHSE has the potential to complement wood-derived smokes as antimicrobial flavor formulations for application to human foods and animal feeds.\n\n\nPRACTICAL APPLICATION\nThe new antimicrobial and anti-inflammatory rice hull derived liquid smoke has the potential to complement widely used wood-derived liquid smokes as an antimicrobial flavor and health-promoting formulation for application to foods.", "title": "" }, { "docid": "neg:1840298_1", "text": "Latent variable time-series models are among the most heavily used tools from machine learning and applied statistics. These models have the advantage of learning latent structure both from noisy observations and from the temporal ordering in the data, where it is assumed that meaningful correlation structure exists across time. A few highly-structured models, such as the linear dynamical system with linear-Gaussian observations, have closed-form inference procedures (e.g. the Kalman Filter), but this case is an exception to the general rule that exact posterior inference in more complex generative models is intractable. Consequently, much work in time-series modeling focuses on approximate inference procedures for one particular class of models. Here, we extend recent developments in stochastic variational inference to develop a ‘black-box’ approximate inference technique for latent variable models with latent dynamical structure. We propose a structured Gaussian variational approximate posterior that carries the same intuition as the standard Kalman filter-smoother but, importantly, permits us to use the same inference approach to approximate the posterior of much more general, nonlinear latent variable generative models. 
We show that our approach recovers accurate estimates in the case of basic models with closed-form posteriors, and more interestingly performs well in comparison to variational approaches that were designed in a bespoke fashion for specific non-conjugate models.", "title": "" }, { "docid": "neg:1840298_2", "text": "Enterprises and service providers are increasingly looking to global service delivery as a means for containing costs while improving the quality of service delivery. However, it is often difficult to effectively manage the conflicting needs associated with dynamic customer workload, strict service level constraints, and efficient service personnel organization. In this paper we propose a dynamic approach for workload and personnel management, where organization of personnel is dynamically adjusted based upon differences between observed and target service level metrics. Our approach consists of constructing a dynamic service delivery organization and developing a feedback control mechanism for dynamic workload management. We demonstrate the effectiveness of the proposed approach in an IT incident management example designed based on a large service delivery environment handling more than ten thousand service requests over a period of six months.", "title": "" }, { "docid": "neg:1840298_3", "text": "As fault diagnosis and prognosis systems in aerospace applications become more capable, the ability to utilize information supplied by them becomes increasingly important. While certain types of vehicle health data can be effectively processed and acted upon by crew or support personnel, others, due to their complexity or time constraints, require either automated or semi-automated reasoning. Prognostics-enabled Decision Making (PDM) is an emerging research area that aims to integrate prognostic health information and knowledge about the future operating conditions into the process of selecting subsequent actions for the system. The newly developed PDM algorithms require suitable software and hardware platforms for testing under realistic fault scenarios. The paper describes the development of such a platform, based on the K11 planetary rover prototype. A variety of injectable fault modes are being investigated for electrical, mechanical, and power subsystems of the testbed, along with methods for data collection and processing. In addition to the hardware platform, a software simulator with matching capabilities has been developed. The simulator allows for prototyping and initial validation of the algorithms prior to their deployment on the K11. The simulator is also available to the PDM algorithms to assist with the reasoning process. A reference set of diagnostic, prognostic, and decision making algorithms is also described, followed by an overview of the current test scenarios and the results of their execution on the simulator. Edward Balaban et.al. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 United States License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.", "title": "" }, { "docid": "neg:1840298_4", "text": "In this paper we report on our ongoing studies around the application of Augmented Reality methods to support the order picking process of logistics applications. Order picking is the gathering of goods out of a prepared range of items following some customer orders. 
We named the visual support of this order picking process using Head-mounted Displays “Pick-by-Vision”. This work presents the case study of bringing our previously developed Pick-by-Vision system from the lab to an experimental factory hall to evaluate it under more realistic conditions. This includes the execution of two user studies. In the first one we compared our Pick-by-Vision system with and without tracking to picking using a paper list to check picking performance and quality in general. In a second test we had subjects using the Pick-by-Vision system continuously for two hours to gain in-depth insight into the longer use of our system, checking user strain besides the general performance. Furthermore, we report on the general obstacles of trying to use HMD-based AR in an industrial setup and discuss our observations of user behaviour.", "title": "" }, { "docid": "neg:1840298_5", "text": "While studies of social movements have mostly examined prevalent public discourses, undercurrents (the backstage practices consisting of meaning-making processes, narratives, and situated work) have received less attention. Through a qualitative interview study with sixteen participants, we examine the role of social media in supporting the undercurrents of the Umbrella Movement in Hong Kong. Interviews focused on an intense period of the movement exemplified by sit-in activities inspired by Occupy Wall Street in the USA. Whereas the use of Facebook for public discourse was similar to what has been reported in other studies, we found that an ecology of social media tools such as Facebook, WhatsApp, Telegram, and Google Docs mediated undercurrents that served to ground the public discourse of the movement. We discuss how the undercurrents sustained and developed public discourses in concrete ways.", "title": "" }, { "docid": "neg:1840298_6", "text": "In this paper, we study the effect of using n-grams (sequences of words of length n) for text categorization. We use an efficient algorithm for generating such n-gram features in two benchmark domains, the 20 newsgroups data set and 21,578 REUTERS newswire articles. Our results with the rule learning algorithm RIPPER indicate that, after the removal of stop words, word sequences of length 2 or 3 are most useful. Using longer sequences reduces classification performance.", "title": "" }, { "docid": "neg:1840298_7", "text": "Software defects, commonly known as bugs, present a serious challenge for system reliability and dependability. Once a program failure is observed, the debugging activities to locate the defects are typically nontrivial and time consuming. In this paper, we propose a novel automated approach to pin-point the root-causes of software failures.\n Our proposed approach consists of three steps. The first step is bug prediction, which leverages the existing work on anomaly-based bug detection as exceptional behavior during program execution has been shown to frequently point to the root cause of a software failure. The second step is bug isolation, which eliminates false-positive bug predictions by checking whether the dynamic forward slices of bug predictions lead to the observed program failure. The last step is bug validation, in which the isolated anomalies are validated by dynamically nullifying their effects and observing if the program still fails. The whole bug prediction, isolation and validation process is fully automated and can be implemented with efficient architectural support.
Our experiments with 6 programs and 7 bugs, including a real bug in the gcc 2.95.2 compiler, show that our approach is highly effective at isolating only the relevant anomalies. Compared to state-of-art debugging techniques, our proposed approach pinpoints the defect locations more accurately and presents the user with a much smaller code set to analyze.", "title": "" }, { "docid": "neg:1840298_8", "text": "Cloud computing opens a new era in IT as it can provide various elastic and scalable IT services in a pay-as-you-go fashion, where its users can reduce the huge capital investments in their own IT infrastructure. In this philosophy, users of cloud storage services no longer physically maintain direct control over their data, which makes data security one of the major concerns of using cloud. Existing research work already allows data integrity to be verified without possession of the actual data file. When the verification is done by a trusted third party, this verification process is also called data auditing, and this third party is called an auditor. However, such schemes in existence suffer from several common drawbacks. First, a necessary authorization/authentication process is missing between the auditor and cloud service provider, i.e., anyone can challenge the cloud service provider for a proof of integrity of certain file, which potentially puts the quality of the so-called `auditing-as-a-service' at risk; Second, although some of the recent work based on BLS signature can already support fully dynamic data updates over fixed-size data blocks, they only support updates with fixed-sized blocks as basic unit, which we call coarse-grained updates. As a result, every small update will cause re-computation and updating of the authenticator for an entire file block, which in turn causes higher storage and communication overheads. In this paper, we provide a formal analysis for possible types of fine-grained data updates and propose a scheme that can fully support authorized auditing and fine-grained update requests. Based on our scheme, we also propose an enhancement that can dramatically reduce communication overheads for verifying small updates. Theoretical analysis and experimental results demonstrate that our scheme can offer not only enhanced security and flexibility, but also significantly lower overhead for big data applications with a large number of frequent small updates, such as applications in social media and business transactions.", "title": "" }, { "docid": "neg:1840298_9", "text": "Dealing with high-dimensional input spaces, like visual input, is a challenging task for reinforcement learning (RL). Neuroevolution (NE), used for continuous RL problems, has to either reduce the problem dimensionality by (1) compressing the representation of the neural network controllers or (2) employing a pre-processor (compressor) that transforms the high-dimensional raw inputs into low-dimensional features. In this paper, we are able to evolve extremely small recurrent neural network (RNN) controllers for a task that previously required networks with over a million weights. The high-dimensional visual input, which the controller would normally receive, is first transformed into a compact feature vector through a deep, max-pooling convolutional neural network (MPCNN). Both the MPCNN preprocessor and the RNN controller are evolved successfully to control a car in the TORCS racing simulator using only visual input. 
This is the first use of deep learning in the context evolutionary RL.", "title": "" }, { "docid": "neg:1840298_10", "text": "This document describes an extension of the One-Time Password (OTP) algorithm, namely the HMAC-based One-Time Password (HOTP) algorithm, as defined in RFC 4226, to support the time-based moving factor. The HOTP algorithm specifies an event-based OTP algorithm, where the moving factor is an event counter. The present work bases the moving factor on a time value. A time-based variant of the OTP algorithm provides short-lived OTP values, which are desirable for enhanced security. The proposed algorithm can be used across a wide range of network applications, from remote Virtual Private Network (VPN) access and Wi-Fi network logon to transaction-oriented Web applications. The authors believe that a common and shared algorithm will facilitate adoption of two-factor authentication on the Internet by enabling interoperability across commercial and open-source implementations. (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Not all documents approved by the IESG are a candidate for any level of Internet Standard; see Section 2 of RFC 5741. Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.", "title": "" }, { "docid": "neg:1840298_11", "text": "A vehicle tracking system is very useful for tracking the movement of a vehicle from any location at any time. In this work, real time Google map and Arduino based vehicle tracking system is implemented with Global Positioning System (GPS) and Global system for mobile communication (GSM) technology. GPS module provides geographic coordinates at regular time intervals. Then the GSM module transmits the location of vehicle to cell phone of owner/user in terms of latitude and longitude. At the same time, location is displayed on LCD. Finally, Google map displays the location and name of the place on cell phone. Thus, owner/user will be able to continuously monitor a moving vehicle using the cell phone. In order to show the feasibility and effectiveness of the system, this work presents experimental result of the vehicle tracking system. The proposed system is user friendly and ensures safety and surveillance at low maintenance cost.", "title": "" }, { "docid": "neg:1840298_12", "text": "Though autonomous vehicles are currently operating in several places, many important questions within the field of autonomous vehicle research remain to be addressed satisfactorily. In this paper, we examine the role of communication between pedestrians and autonomous vehicles at unsignalized intersections. The nature of interaction between pedestrians and autonomous vehicles remains mostly in the realm of speculation currently. 
Of course, pedestrian’s reactions towards autonomous vehicles will gradually change over time owing to habituation, but it is clear that this topic requires urgent and ongoing study, not least of all because engineers require some working model for pedestrian-autonomous-vehicle communication. Our paper proposes a decision-theoretic model that expresses the interaction between a pedestrian and a vehicle. The model considers the interaction between a pedestrian and a vehicle as expressed an MDP, based on prior work conducted by psychologists examining similar experimental conditions. We describe this model and our simulation study of behavior it exhibits. The preliminary results on evaluating the behavior of the autonomous vehicle are promising and we believe it can help reduce the data needed to develop fuller models.", "title": "" }, { "docid": "neg:1840298_13", "text": "Heads of Government from Asia and the Pacific have committed to a malaria-free region by 2030. In 2015, the total number of confirmed cases reported to the World Health Organization by 22 Asia Pacific countries was 2,461,025. However, this was likely a gross underestimate due in part to incidence data not being available from the wide variety of known sources. There is a recognized need for an accurate picture of malaria over time and space to support the goal of elimination. A survey was conducted to gain a deeper understanding of the collection of malaria incidence data for surveillance by National Malaria Control Programmes in 22 countries identified by the Asia Pacific Leaders Malaria Alliance. In 2015–2016, a short questionnaire on malaria surveillance was distributed to 22 country National Malaria Control Programmes (NMCP) in the Asia Pacific. It collected country-specific information about the extent of inclusion of the range of possible sources of malaria incidence data and the role of the private sector in malaria treatment. The findings were used to produce recommendations for the regional heads of government on improving malaria surveillance to inform regional efforts towards malaria elimination. A survey response was received from all 22 target countries. Most of the malaria incidence data collected by NMCPs originated from government health facilities, while many did not collect comprehensive data from mobile and migrant populations, the private sector or the military. All data from village health workers were included by 10/20 countries and some by 5/20. Other sources of data included by some countries were plantations, police and other security forces, sentinel surveillance sites, research or academic institutions, private laboratories and other government ministries. Malaria was treated in private health facilities in 19/21 countries, while anti-malarials were available in private pharmacies in 16/21 and private shops in 6/21. Most countries use primarily paper-based reporting. Most collected malaria incidence data in the Asia Pacific is from government health facilities while data from a wide variety of other known sources are often not included in national surveillance databases. In particular, there needs to be a concerted regional effort to support inclusion of data on mobile and migrant populations and the private sector. There should also be an emphasis on electronic reporting and data harmonization across organizations. 
This will provide a more accurate and up to date picture of the true burden and distribution of malaria and will be of great assistance in helping realize the goal of malaria elimination in the Asia Pacific by 2030.", "title": "" }, { "docid": "neg:1840298_14", "text": "On 14 November 2016, northeastern South Island of New Zealand was struck by a major moment magnitude (Mw) 7.8 earthquake. Field observations, in conjunction with interferometric synthetic aperture radar, Global Positioning System, and seismology data, reveal this to be one of the most complex earthquakes ever recorded. The rupture propagated northward for more than 170 kilometers along both mapped and unmapped faults before continuing offshore at the island’s northeastern extent. Geodetic and field observations reveal surface ruptures along at least 12 major faults, including possible slip along the southern Hikurangi subduction interface; extensive uplift along much of the coastline; and widespread anelastic deformation, including the ~8-meter uplift of a fault-bounded block. This complex earthquake defies many conventional assumptions about the degree to which earthquake ruptures are controlled by fault segmentation and should motivate reevaluation of these issues in seismic hazard models.", "title": "" }, { "docid": "neg:1840298_15", "text": "In this paper, we focus on how to boost the multi-view clustering by exploring the complementary information among multi-view features. A multi-view clustering framework, called Diversity-induced Multi-view Subspace Clustering (DiMSC), is proposed for this task. In our method, we extend the existing subspace clustering into the multi-view domain, and utilize the Hilbert Schmidt Independence Criterion (HSIC) as a diversity term to explore the complementarity of multi-view representations, which could be solved efficiently by using the alternating minimizing optimization. Compared to other multi-view clustering methods, the enhanced complementarity reduces the redundancy between the multi-view representations, and improves the accuracy of the clustering results. Experiments on both image and video face clustering well demonstrate that the proposed method outperforms the state-of-the-art methods.", "title": "" }, { "docid": "neg:1840298_16", "text": "Electro-rheological (ER) fluids are smart fluids which can transform into solid-like phase by applying an electric field. This process is reversible and can be strategically used to build fluidic components for innovative soft robots capable of soft locomotion. In this work, we show the potential applications of ER fluids to build valves that simplify design of fluidic based soft robots. We propose the design and development of a composite ER valve, aimed at controlling the flexibility of soft robots bodies by controlling the ER fluid flow. We present how an ad hoc number of such soft components can be embodied in a simple crawling soft robot (Wormbot); in a locomotion mechanism capable of forward motion through rotation; and, in a tendon driven continuum arm. All these embodiments show how simplification of the hydraulic circuits relies on the simple structure of ER valves. Finally, we address preliminary experiments to characterize the behavior of Wormbot in terms of actuation forces.", "title": "" }, { "docid": "neg:1840298_17", "text": "The goal of all vitreous surgery is to perform the desired intraoperative intervention with minimum collateral damage in the most efficient way possible. 
An understanding of the principles of fluidics is of importance to all vitreoretinal surgeons to achieve these aims. Advances in technology mean that surgeons are being given increasing choice in the settings they are able to select for surgery. Manufacturers are marketing systems with aspiration driven by peristaltic, Venturi and hybrid pumps. Increasingly fast cut rates are offered with optimised, and in some cases surgeon-controlled, duty cycles. Function-specific cutters are becoming available and narrow-gauge instrumentation is evolving to meet surgeon demands with higher achievable flow rates. In parallel with the developments in outflow technology, infusion systems are advancing with lowering flow resistance and intraocular pressure control to improve fluidic stability during surgery. This review discusses the important aspects of fluidic technology so that surgeons can select the optimum machine parameters to carry out safe and effective surgery.", "title": "" }, { "docid": "neg:1840298_18", "text": "Fine-grained image categories recognition is a challenging task aiming at distinguishing objects belonging to the same basic-level category, such as leaf or mushroom. It is a useful technique that can be applied for species recognition, face verification, and etc. Most of the existing methods have difficulties to automatically detect discriminative object components. In this paper, we propose a new fine-grained image categorization model that can be deemed as an improved version spatial pyramid matching (SPM). Instead of the conventional SPM that enumeratively conducts cell-to-cell matching between images, the proposed model combines multiple cells into cellets that are highly responsive to object fine-grained categories. In particular, we describe object components by cellets that connect spatially adjacent cells from the same pyramid level. Straightforwardly, image categorization can be casted as the matching between cellets extracted from pairwise images. Toward an effective matching process, a hierarchical sparse coding algorithm is derived that represents each cellet by a linear combination of the basis cellets. Further, a linear discriminant analysis (LDA)-like scheme is employed to select the cellets with high discrimination. On the basis of the feature vector built from the selected cellets, fine-grained image categorization is conducted by training a linear SVM. Experimental results on the Caltech-UCSD birds, the Leeds butterflies, and the COSMIC insects data sets demonstrate our model outperforms the state-of-the-art. Besides, the visualized cellets show discriminative object parts are localized accurately.", "title": "" }, { "docid": "neg:1840298_19", "text": "This paper presents the basic results for using the parallel coordinate representation as a high dimensional data analysis tool. Several alternatives are reviewed. The basic algorithm for parallel coordinates is laid out and a discussion of its properties as a projective transformation are shown. The several of the duality results are discussed along with their interpretations as data analysis tools. A discussion of permutations of the parallel coordinate axes is given and some examples are given. Some extensions of the parallel coordinate idea are given. The paper closes with a discussion of implementation and some of our experiences are relayed. 
This research was supported by the Air Force Office of Scientific Research under grant number AFOSR-870179, by the Army Research Office under contract number DAAL03-87-K-0087 and by the National Science Foundation under grant number DMS-8701931.", "title": "" } ]
1840299
Mechanical design and basic analysis of a modular robot with special climbing and manipulation functions
[ { "docid": "pos:1840299_0", "text": "This paper is concerned with the derivation of the kinematics model of the University of Tehran-Pole Climbing Robot (UT-PCR). As the first step, an appropriate set of coordinates is selected and used to describe the state of the robot. Nonholonomic constraints imposed by the wheels are then expressed as a set of differential equations. By describing these equations in terms of the state of the robot an underactuated driftless nonlinear control system with affine inputs that governs the motion of the robot is derived. A set of experimental results are also given to show the capability of the UT-PCR in climbing a stepped pole.", "title": "" } ]
[ { "docid": "neg:1840299_0", "text": "We show that information about social relationships can be used to improve user-level sentiment analysis. The main motivation behind our approach is that users that are somehow \"connected\" may be more likely to hold similar opinions; therefore, relationship information can complement what we can extract about a user's viewpoints from their utterances. Employing Twitter as a source for our experimental data, and working within a semi-supervised framework, we propose models that are induced either from the Twitter follower/followee network or from the network in Twitter formed by users referring to each other using \"@\" mentions. Our transductive learning results reveal that incorporating social-network information can indeed lead to statistically significant sentiment classification improvements over the performance of an approach based on Support Vector Machines having access only to textual features.", "title": "" }, { "docid": "neg:1840299_1", "text": "The rising popularity of Android and the GUI-driven nature of its apps have motivated the need for applicable automated GUI testing techniques. Although exhaustive testing of all possible combinations is the ideal upper bound in combinatorial testing, it is often infeasible, due to the combinatorial explosion of test cases. This paper presents TrimDroid, a framework for GUI testing of Android apps that uses a novel strategy to generate tests in a combinatorial, yet scalable, fashion. It is backed with automated program analysis and formally rigorous test generation engines. TrimDroid relies on program analysis to extract formal specifications. These specifications express the app's behavior (i.e., control flow between the various app screens) as well as the GUI elements and their dependencies. The dependencies among the GUI elements comprising the app are used to reduce the number of combinations with the help of a solver. Our experiments have corroborated TrimDroid's ability to achieve a comparable coverage as that possible under exhaustive GUI testing using significantly fewer test cases.", "title": "" }, { "docid": "neg:1840299_2", "text": "This paper presents a low profile ultrawideband tightly coupled phased array antenna with integrated feedlines. The aperture array consists of planar element pairs with fractal geometry. In each element these pairs are set orthogonal to each other for dual polarisation. The design is an array of closely capacitively coupled pairs of fractal octagonal rings. The adjustment of the capacitive load at the tip end of the elements and the strong mutual coupling between the elements, enables a wideband conformal performance. Adding a ground plane below the array partly compensates for the frequency variation of the array impedance, providing further enhancement in the array bandwidth. Additional improvement is achieved by placing another layer of conductive elements at a defined distance above the radiating elements. A Genetic Algorithm was scripted in MATLAB and combined with the HFSS simulator, providing an easy optimisation tool across the operational bandwidth for the array unit cell design parameters. The proposed antenna shows a wide-scanning ability with a low cross-polarisation level over a wide bandwidth.", "title": "" }, { "docid": "neg:1840299_3", "text": "Widespread deployment of the Internet enabled building of an emerging IT delivery model, i.e., cloud computing. 
Albeit cloud computing-based services have rapidly developed, their security aspects are still at the initial stage of development. In order to preserve cybersecurity in cloud computing, cybersecurity information that will be exchanged within it needs to be identified and discussed. For this purpose, we propose an ontological approach to cybersecurity in cloud computing. We build an ontology for cybersecurity operational information based on actual cybersecurity operations mainly focused on non-cloud computing. In order to discuss necessary cybersecurity information in cloud computing, we apply the ontology to cloud computing. Through the discussion, we identify essential changes in cloud computing such as data-asset decoupling and clarify the cybersecurity information required by the changes such as data provenance and resource dependency information.", "title": "" }, { "docid": "neg:1840299_4", "text": "Corpus-based set expansion (i.e., finding the “complete” set of entities belonging to the same semantic class, based on a given corpus and a tiny set of seeds) is a critical task in knowledge discovery. It may facilitate numerous downstream applications, such as information extraction, taxonomy induction, question answering, and web search. To discover new entities in an expanded set, previous approaches either make one-time entity ranking based on distributional similarity, or resort to iterative pattern-based bootstrapping. The core challenge for these methods is how to deal with noisy context features derived from free-text corpora, which may lead to entity intrusion and semantic drifting. In this study, we propose a novel framework, SetExpan, which tackles this problem, with two techniques: (1) a context feature selection method that selects clean context features for calculating entity-entity distributional similarity, and (2) a ranking-based unsupervised ensemble method for expanding entity set based on denoised context features. Experiments on three datasets show that SetExpan is robust and outperforms previous state-of-the-art methods in terms of mean average precision.", "title": "" }, { "docid": "neg:1840299_5", "text": "Steganography is a method of hiding secret messages in a cover object while communication takes place between sender and receiver. Security of confidential information has always been a major issue from the past times to the present time. It has always been the interested topic for researchers to develop secure techniques to send data without revealing it to anyone other than the receiver. Therefore from time to time researchers have developed many techniques to fulfill secure transfer of data and steganography is one of them. In this paper we have proposed a new technique of image steganography i.e. Hash-LSB with RSA algorithm for providing more security to data as well as our data hiding method. The proposed technique uses a hash function to generate a pattern for hiding data bits into LSB of RGB pixel values of the cover image. This technique makes sure that the message has been encrypted before hiding it into a cover image. If in any case the cipher text got revealed from the cover image, the intermediate person other than receiver can't access the message as it is in encrypted form.", "title": "" }, { "docid": "neg:1840299_6", "text": "We show how a simple convolutional neural network (CNN) can be trained to accurately and robustly regress 6 degrees of freedom (6DoF) 3D head pose, directly from image intensities. 
We further explain how this FacePoseNet (FPN) can be used to align faces in 2D and 3D as an alternative to explicit facial landmark detection for these tasks. We claim that in many cases the standard means of measuring landmark detector accuracy can be misleading when comparing different face alignments. Instead, we compare our FPN with existing methods by evaluating how they affect face recognition accuracy on the IJB-A and IJB-B benchmarks: using the same recognition pipeline, but varying the face alignment method. Our results show that (a) better landmark detection accuracy measured on the 300W benchmark does not necessarily imply better face recognition accuracy. (b) Our FPN provides superior 2D and 3D face alignment on both benchmarks. Finally, (c), FPN aligns faces at a small fraction of the computational cost of comparably accurate landmark detectors. For many purposes, FPN is thus a far faster and far more accurate face alignment method than using facial landmark detectors.", "title": "" }, { "docid": "neg:1840299_7", "text": "With rapid development, wireless sensor networks (WSNs) have been focused on improving the performance consist of energy efficiency, communication effectiveness, and system throughput. Many novel mechanisms have been implemented by adapting the social behaviors of natural creatures, such as bats, birds, ants, fish and honeybees. These systems are known as nature inspired systems or swarm intelligence in in order to provide optimization strategies, handle large-scale networks and avoid resource constraints. Spider monkey optimization (SMO) is a recent addition to the family of swarm intelligence algorithms by structuring the social foraging behavior of spider monkeys. In this paper, we aim to study the mechanism of SMO in the field of WSNs, formulating the mathematical model of the behavior patterns which cluster-based Spider Monkey Optimization (SMO-C) approach is adapted. In addition, our proposed methodology based on the Spider Monkey's behavioral structure aims to improve the traditional routing protocols in term of low-energy consumption and system quality of the network.", "title": "" }, { "docid": "neg:1840299_8", "text": "We introduce a tree manipulation language, Fast, that overcomes technical limitations of previous tree manipulation languages, such as XPath and XSLT which do not support precise program analysis, or TTT and Tiburon which only support trees over finite alphabets. At the heart of Fast is a combination of SMT solvers and tree transducers, enabling it to model programs whose input and output can range over any decidable theory. The language can express multiple applications. We write an HTML “sanitizer” in Fast and obtain results comparable to leading libraries but with smaller code. Next we show how augmented reality “tagging” applications can be checked for potential overlap in milliseconds using Fast type checking. We show how transducer composition enables deforestation for improved performance. Overall, we strike a balance between expressiveness and precise analysis that works for a large class of important tree-manipulating programs.", "title": "" }, { "docid": "neg:1840299_9", "text": "BACKGROUND\nAgricultural systems are amended ecosystems with a variety of properties. Modern agroecosystems have tended towards high through-flow systems, with energy supplied by fossil fuels directed out of the system (either deliberately for harvests or accidentally through side effects). 
In the coming decades, resource constraints over water, soil, biodiversity and land will affect agricultural systems. Sustainable agroecosystems are those tending to have a positive impact on natural, social and human capital, while unsustainable systems feed back to deplete these assets, leaving fewer for the future. Sustainable intensification (SI) is defined as a process or system where agricultural yields are increased without adverse environmental impact and without the conversion of additional non-agricultural land. The concept does not articulate or privilege any particular vision or method of agricultural production. Rather, it emphasizes ends rather than means, and does not pre-determine technologies, species mix or particular design components. The combination of the terms 'sustainable' and 'intensification' is an attempt to indicate that desirable outcomes around both more food and improved environmental goods and services could be achieved by a variety of means. Nonetheless, it remains controversial to some.\n\n\nSCOPE AND CONCLUSIONS\nThis review analyses recent evidence of the impacts of SI in both developing and industrialized countries, and demonstrates that both yield and natural capital dividends can occur. The review begins with analysis of the emergence of combined agricultural-environmental systems, the environmental and social outcomes of recent agricultural revolutions, and analyses the challenges for food production this century as populations grow and consumption patterns change. Emergent criticisms are highlighted, and the positive impacts of SI on food outputs and renewable capital assets detailed. It concludes with observations on policies and incentives necessary for the wider adoption of SI, and indicates how SI could both promote transitions towards greener economies as well as benefit from progress in other sectors.", "title": "" }, { "docid": "neg:1840299_10", "text": "Unlike conventional hydro and tidal barrage installations, water current turbines in open flow can generate power from flowing water with almost zero environmental impact, over a much wider range of sites than those available for conventional tidal power generation. Recent developments in current turbine design are reviewed and some potential advantages of ducted or “diffuser-augmented” current turbines are explored. These include improved safety, protection from weed growth, increased power output and reduced turbine and gearbox size for a given power output. Ducted turbines are not subject to the so-called Betz limit, which defines an upper limit of 59.3% of the incident kinetic energy that can be converted to shaft power by a single actuator disk turbine in open flow. For ducted turbines the theoretical limit depends on (i) the pressure difference that can be created between duct inlet and outlet, and (ii) the volumetric flow through the duct. These factors in turn depend on the shape of the duct and the ratio of duct area to turbine area. Previous investigations by others have found a theoretical limit for a diffuser-augmented wind turbine of about 3.3 times the Betz limit, and a model diffuseraugmented wind turbine has extracted 4.25 times the power extracted by the same turbine without a diffuser. 
In the present study, similar principles applied to a water turbine have so far achieved an augmentation factor of 3 at an early stage of the investigation.", "title": "" }, { "docid": "neg:1840299_11", "text": "A key aim of social psychology is to understand the psychological processes through which independent variables affect dependent variables in the social domain. This objective has given rise to statistical methods for mediation analysis. In mediation analysis, the significance of the relationship between the independent and dependent variables has been integral in theory testing, being used as a basis to determine (1) whether to proceed with analyses of mediation and (2) whether one or several proposed mediator(s) fully or partially accounts for an effect. Synthesizing past research and offering new arguments, we suggest that the collective evidence raises considerable concern that the focus on the significance between the independent and dependent variables, both before and after mediation tests, is unjustified and can impair theory development and testing. To expand theory involving social psychological processes, we argue that attention in mediation analysis should be shifted towards assessing the magnitude and significance of indirect effects. Understanding the psychological processes by which independent variables affect dependent variables in the social domain has long been of interest to social psychologists. Although moderation approaches can test competing psychological mechanisms (e.g., Petty, 2006; Spencer, Zanna, & Fong, 2005), mediation is typically the standard for testing theories regarding process (e.g., Baron & Kenny, 1986; James & Brett, 1984; Judd & Kenny, 1981; MacKinnon, 2008; MacKinnon, Lockwood, Hoffman, West, & Sheets, 2002; Muller, Judd, & Yzerbyt, 2005; Preacher & Hayes, 2004; Preacher, Rucker, & Hayes, 2007; Shrout & Bolger, 2002). For example, dual process models of persuasion (e.g., Petty & Cacioppo, 1986) often distinguish among competing accounts by measuring the postulated underlying process (e.g., thought favorability, thought confidence) and examining their viability as mediators (Tormala, Briñol, & Petty, 2007). Thus, deciding on appropriate requirements for mediation is vital to theory development. Supporting the high status of mediation analysis in our field, MacKinnon, Fairchild, and Fritz (2007) report that research in social psychology accounts for 34% of all mediation tests in psychology more generally. In our own analysis of journal articles published from 2005 to 2009, we found that approximately 59% of articles in the Journal of Personality and Social Psychology (JPSP) and 65% of articles in Personality and Social Psychology Bulletin (PSPB) included at least one mediation test. Consistent with the observations of MacKinnon et al., we found that the bulk of these analyses continue to follow the causal steps approach outlined by Baron and Kenny (1986). The current article examines the viability of the causal steps approach in which the significance of the relationship between an independent variable (X) and a dependent variable (Y) is tested both before and after controlling for a mediator (M) in order to examine the validity of a theory specifying mediation.
Traditionally, the X → Y relationship is tested prior to mediation to determine whether there is an effect to mediate, and it is also tested after introducing a potential mediator to determine whether that mediator fully or partially accounts for the effect. At first glance, the requirement of a significant X → Y association prior to examining mediation seems reasonable. If there is no significant X → Y relationship, how can there be any mediation of it? Furthermore, the requirement that X → Y become nonsignificant when controlling for the mediator seems sensible in order to claim ‘full mediation’. What is the point of hypothesizing or testing for additional mediators if the inclusion of one mediator renders the initial relationship indistinguishable from zero? Despite the intuitive appeal of these requirements, the present article raises serious concerns about their use.", "title": "" }, { "docid": "neg:1840299_12", "text": "A robust road markings detection algorithm is a fundamental component of intelligent vehicles' autonomous navigation in urban environment. This paper presents an algorithm for detecting road markings including zebra crossings, stop lines and lane markings to provide road information for intelligent vehicles. First, to eliminate the impact of the perspective effect, an Inverse Perspective Mapping (IPM) transformation is applied to the images grabbed by the camera; the region of interest (ROI) was extracted from IPM image by a low level processing. Then, different algorithms are adopted to extract zebra crossings, stop lines and lane markings. The experiments on a large number of street scenes in different conditions demonstrate the effectiveness of the proposed algorithm.", "title": "" }, { "docid": "neg:1840299_13", "text": "In the fall of 2013, we offered an open online Introduction to Recommender Systems through Coursera, while simultaneously offering a for-credit version of the course on-campus using the Coursera platform and a flipped classroom instruction model. As the goal of offering this course was to experiment with this type of instruction, we performed extensive evaluation including surveys of demographics, self-assessed skills, and learning intent; we also designed a knowledge-assessment tool specifically for the subject matter in this course, administering it before and after the course to measure learning, and again 5 months later to measure retention. We also tracked students through the course, including separating out students enrolled for credit from those enrolled only for the free, open course.\n Students had significant knowledge gains across all levels of prior knowledge and across all demographic categories. The main predictor of knowledge gain was effort expended in the course. Students also had significant knowledge retention after the course. Both of these results are limited to the sample of students who chose to complete our knowledge tests. Student completion of the course was hard to predict, with few factors contributing predictive power; the main predictor of completion was intent to complete. Students who chose a concepts-only track with hand exercises achieved the same level of knowledge of recommender systems concepts as those who chose a programming track and its added assignments, though the programming students gained additional programming knowledge.
Based on the limited data we were able to gather, face-to-face students performed as well as the online-only students or better; they preferred this format to traditional lecture for reasons ranging from pure convenience to the desire to watch videos at a different pace (slower for English language learners; faster for some native English speakers). This article also includes our qualitative observations, lessons learned, and future directions.", "title": "" }, { "docid": "neg:1840299_14", "text": "meaning (overall M = 5.89) and significantly higher than with any of the other three abstract meanings (overall M = 2.05, all ps < .001). Procedure. Under a cover story of studying advertising slogans, participants saw one of the 22 target brands and thought about its abstract concept in memory. They were then presented, on a single screen, with four alternative slogans (in random order) for the target brand and were asked to rank the slogans, from 1 (“best”) to 4 (“worst”), in terms of how well the slogan fits the image of the target brand. Each slogan was intended to distinctively communicate the abstract meaning associated with one of the four high-level brand value dimensions uncovered in the pilot study. After a series of filler tasks, participants indicated their attitude toward the brand on a seven-point scale (1 = “very unfavorable,” and 7 = “very favorable”). Ranking of the slogans. We conducted separate nonparametric Kruskal-Wallis tests on each country’s data to evaluate differences in the rank order for each of the four slogans among the four types of brand concepts. In all countries, the tests were significant (the United States: all χ2(3, N = 539) ≥ 145.4, all ps < .001; China: all χ2(3, N = 208) ≥ 52.8, all ps < .001; Canada: all χ2(3, N = 380) ≥ 33.3, all ps < .001; Turkey: all χ2(3, N = 380) ≥ 51.0, all ps < .001). We pooled the data from the four countries and conducted follow-up tests to evaluate pairwise differences in the rank order of each slogan among the four brand concepts, controlling for Type I error across tests using the Bonferroni approach. The results of these tests indicated that each slogan was ranked at the top in terms of favorability when it matched the brand concept (self-enhancement brand concept: M(self-enhancement slogan) = 1.77; openness brand [Figure 2: Structural Relations Among Value Dimensions from Multidimensional Scaling (Pilot: Study 1); b = benevolence, t = tradition, c = conformity, sec = security; dimensions: Self-Enhancement, Individual Concerns, Collective Concerns]", "title": "" }, { "docid": "neg:1840299_15", "text": "Review was undertaken from February 1969 to January 1998 at the State forensic science center (Forensic Science) in Adelaide, South Australia, of all cases of murder-suicide involving children <16 years of age. A total of 13 separate cases were identified involving 30 victims, all of whom were related to the perpetrators. There were 7 male and 6 female perpetrators (age range, 23-41 years; average, 31 years) consisting of 6 mothers, 6 father/husbands, and 1 uncle/son-in-law. The 30 victims consisted of 11 daughters, 11 sons, 1 niece, 1 mother-in-law, and 6 wives of the assailants. The 23 children were aged from 10 months to 15 years (average, 6.0 years). The 6 mothers murdered 9 children and no spouses, with 3 child survivors. The 6 fathers murdered 13 children and 6 wives, with 1 child survivor.
This study has demonstrated a higher percentage of female perpetrators than other studies of murder-suicide. The methods of homicide and suicide used were generally less violent among the female perpetrators compared with male perpetrators. Fathers killed not only their children but also their wives, whereas mothers murdered only their children. These results suggest differences between murder-suicides that involve children and adult-only cases, and between cases in which the mother rather than the father is the perpetrator.", "title": "" }, { "docid": "neg:1840299_16", "text": "Facebook and other social media have been hailed as delivering the promise of new, socially engaged educational experiences for students in undergraduate, self-directed, and other educational sectors. A theoretical and historical analysis of these media in the light of earlier media transformations, however, helps to situate and qualify this promise. Specifically, the analysis of dominant social media presented here questions whether social media platforms satisfy a crucial component of learning – fostering the capacity for debate and disagreement. By using the analytical frame of media theorist Raymond Williams, with its emphasis on the influence of advertising in the content and form of television, we weigh the conditions of dominant social networking sites as constraints for debate and therefore learning. Accordingly, we propose an update to Williams’ erudite work that is in keeping with our findings. Williams’ critique focuses on the structural characteristics of sequence, rhythm, and flow of television as a cultural form. Our critique proposes the terms information design, architecture, and above all algorithm, as structural characteristics that similarly apply to the related but contemporary cultural form of social networking services. Illustrating the ongoing salience of media theory and history for research in e-learning, the article updates Williams’ work while leveraging it in a critical discussion of the suitability of commercial social media for education.", "title": "" }, { "docid": "neg:1840299_17", "text": "This paper argues for the utility of back-end driven onloading to the edge as a way to address bandwidth use and latency challenges for future device-cloud interactions. Supporting such edge functions (EFs) requires solutions that can provide (i) fast and scalable EF provisioning and (ii) strong guarantees for the integrity of the EF execution and confidentiality of the state stored at the edge. In response to these goals, we (i) present a detailed design space exploration of the current technologies that can be leveraged in the design of edge function platforms (EFPs), (ii) develop a solution to address security concerns of EFs that leverages emerging hardware support for OS agnostic trusted execution environments such as Intel SGX enclaves, and (iii) propose and evaluate AirBox, a platform for fast, scalable and secure onloading of edge functions.", "title": "" }, { "docid": "neg:1840299_18", "text": "We address the problem of vision-based navigation in busy inner-city locations, using a stereo rig mounted on a mobile platform. In this scenario semantic information becomes important: rather than modelling moving objects as arbitrary obstacles, they should be categorised and tracked in order to predict their future behaviour. To this end, we combine classical geometric world mapping with object category detection and tracking. 
Object-category specific detectors serve to find instances of the most important object classes (in our case pedestrians and cars). Based on these detections, multi-object tracking recovers the objects’ trajectories, thereby making it possible to predict their future locations, and to employ dynamic path planning. The approach is evaluated on challenging, realistic video sequences recorded at busy inner-city locations.", "title": "" } ]
1840300
An Ensemble Approach for Incremental Learning in Nonstationary Environments
[ { "docid": "pos:1840300_0", "text": "OnlineEnsembleLearning", "title": "" }, { "docid": "pos:1840300_1", "text": "We introduce Learn++, an algorithm for incremental training of neural network (NN) pattern classifiers. The proposed algorithm enables supervised NN paradigms, such as the multilayer perceptron (MLP), to accommodate new data, including examples that correspond to previously unseen classes. Furthermore, the algorithm does not require access to previously used data during subsequent incremental learning sessions, yet at the same time, it does not forget previously acquired knowledge. Learn++ utilizes ensemble of classifiers by generating multiple hypotheses using training data sampled according to carefully tailored distributions. The outputs of the resulting classifiers are combined using a weighted majority voting procedure. We present simulation results on several benchmark datasets as well as a real-world classification task. Initial results indicate that the proposed algorithm works rather well in practice. A theoretical upper bound on the error of the classifiers constructed by Learn++ is also provided.", "title": "" } ]
[ { "docid": "neg:1840300_0", "text": "In this paper, we propose to infer music genre embeddings from audio datasets carrying semantic information about genres. We show that such embeddings can be used for disambiguating genre tags (identification of different labels for the same genre, tag translation from a tag system to another, inference of hierarchical taxonomies on these genre tags). These embeddings are built by training a deep convolutional neural network genre classifier with large audio datasets annotated with a flat tag system. We show empirically that they makes it possible to retrieve the original taxonomy of a tag system, spot duplicates tags and translate tags from a tag system to another.", "title": "" }, { "docid": "neg:1840300_1", "text": "The display units integrated in todays head-mounted displays (HMDs) provide only a limited field of view (FOV) to the virtual world. In order to present an undistorted view to the virtual environment (VE), the perspective projection used to render the VE has to be adjusted to the limitations caused by the HMD characteristics. In particular, the geometric field of view (GFOV), which defines the virtual aperture angle used for rendering of the 3D scene, is set up according to the display's field of view. A discrepancy between these two fields of view distorts the geometry of the VE in a way that either minifies or magnifies the imagery displayed to the user. Discrepancies between the geometric and physical FOV causes the imagery to be minified or magnified. This distortion has the potential to negatively or positively affect a user's perception of the virtual space, sense of presence, and performance on visual search tasks.\n In this paper we analyze if a user is consciously aware of perspective distortions of the VE displayed in the HMD. We introduce a psychophysical calibration method to determine the HMD's actual field of view, which may vary from the nominal values specified by the manufacturer. Furthermore, we conducted an experiment to identify perspective projections for HMDs which are identified as natural by subjects---even if these perspectives deviate from the perspectives that are inherently defined by the display's field of view. We found that subjects evaluate a field of view as natural when it is larger than the actual field of view of the HMD---in some cases up to 50%.", "title": "" }, { "docid": "neg:1840300_2", "text": "In announcing the news that “post-truth” is the Oxford Dictionaries’ 2016 word of the year, the Chicago Tribune declared that “Truth is dead. Facts are passé.”1 Politicians have shoveled this mantra our direction for centuries, but during this past presidential election, they really rubbed our collective faces in it. To be fair, the word “post” isn’t to be taken to mean “after,” as in its normal sense, but rather as “irrelevant.” Careful observers of the recent US political campaigns came to appreciate this difference. Candidates spewed streams of rhetorical effluent that didn’t even pretend to pass the most perfunctory fact-checking smell test. As the Tribune noted, far too many voters either didn’t notice or didn’t care. That said, recognizing an unwelcome phenomenon isn’t the same as legitimizing it, and now the Oxford Dictionaries group has gone too far toward the latter. They say “post-truth” captures the “ethos, mood or preoccupations of [2016] to have lasting potential as a word of cultural significance.”1 I emphatically disagree. 
I don’t know what post-truth did capture, but it didn’t capture that. We need a phrase for the 2016 mood that’s a better fit. I propose the term “gaudy facts,” for it emphasizes the garish and tawdry nature of the recent political dialog. Further, “gaudy facts” has the advantage of avoiding the word truth altogether, since there’s precious little of that in political discourse anyway. I think our new term best captures the ethos and mood of today’s political delusionists. There’s no ground truth data in sight, all claims are imaginary and unsupported without pretense of facts, and distortion is reality. This seems to fit our present experience well. The only tangible remnant of reality that isn’t subsumed under our new term is the speakers’ underlying narcissism, but at least we’re closer than we were with “post-truth.” We need to forever banish the association of the word “truth” with “politics”—these two terms just don’t play well with each other. Lies, Damn Lies, and Fake News", "title": "" }, { "docid": "neg:1840300_3", "text": "Sensing systems such as biomedical implants, infrastructure monitoring systems, and military surveillance units are constrained to consume only picowatts to nanowatts in standby and active mode, respectively. This tight power budget places ultra-low power demands on all building blocks in the systems. This work proposes a voltage reference for use in such ultra-low power systems, referred to as the 2T voltage reference, which has been demonstrated in silicon across three CMOS technologies. Prototype chips in 0.13 μm show a temperature coefficient of 16.9 ppm/°C (best) and line sensitivity of 0.033%/V, while consuming 2.22 pW in 1350 μm2. The lowest functional Vdd 0.5 V. The proposed design improves energy efficiency by 2 to 3 orders of magnitude while exhibiting better line sensitivity and temperature coefficient in less area, compared to other nanowatt voltage references. For process spread analysis, 49 dies are measured across two runs, showing the design exhibits comparable spreads in TC and output voltage to existing voltage references in the literature. Digital trimming is demonstrated, and assisted one temperature point digital trimming, guided by initial samples with two temperature point trimming, enables TC <; 50 ppm/°C and ±0.35% output precision across all 25 dies. Ease of technology portability is demonstrated with silicon measurement results in 65 nm, 0.13 μm, and 0.18 μm CMOS technologies.", "title": "" }, { "docid": "neg:1840300_4", "text": "This paper presents a design of highly effective triple band microstrip antenna for wireless communication applications. The triple band design is a metamaterial-based design for WLAN and WiMAX (2.4/3.5/5.6 GHz) applications. The triple band response is obtained by etching two circular and one rectangular split ring resonator (SRR) unit cells on the ground plane of a conventional patch operating at 3.56 GHz. The circular cells are introduced to resonate at 5.3 GHz for the upper WiMAX band, while the rectangular cell is designed to resonate at 2.45 GHz for the lower WLAN band. Furthermore, a novel complementary H-shaped unit cell oriented above the triple band antenna is proposed. The proposed H-shaped is being used as a lens to significantly increase the antenna gain. 
To investigate the left-handed behavior of the proposed H-shaped, extensive parametric study for the placement of each unit cell including the metamaterial lens, which is the main parameter affecting the antenna performance, is presented and discussed comprehensively. Good consistency between the measured and simulated results is achieved. The proposed antenna meets the requirements of WiMAX and WLAN standards with high peak realized gain.", "title": "" }, { "docid": "neg:1840300_5", "text": "Foot-operated computer interfaces have been studied since the inception of human--computer interaction. Thanks to the miniaturisation and decreasing cost of sensing technology, there is an increasing interest exploring this alternative input modality, but no comprehensive overview of its research landscape. In this survey, we review the literature on interfaces operated by the lower limbs. We investigate the characteristics of users and how they affect the design of such interfaces. Next, we describe and analyse foot-based research prototypes and commercial systems in how they capture input and provide feedback. We then analyse the interactions between users and systems from the perspective of the actions performed in these interactions. Finally, we discuss our findings and use them to identify open questions and directions for future research.", "title": "" }, { "docid": "neg:1840300_6", "text": "Peer-to-peer (P2P) lending or crowdlending, is a recent innovation allows a group of individual or institutional lenders to lend funds to individuals or businesses in return for interest payment on top of capital repayments. The rapid growth of P2P lending marketplaces has heightened the need to develop a support system to help lenders make sound lending decisions. But realizing such system is challenging in the absence of formal credit data used by the banking sector. In this paper, we attempt to explore the possible connections between user credit risk and how users behave in the lending sites. We present the first analysis of user detailed clickstream data from a large P2P lending provider. Our analysis reveals that the users’ sequences of repayment histories and financial activities in the lending site, have significant predictive value for their future loan repayments. In the light of this, we propose a deep architecture named DeepCredit, to automatically acquire the knowledge of credit risk from the sequences of activities that users conduct on the site. Experiments on our large-scale real-world dataset show that our model generates a high accuracy in predicting both loan delinquency and default, and significantly outperforms a number of baselines and competitive alternatives.", "title": "" }, { "docid": "neg:1840300_7", "text": "Mobile sensing systems employ various sensors in smartphones to extract human-related information. As the demand for sensing systems increases, a more effective mechanism is required to sense information about human life. In this paper, we present a systematic study on the feasibility and gaining properties of a crowdsensing system that primarily concerns sensing WiFi packets in the air. We propose that this method is effective for estimating urban mobility by using only a small number of participants. During a seven-week deployment, we collected smartphone sensor data, including approximately four million WiFi packets from more than 130,000 unique devices in a city. 
Our analysis of this dataset examines core issues in urban mobility monitoring, including feasibility, spatio-temporal coverage, scalability, and threats to privacy. Collectively, our findings provide valuable insights to guide the development of new mobile sensing systems for urban life monitoring.", "title": "" }, { "docid": "neg:1840300_8", "text": "This paper presents a novel ac-dc power factor correction (PFC) power conversion architecture for a single-phase grid interface. The proposed architecture has significant advantages for achieving high efficiency, good power factor, and converter miniaturization, especially in low-to-medium power applications. The architecture enables twice-line-frequency energy to be buffered at high voltage with a large voltage swing, enabling reduction in the energy buffer capacitor size and the elimination of electrolytic capacitors. While this architecture can be beneficial with a variety of converter topologies, it is especially suited for the system miniaturization by enabling designs that operate at high frequency (HF, 3-30 MHz). Moreover, we introduce circuit implementations that provide efficient operation in this range. The proposed approach is demonstrated for an LED driver converter operating at a (variable) HF switching frequency (3-10 MHz) from 120 Vac, and supplying a 35 Vdc output at up to 30 W. The prototype converter achieves high efficiency (92%) and power factor (0.89), and maintains a good performance over a wide load range. Owing to the architecture and HF operation, the prototype achieves a high “box” power density of 50 W/in3 (“displacement” power density of 130 W/in3), with miniaturized inductors, ceramic energy buffer capacitors, and a small-volume EMI filter.", "title": "" }, { "docid": "neg:1840300_9", "text": "The ability to exert an appropriate amount of force on brain tissue during surgery is an important component of instrument handling. It allows surgeons to achieve the surgical objective effectively while maintaining a safe level of force in tool-tissue interaction. At the present time, this knowledge, and hence skill, is acquired through experience and is qualitatively conveyed from an expert surgeon to trainees. These forces can be assessed quantitatively by retrofitting surgical tools with sensors, thus providing a mechanism for improved performance and safety of surgery, and enhanced surgical training. This paper presents the development of a force-sensing bipolar forceps, with installation of a sensory system, that is able to measure and record interaction forces between the forceps tips and brain tissue in real time. This research is an extension of a previous research where a bipolar forceps was instrumented to measure dissection and coagulation forces applied in a single direction. Here, a planar forceps with two sets of strain gauges in two orthogonal directions was developed to enable measuring the forces with a higher accuracy. Implementation of two strain gauges allowed compensation of strain values due to deformations of the forceps in other directions (axial stiffening) and provided more accurate forces during microsurgery. An experienced neurosurgeon performed five neurosurgical tasks using the axial setup and repeated the same tasks using the planar device. The experiments were performed on cadaveric brains. Both setups were shown to be capable of measuring real-time interaction forces. 
Comparing the two setups, under the same experimental condition, indicated that the peak and mean forces quantified by planar forceps were at least 7% and 10% less than those of axial tool, respectively; therefore, utilizing readings of all strain gauges in planar forceps provides more accurate values of both peak and mean forces than axial forceps. Cross-correlation analysis between the two force signals obtained, one from each cadaveric practice, showed a high similarity between the two force signals.", "title": "" }, { "docid": "neg:1840300_10", "text": "Creativity research has traditionally focused on human creativity, and even more specifically, on the psychology of individual creative people. In contrast, computational creativity research involves the development and evaluation of creativity in a computational system. As we study the effect of scaling up from the creativity of a computational system and individual people to large numbers of diverse computational agents and people, we have a new perspective: creativity can ascribed to a computational agent, an individual person, collectives of people and agents and/or their interaction. By asking “Who is being creative?” this paper examines the source of creativity in computational and collective creativity. A framework based on ideation and interaction provides a way of characterizing existing research in computational and collective creativity and identifying directions for future research. Human and Computational Creativity Creativity is a topic of philosophical and scientific study considering the scenarios and human characteristics that facilitate creativity as well as the properties of computational systems that exhibit creative behavior. “The four Ps of creativity”, as introduced in Rhodes (1987) and more recently summarized by Runco (2011), decompose the complexity of creativity into separate but related influences: • Person: characteristics of the individual, • Product: an outcome focus on ideas, • Press: the environmental and contextual factors, • Process: cognitive process and thinking techniques. While the four Ps are presented in the context of the psychology of human creativity, they can be modified for computational creativity if process includes a computational process. The study of human creativity has a focus on the characteristics and cognitive behavior of creative people and the environments in which creativity is facilitated. The study of computational creativity, while inspired by concepts of human creativity, is often expressed in the formal language of search spaces and algorithms. Why do we ask who is being creative? Firstly, there is an increasing interest in understanding computational systems that can formalize or model creative processes and therefore exhibit creative behaviors or acts. Yet there are still skeptics that claim computers aren’t creative, the computer is just following instructions. Second and in contrast, there is increasing interest in computational systems that encourage and enhance human creativity that make no claims about whether the computer is being or could be creative. Finally, as we develop more capable socially intelligent computational systems and systems that enable collective intelligence among humans and computers, the boundary between human creativity and computer creativity blurs. 
As the boundary blurs, we need to develop ways of recognizing creativity that make no assumptions about whether the creative entity is a person, a computer, a potentially large group of people, or the collective intelligence of human and computational entities. This paper presents a framework that characterizes the source of creativity from two perspectives, ideation and interaction, as a guide to current and future research in computational and collective creativity. Creativity: Process and Product Understanding the nature of creativity as process and product is critical in computational creativity if we want to avoid any bias that only humans are creative and computers are not. While process and product in creativity are tightly coupled in practice, a distinction between the two provides two ways of recognizing computational creativity by describing the characteristics of a creative process and separately, the characteristics of a creative product. Studying and describing the processes that generate creative products focuses on the cognitive behavior of a creative person or the properties of a computational system, and describing ways of recognizing a creative product focuses on the characteristics of the result of a creative process. When describing creative processes there is an assumption that there is a space of possibilities. Boden (2003) refers to this as conceptual spaces and describes these spaces as structured styles of thought. In computational systems such a space is called a state space. How such spaces are changed, or the relationship between the set of known products, the space of possibilities, and the potentially creative product, is the basis for describing processes that can generate potentially creative artifacts. There are many accounts of the processes for generating creative products. Two sources are described here: Boden (2003) from the philosophical and artificial intelligence perspective and Gero (2000) from the design science perspective. Boden (2003) describes three ways in which creative products can be generated: combination, exploration, and transformation: each one describes the way in which the conceptual space of known products provides a basis for generating a creative product and how the conceptual space changes as a result of the creative artifact. Combination brings together two or more concepts in ways that have not occurred in existing products. Exploration finds concepts in parts of the space that have not been considered in existing products. Transformation modifies concepts in the space to generate products that change the boundaries of the space. Gero (2000) describes computational processes for creative design as combination, transformation, analogy, emergence, and first principles. Combination and transformation are similar to Boden’s processes. Analogy transfers concepts from a source product that may be in a different conceptual space to a target product to generate a novel product in the target’s space. Emergence is a process that finds new underlying structures in a concept that give rise to a new product, effectively a re-representation process. First principles as a process generates new products without relying on concepts as defined in existing products. While these processes provide insight into the nature of creativity and provide a basis for computational creativity, they have little to say about how we recognize a creative product.
As we move towards computational systems that enhance or contribute to human creativity, the articulation of process models for generating creative artifacts does not provide an evaluation of the product. Computational systems that generate creative products need evaluation criteria that are independent of the process by which the product was generated. There are also numerous approaches to defining characteristics of creative products as the basis for evaluating or assessing creativity. Boden (2003) claims that novelty and value are the essential criteria and that other aspects, such as surprise, are kinds of novelty or value. Wiggins (2006) often uses value to indicate all valuable aspects of a creative products, yet provides definitions for novelty and value as different features that are relevant to creativity. Oman and Tumer (2009) combine novelty and quality to evaluate individual ideas in engineering design as a relative measure of creativity. Shah, Smith, and Vargas-Hernandez (2003) associate creative design with ideation and develop metrics for novelty, variety, quality, and quantity of ideas. Wiggins (2006) argues that surprise is a property of the receiver of a creative artifact, that is, it is an emotional response. Cropley and Cropley (2005) propose four broad properties of products that can be used to describe the level and kind of creativity they possess: effectiveness, novelty, elegance, genesis. Besemer and O'Quin (1987) describe a Creative Product Semantic Scale which defines the creativity of products in three dimensions: novelty (the product is original, surprising and germinal), resolution (the product is valuable, logical, useful, and understandable), and elaboration and synthesis (the product is organic, elegant, complex, and well-crafted). Horn and Salvendy (2006) after doing an analysis of many properties of creative products, report on consumer perception of creativity in three critical perceptions: affect (our emotional response to the product), importance, and novelty. Goldenberg and Mazursky (2002) report on research that has found the observable characteristics of creativity in products to include \"original, of value, novel, interesting, elegant, unique, surprising.\" Amabile (1982) says it most clearly when she summarizes the social psychology literature on the assessment of creativity: While most definitions of creativity refer to novelty, appropriateness, and surprise, current creativity tests or assessment techniques are not closely linked to these criteria. She further argues that “There is no clear, explicit statement of the criteria that conceptually underlie the assessment procedures.” In response to an inability to establish and define criteria for evaluating creativity that is acceptable to all domains, Amabile (1982, 1996) introduced a Consensual Assessment Technique (CAT) in which creativity is assessed by a group of judges that are knowledgeable of the field. Since then, several scales for assisting human evaluators have been developed to guide human evaluators, for example, Besemer and O'Quin's (1999) Creative Product Semantic Scale, Reis and Renzulli's (1991) Student Product Assessment Form, and Cropley et al’s (2011) Creative Solution Diagnosis Scale. Maher (2010) presents an AI approach to evaluating creativity of a product by measuring novelty, value and surprise that provides a formal model for evaluating creative products. 
Novelty is a measure of how different the product is from existing products and is measured as a distance from clusters of other products in a conceptual space, characterizing the artifact as similar but different. Value is a measure of how the creative product co", "title": "" }, { "docid": "neg:1840300_11", "text": "One very simple interpretation of calibration is to adjust a set of parameters associated with a computational science and engineering code so that the model agreement is maximized with respect to a set of experimental data. One very simple interpretation of validation is to quantify our belief in the predictive capability of a computational code through comparison with a set of experimental data. Uncertainty in both the data and the code are important and must be mathematically understood to correctly perform both calibration and validation. Sensitivity analysis, being an important methodology in uncertainty analysis, is thus important to both calibration and validation. In this paper, we intend to clarify the language just used and express some opinions on the associated issues. We will endeavor to identify some technical challenges that must be resolved for successful validation of a predictive modeling capability. One of these challenges is a formal description of a ‘‘model discrepancy’’ term. Another challenge revolves around the general adaptation of abstract learning theory as a formalism that potentially encompasses both calibration and validation in the face of model uncertainty. r 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "neg:1840300_12", "text": "Past research has generated mixed support among social scientists for the utility of social norms in accounting for human behavior. We argue that norms do have a substantial impact on human action; however, the impact can only be properly recognized when researchers (a) separate 2 types of norms that at times act antagonistically in a situation—injunctive norms (what most others approve or disapprove) and descriptive norms (what most others do)—and (b) focus Ss' attention principally on the type of norm being studied. In 5 natural settings, focusing Ss on either the descriptive norms or the injunctive norms regarding littering caused the Ss* littering decisions to change only in accord with the dictates of the then more salient type of norm.", "title": "" }, { "docid": "neg:1840300_13", "text": "Click-through rate (CTR) prediction and relevance ranking are two fundamental problems in web advertising. In this study, we address the problem of modeling the relationship between CTR and relevance for sponsored search. We used normalized relevance scores comparable across all queries to represent relevance when modeling with CTR, instead of directly using human judgment labels or relevance scores valid only within same query. We classified clicks by identifying their relevance quality using dwell time and session information, and compared all clicks versus selective clicks effects when modeling relevance.\n Our results showed that the cleaned click signal outperforms raw click signal and others we explored, in terms of relevance score fitting. The cleaned clicks include clicks with dwell time greater than 5 seconds and last clicks in session. Besides traditional thoughts that there is no linear relation between click and relevance, we showed that the cleaned click based CTR can be fitted well with the normalized relevance scores using a quadratic regression model. 
This relevance-click model could help to train ranking models using processed click feedback to complement expensive human editorial relevance labels, or better leverage relevance signals in CTR prediction.", "title": "" }, { "docid": "neg:1840300_14", "text": "The use of a general-purpose code, COLSYS, is described. The code is capable of solving mixed-order systems of boundary-value problems in ordinary differential equations. The method of spline collocation at Gaussian points is implemented using a B-spline basis. Approximate solutions are computed on a sequence of automatically selected meshes until a user-specified set of tolerances is satisfied. A damped Newton's method is used for the nonlinear iteration. The code has been found to be particularly effective for difficult problems. It is intended that a user be able to use COLSYS easily after reading its algorithm description. The use of the code is then illustrated by examples demonstrating its effectiveness and capabilities.", "title": "" }, { "docid": "neg:1840300_15", "text": "This paper describes a new prototype system for detecting the demeanor of patients in emergency situations using the Intel RealSense camera system [1]. It describes how machine learning, a support vector machine (SVM) and the RealSense facial detection system can be used to track patient demeanour for pain monitoring. In a lab setting, the application has been trained to detect four different intensities of pain and provide demeanour information about the patient's eyes, mouth, and agitation state. Its utility as a basis for evaluating the condition of patients in situations using video, machine learning and 5G technology is discussed.", "title": "" }, { "docid": "neg:1840300_16", "text": "Deep neural networks are now rivaling human accuracy in several pattern recognition problems. Compared to traditional classifiers, where features are handcrafted, neural networks learn increasingly complex features directly from the data. Instead of handcrafting the features, it is now the network architecture that is manually engineered. The network architecture parameters such as the number of layers or the number of filters per layer and their interconnections are essential for good performance. Even though basic design guidelines exist, designing a neural network is an iterative trial-and-error process that takes days or even weeks to perform due to the large datasets used for training. In this paper, we present DeepEyes, a Progressive Visual Analytics system that supports the design of neural networks during training. We present novel visualizations, supporting the identification of layers that learned a stable set of patterns and, therefore, are of interest for a detailed analysis. The system facilitates the identification of problems, such as superfluous filters or layers, and information that is not being captured by the network. We demonstrate the effectiveness of our system through multiple use cases, showing how a trained network can be compressed, reshaped and adapted to different problems.", "title": "" }, { "docid": "neg:1840300_17", "text": "A nonlinear optimal controller with a fuzzy gain scheduler has been designed and applied to a Line-Of-Sight (LOS) stabilization system. Use of Linear Quadratic Regulator (LQR) theory is an optimal and simple manner of solving many control engineering problems. However, this method cannot be utilized directly for multigimbal LOS systems since they are nonlinear in nature. 
To adapt LQ controllers to nonlinear systems at least a linearization of the model plant is required. When the linearized model is only valid within the vicinity of an operating point a gain scheduler is required. Therefore, a Takagi-Sugeno Fuzzy Inference System gain scheduler has been implemented, which keeps the asymptotic stability performance provided by the optimal feedback gain approach. The simulation results illustrate that the proposed controller is capable of overcoming disturbances and maintaining a satisfactory tracking performance. Keywords—Fuzzy Gain-Scheduling, Gimbal, Line-Of-Sight Stabilization, LQR, Optimal Control", "title": "" }, { "docid": "neg:1840300_18", "text": "We present Mime, a compact, low-power 3D sensor for unencumbered free-form, single-handed gestural interaction with head-mounted displays (HMDs). Mime introduces a real-time signal processing framework that combines a novel three-pixel time-of-flight (TOF) module with a standard RGB camera. The TOF module achieves accurate 3D hand localization and tracking, and it thus enables motion-controlled gestures. The joint processing of 3D information with RGB image data enables finer, shape-based gestural interaction.\n Our Mime hardware prototype achieves fast and precise 3D gestural control. Compared with state-of-the-art 3D sensors like TOF cameras, the Microsoft Kinect and the Leap Motion Controller, Mime offers several key advantages for mobile applications and HMD use cases: very small size, daylight insensitivity, and low power consumption. Mime is built using standard, low-cost optoelectronic components and promises to be an inexpensive technology that can either be a peripheral component or be embedded within the HMD unit. We demonstrate the utility of the Mime sensor for HMD interaction with a variety of application scenarios, including 3D spatial input using close-range gestures, gaming, on-the-move interaction, and operation in cluttered environments and in broad daylight conditions.", "title": "" }, { "docid": "neg:1840300_19", "text": "On average, resource-abundant countries have experienced lower growth over the last four decades than their resource-poor counterparts. But the most interesting aspect of the paradox of plenty is not the average effect of natural resources, but its variation. For every Nigeria or Venezuela there is a Norway or a Botswana. Why do natural resources induce prosperity in some countries but stagnation in others? This paper gives an overview of the dimensions along which resource-abundant winners and losers differ. In light of this, it then discusses different theory models of the resource curse, with a particular emphasis on recent developments in political economy.", "title": "" } ]
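[Editorial aside, not part of the dataset records above or below: the line-of-sight stabilization passage that ends at the start of the preceding line (docid neg:1840300_17) describes interpolating locally designed LQR gains with a Takagi-Sugeno fuzzy scheduler. The sketch below illustrates only that blending step; the gain matrices, membership functions, and operating points are invented placeholders, not values from the cited work.]

```python
import numpy as np

# Two feedback gains, assumed to come from LQR designs at two operating points
# (e.g., two gimbal elevation angles). The numbers are placeholders.
K_low  = np.array([[2.0, 0.5]])   # gain assumed valid near theta = 0 rad
K_high = np.array([[3.5, 0.9]])   # gain assumed valid near theta = 1 rad

def memberships(theta, lo=0.0, hi=1.0):
    """Triangular Takagi-Sugeno memberships over the scheduling variable."""
    w_hi = np.clip((theta - lo) / (hi - lo), 0.0, 1.0)
    return 1.0 - w_hi, w_hi

def scheduled_control(x, theta):
    """Blend the local LQR laws according to the current operating point."""
    w_lo, w_hi = memberships(theta)
    K = w_lo * K_low + w_hi * K_high   # weighted sum of local gains
    return -(K @ x)                    # u = -K(theta) x

x = np.array([0.1, -0.05])             # e.g., pointing error and its rate
print(scheduled_control(x, theta=0.25))
```

In a real design each local gain would be obtained by solving the LQR problem for the plant linearized at the corresponding operating point; the scheduler only interpolates between those solutions.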
1840301
Energy Saving Additive Neural Network
[ { "docid": "pos:1840301_0", "text": "We present a hybrid analog/digital very large scale integration (VLSI) implementation of a spiking neural network with programmable synaptic weights. The synaptic weight values are stored in an asynchronous Static Random Access Memory (SRAM) module, which is interfaced to a fast current-mode event-driven DAC for producing synaptic currents with the appropriate amplitude values. These currents are further integrated by current-mode integrator synapses to produce biophysically realistic temporal dynamics. The synapse output currents are then integrated by compact and efficient integrate and fire silicon neuron circuits with spike-frequency adaptation and adjustable refractory period and spike-reset voltage settings. The fabricated chip comprises a total of 32 × 32 SRAM cells, 4 × 32 synapse circuits and 32 × 1 silicon neurons. It acts as a transceiver, receiving asynchronous events in input, performing neural computation with hybrid analog/digital circuits on the input spikes, and eventually producing digital asynchronous events in output. Input, output, and synaptic weight values are transmitted to/from the chip using a common communication protocol based on the Address Event Representation (AER). Using this representation it is possible to interface the device to a workstation or a micro-controller and explore the effect of different types of Spike-Timing Dependent Plasticity (STDP) learning algorithms for updating the synaptic weights values in the SRAM module. We present experimental results demonstrating the correct operation of all the circuits present on the chip.", "title": "" } ]
[ { "docid": "neg:1840301_0", "text": "A low-voltage low-power CMOS operational transconductance amplifier (OTA) with near rail-to-rail output swing is presented in this brief. The proposed circuit is based on the current-mirror OTA topology. In addition, several circuit techniques are adopted to enhance the voltage gain. Simulated from a 0.8-V supply voltage, the proposed OTA achieves a 62-dB dc gain and a gain–bandwidth product of 160 MHz while driving a 2-pF load. The OTA is designed in a 0.18m CMOS process. The power consumption is 0.25 mW including the common-mode feedback circuit.", "title": "" }, { "docid": "neg:1840301_1", "text": "Product service systems (PSS) can be understood as an innovation / business strategy that includes a set of products and services that are realized by an actor network. More recently, PSS that comprise System of Systems (SoS) have been of increasing interest, notably in the transportation (autonomous vehicle infrastructures, multi-modal transportation) and energy sector (smart grids). Architecting such PSS-SoS goes beyond classic SoS engineering, as they are often driven by new technology, without an a priori client and actor network, and thus, a much larger number of potential architectures. However, it seems that neither the existing PSS nor SoS literature provides solutions to how to architect such PSS. This paper presents a methodology for architecting PSS-SoS that are driven by technological innovation. The objective is to design PSS-SoS architectures together with their value proposition and business model from an initial technology impact assessment. For this purpose, we adapt approaches from the strategic management, business modeling, PSS and SoS architecting literature. We illustrate the methodology by applying it to the case of an automobile PSS.", "title": "" }, { "docid": "neg:1840301_2", "text": "3 Computation of the shearlet transform 13 3.1 Finite discrete shearlets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 3.2 A discrete shearlet frame . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 3.3 Inversion of the shearlet transform . . . . . . . . . . . . . . . . . . . . . . . . . 20 3.4 Smooth shearlets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 3.5 Implementation details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 3.5.1 Indexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 3.5.2 Computation of spectra . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 3.6 Short documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 3.7 Download & Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 3.8 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 3.9 Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32", "title": "" }, { "docid": "neg:1840301_3", "text": "In this paper we investigate the challenging problem of cursive text recognition in natural scene images. In particular, we have focused on isolated Urdu character recognition in natural scenes that could not be handled by tradition Optical Character Recognition (OCR) techniques developed for Arabic and Urdu scanned documents. We also present a dataset of Urdu characters segmented from images of signboards, street scenes, shop scenes and advertisement banners containing Urdu text. 
A variety of deep learning techniques have been proposed by researchers for natural scene text detection and recognition. In this work, a Convolutional Neural Network (CNN) is applied as a classifier, as CNN approaches have been reported to provide high accuracy for natural scene text detection and recognition. A dataset of manually segmented characters was developed and deep learning based data augmentation techniques were applied to further increase the size of the dataset. The training is formulated using filter sizes of 3x3, 5x5 and mixed 3x3 and 5x5 with a stride value of 1 and 2. The CNN model is trained with various learning rates and state-of-the-art results are achieved.", "title": "" }, { "docid": "neg:1840301_4", "text": "We study compression techniques for parallel in-memory graph algorithms, and show that we can achieve reduced space usage while obtaining competitive or improved performance compared to running the algorithms on uncompressed graphs. We integrate the compression techniques into Ligra, a recent shared-memory graph processing system. This system, which we call Ligra+, is able to represent graphs using about half of the space for the uncompressed graphs on average. Furthermore, Ligra+ is slightly faster than Ligra on average on a 40-core machine with hyper-threading. Our experimental study shows that Ligra+ is able to process graphs using less memory, while performing as well as or faster than Ligra.", "title": "" }, { "docid": "neg:1840301_5", "text": "Hundreds of hours of videos are uploaded every minute on YouTube and other video sharing sites: some will be viewed by millions of people and other will go unnoticed by all but the uploader. In this paper we propose to use visual sentiment and content features to predict the popularity of web videos. The proposed approach outperforms current state-of-the-art methods on two publicly available datasets.", "title": "" }, { "docid": "neg:1840301_6", "text": "This paper proposes a computationally efficient approach to detecting objects natively in 3D point clouds using convolutional neural networks (CNNs). In particular, this is achieved by leveraging a feature-centric voting scheme to implement novel convolutional layers which explicitly exploit the sparsity encountered in the input. To this end, we examine the trade-off between accuracy and speed for different architectures and additionally propose to use an L1 penalty on the filter activations to further encourage sparsity in the intermediate representations. To the best of our knowledge, this is the first work to propose sparse convolutional layers and L1 regularisation for efficient large-scale processing of 3D data. We demonstrate the efficacy of our approach on the KITTI object detection benchmark and show that VoteSDeep models with as few as three layers outperform the previous state of the art in both laser and laser-vision based approaches by margins of up to 40% while remaining highly competitive in terms of processing time.", "title": "" }, { "docid": "neg:1840301_7", "text": "Nowadays, different protocols coexist in Internet that provides services to users. Unfortunately, control decisions and distributed management make it hard to control networks. These problems result in an inefficient and unpredictable network behaviour. Software Defined Networks (SDN) is a new concept of network architecture. It intends to be more flexible and to simplify the management in networks with respect to traditional architectures. 
Each of these aspects are possible because of the separation of control plane (controller) and data plane (switches) in network devices. OpenFlow is the most common protocol for SDN networks that provides the communication between control and data planes. Moreover, the advantage of decoupling control and data planes enables a quick evolution of protocols and also its deployment without replacing data plane switches. In this survey, we review the SDN technology and the OpenFlow protocol and their related works. Specifically, we describe some technologies as Wireless Sensor Networks and Wireless Cellular Networks and how SDN can be included within them in order to solve their challenges. We classify different solutions for each technology attending to the problem that is being fixed.", "title": "" }, { "docid": "neg:1840301_8", "text": "In this paper, we present an attribute graph grammar for image parsing on scenes with man-made objects, such as buildings, hallways, kitchens, and living moms. We choose one class of primitives - 3D planar rectangles projected on images and six graph grammar production rules. Each production rule not only expands a node into its components, but also includes a number of equations that constrain the attributes of a parent node and those of its children. Thus our graph grammar is context sensitive. The grammar rules are used recursively to produce a large number of objects and patterns in images and thus the whole graph grammar is a type of generative model. The inference algorithm integrates bottom-up rectangle detection which activates top-down prediction using the grammar rules. The final results are validated in a Bayesian framework. The output of the inference is a hierarchical parsing graph with objects, surfaces, rectangles, and their spatial relations. In the inference, the acceptance of a grammar rule means recognition of an object, and actions are taken to pass the attributes between a node and its parent through the constraint equations associated with this production rule. When an attribute is passed from a child node to a parent node, it is called bottom-up, and the opposite is called top-down", "title": "" }, { "docid": "neg:1840301_9", "text": "Both high switching frequency and high efficiency are critical in reducing power adapter size. The active clamp flyback (ACF) topology allows zero voltage soft switching (ZVS) under all line and load conditions, eliminates leakage inductance and snubber losses, and enables high frequency and high power density power conversion. Traditional ACF ZVS operation relies on the resonance between leakage inductance and a small primary-side clamping capacitor, which leads to increased rms current and high conduction loss. This also causes oscillatory output rectifier current and impedes the implementation of synchronous rectification. This paper proposes a secondary-side resonance scheme to shape the primary current waveform in a way that significantly improves synchronous rectifier operation and reduces primary rms current. The concept is verified with a ${\\mathbf{25}}\\hbox{--}{\\text{W/in}}^{3}$ high-density 45-W adapter prototype using a monolithic gallium nitride power IC. 
Over 93% full-load efficiency was demonstrated at the worst case 90-V ac input and maximum full-load efficiency was 94.5%.", "title": "" }, { "docid": "neg:1840301_10", "text": "Usually benefits for transportation investments are analysed within a framework of cost-benefit analysis or its related techniques such as financial analysis, cost-effectiveness analysis, life-cycle costing, economic impact analysis, and others. While these tools are valid techniques in general, their application to intermodal transportation would underestimate the overall economic impact by missing important aspects of productivity enhancement. Intermodal transportation is an example of the so-called general purpose technologies (GPTs) that are characterized by statistically significant spillover effects. Diffusion, secondary innovations, and increased demand for specific human capital are basic features of GPTs. Eventually these features affect major macroeconomic variables, especially productivity. Recent economic literature claims that in order to study GPTs, micro and macro evidence should be combined to establish a better understanding of the connecting mechanisms from the micro level to the overall performance of an economy or the macro level. This study analyses these issues with respect to intermodal transportation. The goal is to understand the basic micro and macro mechanisms behind intermodal transportation in order to further develop a rigorous framework for evaluation of benefits from intermodal transportation. In doing so, lessons from computer simulation of the basic features of intermodal transportation are discussed and conclusions are made regarding an agenda for work in the field. 1 Dr. Yuri V. Yevdokimov, Assistant Professor of Economics and Civil Engineering, University of New Brunswick, Canada, Tel. (506) 447-3221, Fax (506) 453-4514, E-mail: yuri@unb.ca Introduction Intermodal transportation can be thought of as a process for transporting freight and passengers by means of a system of interconnected networks, involving various combinations of modes of transportation, in which all of the components are seamlessly linked and efficiently combined. Intermodal transportation is rapidly gaining acceptance as an integral component of the systems approach of conducting business in an increasingly competitive and interdependent global economy. For example, the United States Code with respect to transportation states: AIt is the policy of the United States Government to develop a National Intermodal Transportation System that is economically efficient and environmentally sound, provides the foundation for the United States to compete in the global economy and will move individuals and property in an energy efficient way. The National Intermodal Transportation System shall consist of all forms of transportation in a unified, interconnected manner, including the transportation systems of the future, to reduce energy consumption and air pollution while promoting economic development and supporting the United States= pre-eminent position in international commerce.@ (49 USC, Ch. 55, Sec. 5501, 1998) David Collenette (1997), the Transport Minister of Canada, noted: AWith population growth came development, and the relative advantages and disadvantages of the different modes changed as the transportation system became more advanced.... 
Intermodalism today is about safe, efficient transportation by the most appropriate combination of modes.@ (The Summit on North American Intermodal Transportation, 1997) These statements define intermodal transportation as a macroeconomic concept, because an effective transportation system is a vital factor in assuring the efficiency of an economic system as a whole. Moreover, intermodal transportation is an important socio-economic phenomenon which implies that the benefits of intermodal transportation have to be evaluated at the macroeconomic level, or at least at the regional level, involving all elements of the economic system that gain from having a more efficient transportation network in place. Defining Economic Benefits of Intermodal Transportation Traditionally, the benefits of a transportation investment have been primarily evaluated through reduced travel time and reduced vehicle maintenance and operation costs. However, according to Weisbrod and Treyz (1998), such methods underestimate the total benefits of transportation investment by Amissing other important aspects of productivity enhancement.@ It is so because transportation does not have an intrinsic purpose in itself and is rather intended to enable other economic activities such as production, consumption, leisure, and dissemination of knowledge to take place. Hence, in order to measure total economic benefits of investing in intermodal transportation, it is necessary to understand their basic relationships with different economic activities. Eventually, improvements in transportation reduce transportation costs. The immediate benefit of the reduction is the fall in total cost of production in an economic system under study which results in growth of the system=s output. This conclusion has been known in economic development literature since Tinbergen=s paper in 1957 (Tinbergen, 1957). However, the literature does not explicitly identify why transportation costs will fall. This issue is addressed in this discussion with respect to intermodal transportation. Transportation is a multiple service to multiple users. It is produced in transportation networks that provide infrastructure for economic activities. It appears that transportation networks have economies of scale. As discussed below, intermodal transportation magnifies these scale effects resulting in increasing returns to scale (IRS) of a specific nature. It implies that there are positive externalities that arise because of the scale effects, externalities that can initiate cumulative economic growth at the regional level as well as at the national level (see, for example, Brathen and Hervick, 1997, and Hussain and Westin, 1997). The phenomenon is known as a spill-over effect. Previously the effect has been evaluated through the contribution of transportation infrastructure investment to economic growth. Since Auschauer=s (1989) paper many economists have found evidence of such a contribution (see, for example, Bonaglia and Ferrara, 2000 and Khanam, 1996). Intermodal transportation as it was defined at the very beginning is more than mere improvements in transportation infrastructure. From a theoretical standpoint, it posseses some characteristics of the general-purpose technologies (GPT), and it seems appropriate to regard it as an example of the GPT, which is discussed below. It appears reasonable to study intermodal transportation as a two-way improvement of an economic system=s productivity. 
On the one hand, it improves current operational functions of the system. On the other hand, it expands those functions. Both improvements are achieved by consolidating different transportation systems into a seamless transportation network that utilizes the comparative advantages of different transportation modes. Improvements due to intermodal transportation are associated with the increased productivity of transportation services and a reduction in logistic costs. The former results in an increased volume of transportation per unit cost, while the latter directly reduces costs of commodity production. Expansion of the intermodal transportation network is associated with economies of scale and better accessibility to input and output markets. The overall impact of intermodal transportation can be divided into four elements: (i) an increase in the volume of transportation in an existing transportation network; (ii) a reduction in logistic costs of current operations; (iii) the economies of scale associated with transportation network expansion; (iv) better accessibility to input and output markets. These four elements are discussed below in a sequence. Increase in volume of transportation in the existing network An increase in volume of transportation can lead to economies of density a specific scale effect. The economies of density exist if an increase in the volume of transportation in the network does not require a proportional increase in all inputs of the network. Usually the phenomenon is associated with an increase in the frequency of transportation (traffic) within the existing network (see Boyer, 1998 for a formal definition, Ciccone and Hall, 1996 for general discussion of economies of density, and Fujii, Im and Mak, 1992 for examples of economies of density in transportation). In the case of intermodal transportation, economies of density are achieved through cargo containerization, cargo consolidation and computer-guiding systems at intermodal facilities. Cargo containerization and consolidation result in an increased load factor of transportation vehicles and higher capacity utilization of the transportation fixed facilities, while utilization of computer-guiding systems results in higher labour productivity. For instance, in 1994 Burlington Northern Santa Fe Railway (BNSF) introduced the Alliance Intermodal Facility at Fort Worth, Texas, into its operations between Chicago and Los Angeles. According to OmniTRAX specialists, who operates the facility, BNSF has nearly doubled its volume of throughput at the intermodal facility since 1994. First, containerization of commodities being transported plus hubbing or cargo consolidation at the intermodal facility resulted in longer trains with higher frequency. Second, all day-to-day operations at the intermodal facility are governed by the Optimization Alternatives Strategic Intermodal Scheduler (OASIS) computer system, which allowed BNSF to handle more operations with less labour. Reduction in Logistic Costs Intermodal transportation is characterized by optimal frequency of service and modal choice and increased reliability. Combined, these two features define the just-in-time delivery -a major service produced by intermodal transportation. 
Furthermore, Blackburn (1991) argues that just-in-time d", "title": "" }, { "docid": "neg:1840301_11", "text": "— Studies within the EHEA framework include the acquisition of skills such as the ability to learn autonomously, which requires students to devote much of their time to individual and group work to reinforce and further complement the knowledge acquired in the classroom. In order to consolidate the results obtained from classroom activities, lecturers must develop tools to encourage learning and facilitate the process of independent learning. The aim of this work is to present the use of virtual laboratories based on Easy Java Simulations to assist in the understanding and testing of electrical machines. con los usuarios integrándose fácilmente en plataformas de e-aprendizaje. Para nuestra aplicación hemos escogido el Java Ejs (Easy Java Simulations), ya que es una herramienta de software gratuita, diseñada para el desarrollo de laboratorios virtuales interactivos, dispone de elementos visuales parametrizables", "title": "" }, { "docid": "neg:1840301_12", "text": "A taxonomic revision of Australian Macrobrachium identified three species new to the Australian fauna – two undescribed species and one new record, viz. M. auratumsp. nov., M. koombooloombasp. nov., and M. mammillodactylus(Thallwitz, 1892). Eight taxa previously described by Riek (1951) are recognised as new junior subjective synonyms, viz. M. adscitum adscitum, M. atactum atactum, M. atactum ischnomorphum, M. atactum sobrinum, M. australiense crassum, M. australiense cristatum, M. australiense eupharum of M. australienseHolthuis, 1950, and M. glypticumof M. handschiniRoux, 1933. Apart from an erroneous type locality for a junior subjective synonym, there were no records to confirm the presence of M. australe(Guérin-Méneville, 1838) on the Australian continent. In total, 13 species of Macrobrachiumare recorded from the Australian continent. Keys to male developmental stages and Australian species are provided. A revised diagnosis is given for the genus. A list of 31 atypical species which do not appear to be based on fully developed males or which require re-evaluation of their generic status is provided. Terminology applied to spines and setae is revised.", "title": "" }, { "docid": "neg:1840301_13", "text": "In this letter, we propose sparsity-based coherent and noncoherent dictionaries for action recognition. First, the input data are divided into different clusters and the number of clusters depends on the number of action categories. Within each cluster, we seek data items of each action category. If the number of data items exceeds threshold in any action category, these items are labeled as coherent. In a similar way, all coherent data items from different clusters form a coherent group of each action category, and data that are not part of the coherent group belong to noncoherent group of each action category. These coherent and noncoherent groups are learned using K-singular value decomposition dictionary learning. Since the coherent group has more similarity among data, only few atoms need to be learned. In the noncoherent group, there is a high variability among the data items. So, we propose an orthogonal-projection-based selection to get optimal dictionary in order to retain maximum variance in the data. Finally, the obtained dictionary atoms of both groups in each action category are combined and then updated using the limited Broyden–Fletcher–Goldfarb–Shanno optimization algorithm. 
The experiments are conducted on challenging datasets HMDB51 and UCF50 with action bank features and achieve comparable result using this state-of-the-art feature.", "title": "" }, { "docid": "neg:1840301_14", "text": "Recent investigations of Field Programmable Gate Array (FPGA)-based time-to-digital converters (TDCs) have predominantly focused on improving the time resolution of the device. However, the monolithic integration of multi-channel TDCs and the achievement of high measurement throughput remain challenging issues for certain applications. In this paper, the potential of the resources provided by the Kintex-7 Xilinx FPGA is fully explored, and a new design is proposed for the implementation of a high performance multi-channel TDC system on this FPGA. Using the tapped-delay-line wave union TDC architecture, in which a negative pulse is triggered by the hit signal propagating along the carry chain, two time measurements are performed in a single carry chain within one clock cycle. The differential non-linearity and time resolution can be significantly improved by realigning the bins. The on-line calibration and on-line updating of the calibration table reduce the influence of variations of environmental conditions. The logic resources of the 6-input look-up tables in the FPGA are employed for hit signal edge detection and bubble-proof encoding, thereby allowing the TDC system to operate at the maximum allowable clock rate of the FPGA and to achieve the maximum possible measurement throughput. This resource-efficient design, in combination with a modular implementation, makes the integration of multiple channels in one FPGA practicable. Using our design, a 128-channel TDC with a dead time of 1.47 ns, a dynamic range of 360 ns, and a root-mean-square resolution of less than 10 ps was implemented in a single Kintex-7 device.", "title": "" }, { "docid": "neg:1840301_15", "text": "Rhinophyma is a subtype of rosacea characterized by nodular thickening of the skin, sebaceous gland hyperplasia, dilated pores, and in its late stage, fibrosis. Phymatous changes in rosacea are most common on the nose but can also occur on the chin (gnatophyma), ears (otophyma), and eyelids (blepharophyma). In severe cases, phymatous changes result in the loss of normal facial contours, significant disfigurement, and social isolation. Additionally, patients with profound rhinophyma can experience nare obstruction and difficulty breathing due to the weight and bulk of their nose. Treatment options for severe advanced rhinophyma include cryosurgery, partial-thickness decortication with subsequent secondary repithelialization, carbon dioxide (CO2) or erbium-doped yttrium aluminum garnet (Er:YAG) laser ablation, full-thickness resection with graft or flap reconstruction, excision by electrocautery or radio frequency, and sculpting resection using a heated Shaw scalpel. We report a severe case of rhinophyma resulting in marked facial disfigurement and nasal obstruction treated successfully using the Shaw scalpel. Rhinophymectomy using the Shaw scalpel allows for efficient and efficacious treatment of rhinophyma without the need for multiple procedures or general anesthesia and thus should be considered in patients with nare obstruction who require intervention.", "title": "" }, { "docid": "neg:1840301_16", "text": "Emerging Wi-Fi technologies are expected to cope with large amounts of traffic in dense networks. 
Consequently, proposals for the future IEEE 802.11ax Wi-Fi amendment include sensing threshold and transmit power adaptation, in order to improve spatial reuse. However, it is not yet understood to which extent such adaptive approaches — and which variant — would achieve a better balance between spatial reuse and the level of interference, in order to improve the network performance. Moreover, it is not clear how legacy Wi-Fi devices would be affected by new-generation Wi-Fi implementing these adaptive design parameters. In this paper we present a thorough comparative study in ns-3 for four major proposed adaptation algorithms and we compare their performance against legacy non-adaptive Wi-Fi. Additionally, we consider mixed populations where both legacy non-adaptive and new-generation adaptive populations coexist. We assume a dense indoor residential deployment and different numbers of available channels in the 5 GHz band, relevant for future IEEE 802.11ax. Our results show that for the dense scenarios considered, the algorithms do not significantly improve the overall network performance compared to the legacy baseline, as they increase the throughput of some nodes, while decreasing the throughput of others. For mixed populations in dense deployments, adaptation algorithms that improve the performance of new-generation nodes degrade the performance of legacy nodes and vice versa. This suggests that to support Wi-Fi evolution for dense deployments and consistently increase the throughput throughout the network, more sophisticated algorithms are needed, e.g. considering combinations of input parameters in current variants.", "title": "" }, { "docid": "neg:1840301_17", "text": "Continuous practices, i.e., continuous integration, delivery, and deployment, are the software development industry practices that enable organizations to frequently and reliably release new features and products. With the increasing interest in the literature on continuous practices, it is important to systematically review and synthesize the approaches, tools, challenges, and practices reported for adopting and implementing continuous practices. This paper aimed at systematically reviewing the state of the art of continuous practices to classify approaches and tools, identify challenges and practices in this regard, and identify the gaps for future research. We used the systematic literature review method for reviewing the peer-reviewed papers on continuous practices published between 2004 and June 1, 2016. We applied the thematic analysis method for analyzing the data extracted from reviewing 69 papers selected using predefined criteria. We have identified 30 approaches and associated tools, which facilitate the implementation of continuous practices in the following ways: 1) reducing build and test time in continuous integration (CI); 2) increasing visibility and awareness on build and test results in CI; 3) supporting (semi-) automated continuous testing; 4) detecting violations, flaws, and faults in CI; 5) addressing security and scalability issues in deployment pipeline; and 6) improving dependability and reliability of deployment process. We have also determined a list of critical factors, such as testing (effort and time), team awareness and transparency, good design principles, customer, highly skilled and motivated team, application domain, and appropriate infrastructure that should be carefully considered when introducing continuous practices in a given organization. 
The majority of the reviewed papers were validation (34.7%) and evaluation (36.2%) research types. This paper also reveals that continuous practices have been successfully applied to both greenfield and maintenance projects. Continuous practices have become an important area of software engineering research and practice. While the reported approaches, tools, and practices are addressing a wide range of challenges, there are several challenges and gaps, which require future research work for improving the capturing and reporting of contextual information in the studies reporting different aspects of continuous practices; gaining a deep understanding of how software-intensive systems should be (re-) architected to support continuous practices; and addressing the lack of knowledge and tools for engineering processes of designing and running secure deployment pipelines.", "title": "" }, { "docid": "neg:1840301_18", "text": "We presented the first single block collision attack on MD5 with complexity of 2 MD5 compressions and posted the challenge for another completely new one in 2010. Last year, Stevens presented a single block collision attack to our challenge, with complexity of 2 MD5 compressions. We really appreciate Stevens’s hard work. However, it is a pity that he had not found even a better solution than our original one, let alone a completely new one and the very optimal solution that we preserved and have been hoping that someone can find it, whose collision complexity is about 2 MD5 compressions. In this paper, we propose a method how to choose the optimal input difference for generating MD5 collision pairs. First, we divide the sufficient conditions into two classes: strong conditions and weak conditions, by the degree of difficulty for condition satisfaction. Second, we prove that there exist strong conditions in only 24 steps (one and a half rounds) under specific conditions, by utilizing the weaknesses of compression functions of MD5, which are difference inheriting and message expanding. Third, there should be no difference scaling after state word q25 so that it can result in the least number of strong conditions in each differential path, in such a way we deduce the distribution of strong conditions for each input difference pattern. Finally, we choose the input difference with the least number of strong conditions and the most number of free message words. We implement the most efficient 2-block MD5 collision attack, which needs only about 2 MD5 compressions to find a collision pair, and show a single-block collision attack with complexity 2.", "title": "" }, { "docid": "neg:1840301_19", "text": "BACKGROUND\nMidline facial clefts are rare and challenging deformities caused by failure of fusion of the medial nasal prominences. These anomalies vary in severity, and may include microform lines or midline lip notching, incomplete or complete labial clefting, nasal bifidity, or severe craniofacial bony and soft tissue anomalies with orbital hypertelorism and frontoethmoidal encephaloceles. In this study, the authors present 4 cases, classify the spectrum of midline cleft anomalies, and review our technical approaches to the surgical correction of midline cleft lip and bifid nasal deformities. Embryology and associated anomalies are discussed.\n\n\nMETHODS\nThe authors retrospectively reviewed our experience with 4 cases of midline cleft lip with and without nasal deformities of varied complexity. 
In addition, a comprehensive literature search was performed, identifying studies published relating to midline cleft lip and/or bifid nose deformities. Our assessment of the anomalies in our series, in conjunction with published reports, was used to establish a 5-tiered classification system. Technical approaches and clinical reports are described.\n\n\nRESULTS\nFunctional and aesthetic anatomic correction was successfully achieved in each case without complication. A classification and treatment strategy for the treatment of midline cleft lip and bifid nose deformity is presented.\n\n\nCONCLUSIONS\nThe successful treatment of midline cleft lip and bifid nose deformities first requires the identification and classification of the wide variety of anomalies. With exposure of abnormal nasolabial anatomy, the excision of redundant skin and soft tissue, anatomic approximation of cartilaginous elements, orbicularis oris muscle repair, and craniofacial osteotomy and reduction as indicated, a single-stage correction of midline cleft lip and bifid nasal deformity can be safely and effectively achieved.", "title": "" } ]
1840302
Regularizing Relation Representations by First-order Implications
[ { "docid": "pos:1840302_0", "text": "Hypernymy, textual entailment, and image captioning can be seen as special cases of a single visual-semantic hierarchy over words, sentences, and images. In this paper we advocate for explicitly modeling the partial order structure of this hierarchy. Towards this goal, we introduce a general method for learning ordered representations, and show how it can be applied to a variety of tasks involving images and language. We show that the resulting representations improve performance over current approaches for hypernym prediction and image-caption retrieval.", "title": "" }, { "docid": "pos:1840302_1", "text": "Matrix factorization approaches to relation extraction provide several attractive features: they support distant supervision, handle open schemas, and leverage unlabeled data. Unfortunately, these methods share a shortcoming with all other distantly supervised approaches: they cannot learn to extract target relations without existing data in the knowledge base, and likewise, these models are inaccurate for relations with sparse data. Rule-based extractors, on the other hand, can be easily extended to novel relations and improved for existing but inaccurate relations, through first-order formulae that capture auxiliary domain knowledge. However, usually a large set of such formulae is necessary to achieve generalization. In this paper, we introduce a paradigm for learning low-dimensional embeddings of entity-pairs and relations that combine the advantages of matrix factorization with first-order logic domain knowledge. We introduce simple approaches for estimating such embeddings, as well as a novel training algorithm to jointly optimize over factual and first-order logic information. Our results show that this method is able to learn accurate extractors with little or no distant supervision alignments, while at the same time generalizing to textual patterns that do not appear in the formulae.", "title": "" }, { "docid": "pos:1840302_2", "text": "Corpus-based distributional semantic models capture degrees of semantic relatedness among the words of very large vocabularies, but have problems with logical phenomena such as entailment, that are instead elegantly handled by model-theoretic approaches, which, in turn, do not scale up. We combine the advantages of the two views by inducing a mapping from distributional vectors of words (or sentences) into a Boolean structure of the kind in which natural language terms are assumed to denote. We evaluate this Boolean Distributional Semantic Model (BDSM) on recognizing entailment between words and sentences. The method achieves results comparable to a state-of-the-art SVM, degrades more gracefully when less training data are available and displays interesting qualitative properties.", "title": "" } ]
[ { "docid": "neg:1840302_0", "text": "Fine-grained image classification is to recognize hundreds of subcategories belonging to the same basic-level category, such as 200 subcategories belonging to the bird, which is highly challenging due to large variance in the same subcategory and small variance among different subcategories. Existing methods generally first locate the objects or parts and then discriminate which subcategory the image belongs to. However, they mainly have two limitations: 1) relying on object or part annotations which are heavily labor consuming; and 2) ignoring the spatial relationships between the object and its parts as well as among these parts, both of which are significantly helpful for finding discriminative parts. Therefore, this paper proposes the object-part attention model (OPAM) for weakly supervised fine-grained image classification and the main novelties are: 1) object-part attention model integrates two level attentions: object-level attention localizes objects of images, and part-level attention selects discriminative parts of object. Both are jointly employed to learn multi-view and multi-scale features to enhance their mutual promotion; and 2) Object-part spatial constraint model combines two spatial constraints: object spatial constraint ensures selected parts highly representative and part spatial constraint eliminates redundancy and enhances discrimination of selected parts. Both are jointly employed to exploit the subtle and local differences for distinguishing the subcategories. Importantly, neither object nor part annotations are used in our proposed approach, which avoids the heavy labor consumption of labeling. Compared with more than ten state-of-the-art methods on four widely-used datasets, our OPAM approach achieves the best performance.", "title": "" }, { "docid": "neg:1840302_1", "text": "concepts of tissue-integrated prostheses with remarkable functional advantages, innovations have resulted in dental implant solutions spanning the spectrum of dental needs. Current discussions concerning the relative merit of an implant versus a 3-unit fixed partial denture fully illustrate the possibility that single implants represent a bona fide choice for tooth replacement. Interestingly, when delving into the detailed comparisons between the outcomes of single-tooth implant versus fixed partial dentures or the intentional replacement of a failing tooth with an implant instead of restoration involving root canal therapy, little emphasis has been placed on the relative esthetic merits of one or another therapeutic approach to tooth replacement therapy. An ideal prosthesis should fully recapitulate or enhance the esthetic features of the tooth or teeth it replaces. Although it is clearly beyond the scope of this article to compare the various methods of esthetic tooth replacement, there is, perhaps, sufficient space to share some insights regarding an objective approach to planning, executing and evaluating the esthetic merit of single-tooth implant restorations.", "title": "" }, { "docid": "neg:1840302_2", "text": "High speed, low latency obstacle avoidance is essential for enabling Micro Aerial Vehicles (MAVs) to function in cluttered and dynamic environments. While other systems exist that do high-level mapping and 3D path planning for obstacle avoidance, most of these systems require high-powered CPUs on-board or off-board control from a ground station. 
We present a novel entirely on-board approach, leveraging a light-weight low power stereo vision system on FPGA. Our approach runs at a frame rate of 60 frames a second on VGA-sized images and minimizes latency between image acquisition and performing reactive maneuvers, allowing MAVs to fly more safely and robustly in complex environments. We also suggest our system as a light-weight safety layer for systems undertaking more complex tasks, like mapping the environment. Finally, we show our algorithm implemented on a lightweight, very computationally constrained platform, and demonstrate obstacle avoidance in a variety of environments.", "title": "" }, { "docid": "neg:1840302_3", "text": "Combinatorial testing (also called interaction testing) is an effective specification-based test input generation technique. By now most of research work in combinatorial testing aims to propose novel approaches trying to generate test suites with minimum size that still cover all the pairwise, triple, or n-way combinations of factors. Since the difficulty of solving this problem is demonstrated to be NP-hard, existing approaches have been designed to generate optimal or near optimal combinatorial test suites in polynomial time. In this paper, we try to apply particle swarm optimization (PSO), a kind of meta-heuristic search technique, to pairwise testing (i.e. a special case of combinatorial testing aiming to cover all the pairwise combinations). To systematically build pairwise test suites, we propose two different PSO based algorithms. One algorithm is based on one-test-at-a-time strategy and the other is based on IPO-like strategy. In these two different algorithms, we use PSO to complete the construction of a single test. To successfully apply PSO to cover more uncovered pairwise combinations in this construction process, we provide a detailed description on how to formulate the search space, define the fitness function and set some heuristic settings. To verify the effectiveness of our approach, we implement these algorithms and choose some typical inputs. In our empirical study, we analyze the impact factors of our approach and compare our approach to other well-known approaches. Final empirical results show the effectiveness and efficiency of our approach.", "title": "" }, { "docid": "neg:1840302_4", "text": "This paper argues that we should seek the golden middle way between dynamically and statically typed languages.", "title": "" }, { "docid": "neg:1840302_5", "text": "Researchers are often confused about what can be inferred from significance tests. One problem occurs when people apply Bayesian intuitions to significance testing-two approaches that must be firmly separated. This article presents some common situations in which the approaches come to different conclusions; you can see where your intuitions initially lie. The situations include multiple testing, deciding when to stop running participants, and when a theory was thought of relative to finding out results. The interpretation of nonsignificant results has also been persistently problematic in a way that Bayesian inference can clarify. The Bayesian and orthodox approaches are placed in the context of different notions of rationality, and I accuse myself and others as having been irrational in the way we have been using statistics on a key notion of rationality. 
The reader is shown how to apply Bayesian inference in practice, using free online software, to allow more coherent inferences from data.", "title": "" }, { "docid": "neg:1840302_6", "text": "We describe a framework for characterizing people’s behavior with Digital Live Art. Our framework considers people’s wittingness, technical skill, and interpretive abilities in relation to the performance frame. Three key categories of behavior with respect to the performance frame are proposed: performing, participating, and spectating. We exemplify the use of our framework by characterizing people’s interaction with a DLA iPoi. This DLA is based on the ancient Maori art form of poi and employs a wireless, peer-to-peer exertion interface. The design goal of iPoi is to draw people into the performance frame and support transitions from audience to participant and on to performer. We reflect on iPoi in a public performance and outline its key design features.", "title": "" }, { "docid": "neg:1840302_7", "text": "Total quality management (TQM) has been widely considered as the strategic, tactical and operational tool in the quality management research field. It is one of the most applied and well accepted approaches for business excellence besides Continuous Quality Improvement (CQI), Six Sigma, Just-in-Time (JIT), and Supply Chain Management (SCM) approaches. There is a great enthusiasm among manufacturing and service industries in adopting and implementing this strategy in order to maintain their sustainable competitive advantage. The aim of this study is to develop and propose the conceptual framework and research model of TQM implementation in relation to company performance particularly in context with the Indian service companies. It examines the relationships between TQM and company’s performance by measuring the quality performance as performance indicator. A comprehensive review of literature on TQM and quality performance was carried out to accomplish the objectives of this study and a research model and hypotheses were generated. Two research questions and 34 hypotheses were proposed to re-validate the TQM practices. The adoption of such a theoretical model on TQM and company’s quality performance would help managers, decision makers, and practitioners of TQM in better understanding of the TQM practices and to focus on the identified practices while implementing TQM in their companies. Further, the scope for future study is to test and validate the theoretical model by collecting the primary data from the Indian service companies and using Structural Equation Modeling (SEM) approach for hypotheses testing.", "title": "" }, { "docid": "neg:1840302_8", "text": "Blockchain-based distributed computing platforms enable the trusted execution of computation—defined in the form of smart contracts—without trusted agents. Smart contracts are envisioned to have a variety of applications, ranging from financial to IoT asset tracking. Unfortunately, the development of smart contracts has proven to be extremely error prone. In practice, contracts are riddled with security vulnerabilities comprising a critical issue since bugs are by design nonfixable and contracts may handle financial assets of significant value. To facilitate the development of secure smart contracts, we have created the FSolidM framework, which allows developers to define contracts as finite state machines (FSMs) with rigorous and clear semantics. 
FSolidM provides an easy-to-use graphical editor for specifying FSMs, a code generator for creating Ethereum smart contracts, and a set of plugins that developers may add to their FSMs to enhance security and functionality.", "title": "" }, { "docid": "neg:1840302_9", "text": "This paper describes a novel feature selection algorithm for unsupervised clustering, that combines the clustering ensembles method and the population based incremental learning algorithm. The main idea of the proposed unsupervised feature selection algorithm is to search for a subset of all features such that the clustering algorithm trained on this feature subset can achieve the most similar clustering solution to the one obtained by an ensemble learning algorithm. In particular, a clustering solution is firstly achieved by a clustering ensembles method, then the population based incremental learning algorithm is adopted to find the feature subset that best fits the obtained clustering solution. One advantage of the proposed unsupervised feature selection algorithm is that it is dimensionality-unbiased. In addition, the proposed unsupervised feature selection algorithm leverages the consensus across multiple clustering solutions. Experimental results on several real data sets demonstrate that the proposed unsupervised feature selection algorithm is often able to obtain a better feature subset when compared with other existing unsupervised feature selection algorithms.", "title": "" }, { "docid": "neg:1840302_10", "text": "Information Security has become an important issue in data communication. Encryption algorithms have come up as a solution and play an important role in information security system. On other side, those algorithms consume a significant amount of computing resources such as CPU time, memory and battery power. Therefore it is essential to measure the performance of encryption algorithms. In this work, three encryption algorithms namely DES, AES and Blowfish are analyzed by considering certain performance metrics such as execution time, memory required for implementation and throughput. Based on the experiments, it has been concluded that the Blowfish is the best performing algorithm among the algorithms chosen for implementation.", "title": "" }, { "docid": "neg:1840302_11", "text": "Our team is currently developing an Automated Cyber Red Teaming system that, when given a model-based capture of an organisation's network, uses automated planning techniques to generate and assess multi-stage attacks. Specific to this paper, we discuss our development of the visual analytic component of this system. Through various views that display network attacks paths at different levels of abstraction, our tool aims to enhance cyber situation awareness of human decision makers.", "title": "" }, { "docid": "neg:1840302_12", "text": "BACKGROUND\nFacebook is a social networking site (SNS) for communication, entertainment and information exchange. Recent research has shown that excessive use of Facebook can result in addictive behavior in some individuals.\n\n\nAIM\nTo assess the patterns of Facebook use in post-graduate students of Yenepoya University and evaluate its association with loneliness.\n\n\nMETHODS\nA cross-sectional study was done to evaluate 100 post-graduate students of Yenepoya University using Bergen Facebook Addiction Scale (BFAS) and University of California and Los Angeles (UCLA) loneliness scale version 3. Descriptive statistics were applied. 
Pearson's bivariate correlation was done to see the relationship between severity of Facebook addiction and the experience of loneliness.\n\n\nRESULTS\nMore than one-fourth (26%) of the study participants had Facebook addiction and 33% had a possibility of Facebook addiction. There was a significant positive correlation between severity of Facebook addiction and extent of experience of loneliness ( r = .239, p = .017).\n\n\nCONCLUSION\nWith the rapid growth of popularity and user-base of Facebook, a significant portion of the individuals are susceptible to develop addictive behaviors related to Facebook use. Loneliness is a factor which influences addiction to Facebook.", "title": "" }, { "docid": "neg:1840302_13", "text": "A great variety of systems in nature, society and technology—from the web of sexual contacts to the Internet, from the nervous system to power grids—can be modeled as graphs of vertices coupled by edges. The network structure, describing how the graph is wired, helps us understand, predict and optimize the behavior of dynamical systems. In many cases, however, the edges are not continuously active. As an example, in networks of communication via email, text messages, or phone calls, edges represent sequences of instantaneous or practically instantaneous contacts. In some cases, edges are active for non-negligible periods of time: e.g., the proximity patterns of inpatients at hospitals can be represented by a graph where an edge between two individuals is on throughout the time they are at the same ward. Like network topology, the temporal structure of edge activations can affect dynamics of systems interacting through the network, from disease contagion on the network of patients to information diffusion over an e-mail network. In this review, we present the emergent field of temporal networks, and discuss methods for analyzing topological and temporal structure and models for elucidating their relation to the behavior of dynamical systems. In the light of traditional network theory, one can see this framework as moving the information of when things happen from the dynamical system on the network, to the network itself. Since fundamental properties, such as the transitivity of edges, do not necessarily hold in temporal networks, many of these methods need to be quite different from those for static networks. The study of temporal networks is very interdisciplinary in nature. Reflecting this, even the object of study has many names—temporal graphs, evolving graphs, time-varying graphs, time-aggregated graphs, time-stamped graphs, dynamic networks, dynamic graphs, dynamical graphs, and so on. This review covers different fields where temporal graphs are considered, but does not attempt to unify related terminology—rather, we want to make papers readable across disciplines.", "title": "" }, { "docid": "neg:1840302_14", "text": "Emotion keyword spotting approach can detect emotion well for explicit emotional contents while it obviously cannot compare to supervised learning approaches for detecting emotional contents of particular events. In this paper, we target earthquake situations in Japan as the particular events for emotion analysis because the affected people often show their states and emotions towards the situations via social networking sites. Additionally, tracking crowd emotions in the Internet during the earthquakes can help authorities to quickly decide appropriate assistance policies without paying the cost as the traditional public surveys. 
Our three main contributions in this paper are: a) the appropriate choice of emotions; b) the novel proposal of two classification methods for determining the earthquake related tweets and automatically identifying the emotions in Twitter; c) tracking crowd emotions during different earthquake situations, a completely new application of emotion analysis research. Our main analysis results show that Twitter users show their Fear and Anxiety right after the earthquakes occurred while Calm and Unpleasantness are not showed clearly during the small earthquakes but in the large tremor.", "title": "" }, { "docid": "neg:1840302_15", "text": "Received: 18 March 2009 Revised: 18 March 2009 Accepted: 20 April 2009 Abstract Adaptive visualization is a new approach at the crossroads of user modeling and information visualization. Taking into account information about a user, adaptive visualization attempts to provide user-adapted visual presentation of information. This paper proposes Adaptive VIBE, an approach for adaptive visualization of search results in an intelligence analysis context. Adaptive VIBE extends the popular VIBE visualization framework by infusing user model terms as reference points for spatial document arrangement and manipulation. We explored the value of the proposed approach using data obtained from a user study. The result demonstrated that user modeling and spatial visualization technologies are able to reinforce each other, creating an enhanced level of user support. Spatial visualization amplifies the user model's ability to separate relevant and non-relevant documents, whereas user modeling adds valuable reference points to relevance-based spatial visualization. Information Visualization (2009) 8, 167--179. doi:10.1057/ivs.2009.12", "title": "" }, { "docid": "neg:1840302_16", "text": "Given the unprecedented availability of data and computing resources, there is widespread renewed interest in applying data-driven machine learning methods to problems for which the development of conventional engineering solutions is challenged by modeling or algorithmic deficiencies. This tutorial-style paper starts by addressing the questions of why and when such techniques can be useful. It then provides a high-level introduction to the basics of supervised and unsupervised learning. For both supervised and unsupervised learning, exemplifying applications to communication networks are discussed by distinguishing tasks carried out at the edge and at the cloud segments of the network at different layers of the protocol stack, with an emphasis on the physical layer.", "title": "" }, { "docid": "neg:1840302_17", "text": "UNLABELLED\nThe limit of the Colletotrichum gloeosporioides species complex is defined genetically, based on a strongly supported clade within the Colletotrichum ITS gene tree. All taxa accepted within this clade are morphologically more or less typical of the broadly defined C. gloeosporioides, as it has been applied in the literature for the past 50 years. We accept 22 species plus one subspecies within the C. gloeosporioides complex. These include C. asianum, C. cordylinicola, C. fructicola, C. gloeosporioides, C. horii, C. kahawae subsp. kahawae, C. musae, C. nupharicola, C. psidii, C. siamense, C. theobromicola, C. tropicale, and C. xanthorrhoeae, along with the taxa described here as new, C. aenigma, C. aeschynomenes, C. alatae, C. alienum, C. aotearoa, C. clidemiae, C. kahawae subsp. ciggaro, C. salsolae, and C. ti, plus the nom. nov. C. queenslandicum (for C. 
gloeosporioides var. minus). All of the taxa are defined genetically on the basis of multi-gene phylogenies. Brief morphological descriptions are provided for species where no modern description is available. Many of the species are unable to be reliably distinguished using ITS, the official barcoding gene for fungi. Particularly problematic are a set of species genetically close to C. musae and another set of species genetically close to C. kahawae, referred to here as the Musae clade and the Kahawae clade, respectively. Each clade contains several species that are phylogenetically well supported in multi-gene analyses, but within the clades branch lengths are short because of the small number of phylogenetically informative characters, and in a few cases individual gene trees are incongruent. Some single genes or combinations of genes, such as glyceraldehyde-3-phosphate dehydrogenase and glutamine synthetase, can be used to reliably distinguish most taxa and will need to be developed as secondary barcodes for species level identification, which is important because many of these fungi are of biosecurity significance. In addition to the accepted species, notes are provided for names where a possible close relationship with C. gloeosporioides sensu lato has been suggested in the recent literature, along with all subspecific taxa and formae speciales within C. gloeosporioides and its putative teleomorph Glomerella cingulata.\n\n\nTAXONOMIC NOVELTIES\nName replacement - C. queenslandicum B. Weir & P.R. Johnst. New species - C. aenigma B. Weir & P.R. Johnst., C. aeschynomenes B. Weir & P.R. Johnst., C. alatae B. Weir & P.R. Johnst., C. alienum B. Weir & P.R. Johnst, C. aotearoa B. Weir & P.R. Johnst., C. clidemiae B. Weir & P.R. Johnst., C. salsolae B. Weir & P.R. Johnst., C. ti B. Weir & P.R. Johnst. New subspecies - C. kahawae subsp. ciggaro B. Weir & P.R. Johnst. Typification: Epitypification - C. queenslandicum B. Weir & P.R. Johnst.", "title": "" }, { "docid": "neg:1840302_18", "text": "This paper presents a research model to explicate that the level of consumers’ participation on companies’ brand microblogs is influenced by their brand attachment process. That is, self-congruence and partner quality affect consumers’ trust and commitment toward companies’ brands, which in turn influence participation on brand microblogs. Further, we propose that gender has important moderating effects in our research model. We empirically test the research hypotheses through an online survey. The findings illustrate that self-congruence and partner quality have positive effects on trust and commitment. Trust affects commitment and participation, while participation is also influenced by commitment. More importantly, the effects of self-congruence on trust and commitment are found to be stronger for male consumers than females. In contrast, the effects of partner quality on trust and commitment are stronger for female consumers than males. Trust posits stronger effects on commitment and participation for males, while commitment has a stronger effect on participation for females. We believe that our findings contribute to the literature on consumer participation behavior and gender differences on brand microblogs. Companies can also apply our findings to strengthen their brand building and participation level of different consumers on their microblogs. 2014 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "neg:1840302_19", "text": "Homogeneity analysis combines the idea of maximizing the correlations between variables of a multivariate data set with that of optimal scaling. In this article we present methodological and practical issues of the R package homals which performs homogeneity analysis and various extensions. By setting rank constraints nonlinear principal component analysis can be performed. The variables can be partitioned into sets such that homogeneity analysis is extended to nonlinear canonical correlation analysis or to predictive models which emulate discriminant analysis and regression models. For each model the scale level of the variables can be taken into account by setting level constraints. All algorithms allow for missing values.", "title": "" } ]
1840303
Dataset for forensic analysis of B-tree file system
[ { "docid": "pos:1840303_0", "text": "Most of the effort in today’s digital forensics community lies in the retrieval and analysis of existing information from computing systems. Little is being done to increase the quantity and quality of the forensic information on today’s computing systems. In this paper we pose the question of what kind of information is desired on a system by a forensic investigator. We give an overview of the information that exists on current systems and discuss its shortcomings. We then examine the role that file system metadata plays in digital forensics and analyze what kind of information is desirable for different types of forensic investigations, how feasible it is to obtain it, and discuss issues about storing the information.", "title": "" } ]
[ { "docid": "neg:1840303_0", "text": "This paper considers how design fictions in the form of 'imaginary abstracts' can be extended into complete 'fictional papers'. Imaginary abstracts are a type of design fiction that are usually included within the content of 'real' research papers, they comprise brief accounts of fictional problem frames, prototypes, user studies and findings. Design fiction abstracts have been proposed as a means to move beyond solutionism to explore the potential societal value and consequences of new HCI concepts. In this paper we contrast the properties of imaginary abstracts, with the properties of a published paper that presents fictional research, Game of Drones. Extending the notion of imaginary abstracts so that rather than including fictional abstracts within a 'non-fiction' research paper, Game of Drones is fiction from start to finish (except for the concluding paragraph where the fictional nature of the paper is revealed). In this paper we review the scope of design fiction in HCI research before contrasting the properties of imaginary abstracts with the properties of our example fictional research paper. We argue that there are clear merits and weaknesses to both approaches, but when used tactfully and carefully fictional research papers may further empower HCI's burgeoning design discourse with compelling new methods.", "title": "" }, { "docid": "neg:1840303_1", "text": "A novel method of on-line 2,2′-Azinobis-(3-ethylbenzthiazoline-6-sulphonate)-Capillary Electrophoresis-Diode Array Detector (on-line ABTS+-CE-DAD) was developed to screen the major antioxidants from complex herbal medicines. ABTS+, one of well-known oxygen free radicals was firstly integrated into the capillary. For simultaneously detecting and separating ABTS+ and chemical components of herb medicines, some conditions were optimized. The on-line ABTS+-CE-DAD method has successfully been used to screen the main antioxidants from Shuxuening injection (SI), an herbal medicines injection. Under the optimum conditions, nine ingredients of SI including clitorin, rutin, isoquercitrin, Quercetin-3-O-D-glucosyl]-(1-2)-L-rhamnoside, kaempferol-3-O-rutinoside, kaempferol-7-O-β-D-glucopyranoside, apigenin-7-O-Glucoside, quercetin-3-O-[2-O-(6-O-p-hydroxyl-E-coumaroyl)-D-glucosyl]-(1-2)-L-rhamnoside, 3-O-{2-O-[6-O-(p-hydroxyl-E-coumaroyl)-glucosyl]}-(1-2) rhamnosyl kaempfero were separated and identified as the major antioxidants. There is a linear relationship between the total amount of major antioxidants and total antioxidative activity of SI with a linear correlation coefficient of 0.9456. All the Relative standard deviations of recovery, precision and stability were below 7.5%. Based on these results, these nine ingredients could be selected as combinatorial markers to evaluate quality control of SI. It was concluded that on-line ABTS+-CE-DAD method was a simple, reliable and powerful tool to screen and quantify active ingredients for evaluating quality of herbal medicines.", "title": "" }, { "docid": "neg:1840303_2", "text": "In this paper, we discuss the problem of automatic skin lesion analysis, specifically melanoma detection and semantic segmentation. We accomplish this by using deep learning techniques to perform classification on publicly available dermoscopic images. Skin cancer, of which melanoma is a type, is the most prevalent form of cancer in the US and more than four million cases are diagnosed in the US every year. 
In this work, we present our efforts towards an accessible, deep learning-based system that can be used for skin lesion classification, thus leading to an improved melanoma screening system. For classification, a deep convolutional neural network architecture is first implemented over the raw images. In addition, hand-coded features such as 166-D color histogram distribution, edge histogram and Multiscale Color local binary patterns are extracted from the images and presented to a random forest classifier. The average of the outputs from the two mentioned classifiers is taken as the final classification result. The classification task achieves an accuracy of 80.3%, AUC score of 0.69 and a precision score of 0.81. For segmentation, we implement a convolutional-deconvolutional architecture and the segmentation model achieves a Dice coefficient of 73.5%.", "title": "" }, { "docid": "neg:1840303_3", "text": "Recently, neural-network based word embedding models have been shown to produce high-quality distributional representations capturing both semantic and syntactic information. In this paper, we propose a grouping-based context predictive model by considering the interactions of context words, which generalizes the widely used CBOW model and Skip-Gram model. In particular, the words within a context window are split into several groups with a grouping function, where words in the same group are combined while different groups are treated as independent. To determine the grouping function, we propose a relatedness hypothesis stating the relationship among context words and propose several context grouping methods. Experimental results demonstrate better representations can be learned with suitable context groups.", "title": "" }, { "docid": "neg:1840303_4", "text": "Document classification tasks were primarily tackled at word level. Recent research that works with character-level inputs shows several benefits over word-level approaches such as natural incorporation of morphemes and better handling of rare words. We propose a neural network architecture that utilizes both convolution and recurrent layers to efficiently encode character inputs. We validate the proposed model on eight large scale document classification tasks and compare with character-level convolution-only models. It achieves comparable performances with much less parameters.", "title": "" }, { "docid": "neg:1840303_5", "text": "Integration of knowledge concerning circadian rhythms, metabolic networks, and sleep-wake cycles is imperative for unraveling the mysteries of biological cycles and their underlying mechanisms. During the last decade, enormous progress in circadian biology research has provided a plethora of new insights into the molecular architecture of circadian clocks. However, the recent identification of autonomous redox oscillations in cells has expanded our view of the clockwork beyond conventional transcription/translation feedback loop models, which have been dominant since the first circadian period mutants were identified in fruit fly. Consequently, non-transcriptional timekeeping mechanisms have been proposed, and the antioxidant peroxiredoxin proteins have been identified as conserved markers for 24-hour rhythms. Here, we review recent advances in our understanding of interdependencies amongst circadian rhythms, sleep homeostasis, redox cycles, and other cellular metabolic networks. 
We speculate that systems-level investigations implementing integrated multi-omics approaches could provide novel mechanistic insights into the connectivity between daily cycles and metabolic systems.", "title": "" }, { "docid": "neg:1840303_6", "text": "Utilization of polymers as biomaterials has greatly impacted the advancement of modern medicine. Specifically, polymeric biomaterials that are biodegradable provide the significant advantage of being able to be broken down and removed after they have served their function. Applications are wide ranging with degradable polymers being used clinically as surgical sutures and implants. In order to fit functional demand, materials with desired physical, chemical, biological, biomechanical and degradation properties must be selected. Fortunately, a wide range of natural and synthetic degradable polymers has been investigated for biomedical applications with novel materials constantly being developed to meet new challenges. This review summarizes the most recent advances in the field over the past 4 years, specifically highlighting new and interesting discoveries in tissue engineering and drug delivery applications.", "title": "" }, { "docid": "neg:1840303_7", "text": "Species extinctions pose serious threats to the functioning of ecological communities worldwide. We used two qualitative and quantitative pollination networks to simulate extinction patterns following three removal scenarios: random removal and systematic removal of the strongest and weakest interactors. We accounted for pollinator behaviour by including potential links into temporal snapshots (12 consecutive 2-week networks) to reflect mutualists' ability to 'switch' interaction partners (re-wiring). Qualitative data suggested a linear or slower than linear secondary extinction while quantitative data showed sigmoidal decline of plant interaction strength upon removal of the strongest interactor. Temporal snapshots indicated greater stability of re-wired networks over static systems. Tolerance of generalized networks to species extinctions was high in the random removal scenario, with an increase in network stability if species formed new interactions. Anthropogenic disturbance, however, that promote the extinction of the strongest interactors might induce a sudden collapse of pollination networks.", "title": "" }, { "docid": "neg:1840303_8", "text": "Storm has long served as the main platform for real-time analytics at Twitter. However, as the scale of data being processed in real-time at Twitter has increased, along with an increase in the diversity and the number of use cases, many limitations of Storm have become apparent. We need a system that scales better, has better debug-ability, has better performance, and is easier to manage -- all while working in a shared cluster infrastructure. We considered various alternatives to meet these needs, and in the end concluded that we needed to build a new real-time stream data processing system. This paper presents the design and implementation of this new system, called Heron. Heron is now the de facto stream data processing engine inside Twitter, and in this paper we also share our experiences from running Heron in production. In this paper, we also provide empirical evidence demonstrating the efficiency and scalability of Heron.", "title": "" }, { "docid": "neg:1840303_9", "text": "Within the research on Micro Aerial Vehicles (MAVs), the field on flight control and autonomous mission execution is one of the most active. 
A crucial point is the localization of the vehicle, which is especially difficult in unknown, GPS-denied environments. This paper presents a novel vision based approach, where the vehicle is localized using a downward looking monocular camera. A state-of-the-art visual SLAM algorithm tracks the pose of the camera, while, simultaneously, building an incremental map of the surrounding region. Based on this pose estimation a LQG/LTR based controller stabilizes the vehicle at a desired setpoint, making simple maneuvers possible like take-off, hovering, setpoint following or landing. Experimental data show that this approach efficiently controls a helicopter while navigating through an unknown and unstructured environment. To the best of our knowledge, this is the first work describing a micro aerial vehicle able to navigate through an unexplored environment (independently of any external aid like GPS or artificial beacons), which uses a single camera as only exteroceptive sensor.", "title": "" }, { "docid": "neg:1840303_10", "text": "Abstract A novel hybrid system architecture for continuous open-loop and closed-loop control systems with discrete decision-making processes is presented. Its operation is illustrated using highly automated driving on freeways and the emergency stop assistant as examples. Since the robustness of such systems is decisive for their future deployment, it was placed at the center of the approach's development. Summary An innovative hybrid system structure for continuous control systems with discrete decision-making processes is presented. The functionality is demonstrated on a highly automated driving system on freeways and on the emergency stop assistant. Due to the fact that the robustness will be a determining factor for future usage of these systems, the presented structure focuses on this feature.", "title": "" }, { "docid": "neg:1840303_11", "text": "Multiangle social network recommendation algorithms (MSN) and a new assessment method, called similarity network evaluation (SNE), are both proposed. From the viewpoint of six dimensions, the MSN are classified into six algorithms, including user-based algorithm from resource point (UBR), user-based algorithm from tag point (UBT), resource-based algorithm from tag point (RBT), resource-based algorithm from user point (RBU), tag-based algorithm from resource point (TBR), and tag-based algorithm from user point (TBU). Compared with the traditional recall/precision (RP) method, the SNE is more simple, effective, and visualized. The simulation results show that TBR and UBR are the best algorithms, RBU and TBU are the worst ones, and UBT and RBT are in the medium levels.", "title": "" }, { "docid": "neg:1840303_12", "text": "A 20-MHz to 3-GHz wide-range multiphase delay-locked loop (DLL) has been realized in 90-nm CMOS technology. The proposed delay cell extends the operation frequency range. A scaling circuit is adopted to lower the large delay gain when the frequency of the input clock is low. The core area of this DLL is 0.005 mm2. The measured power consumption values are 0.4 and 3.6 mW for input clocks of 20 MHz and 3 GHz, respectively. The measured peak-to-peak and root-mean-square jitters are 2.3 and 16 ps at 3 GHz, respectively.", "title": "" }, { "docid": "neg:1840303_13", "text": "Liang Zhou, member IEEE and YiFeng Wu, member IEEE Transphorm, Inc. 
75 Castilian Dr., Goleta, CA, 93117 USA lzhou@transphormusa.com Abstract: This paper presents a true bridgeless totem-pole Power-Factor-Correction (PFC) circuit using GaN HEMT. Enabled by a diode-free GaN power HEMT bridge with low reverse-recovery charge, very-high-efficiency single-phase AC-DC conversion is realized using a totem-pole topology without the limit of forward voltage drop from a fast diode. When implemented with a pair of sync-rec MOSFETs for line rectification, 99% efficiency is achieved at 230V ac input and 400V dc output in continuous-current mode.", "title": "" }, { "docid": "neg:1840303_14", "text": "In this letter, a multifeed tightly coupled patch array antenna capable of broadband operation is analyzed and designed. First, an antenna array composed of infinite elements with each element excited by a feed is proposed. To produce specific polarized radiation efficiently, a new patch element is proposed, and its characteristics are studied based on a 2-port network model. Full-wave simulation results show that the infinite antenna array exhibits both a high efficiency and desirable radiation pattern in a wide frequency band (10 dB bandwidth) from 1.91 to 5.35 GHz (94.8%). Second, to validate its outstanding performance, a realistic finite 4 × 4 antenna prototype is designed, fabricated, and measured in our laboratory. The experimental results agree well with simulated ones, where the frequency bandwidth (VSWR < 2) is from 2.5 to 3.8 GHz (41.3%). The inherent compact size, light weight, broad bandwidth, and good radiation characteristics make this array antenna a promising candidate for future communication and advanced sensing systems.", "title": "" }, { "docid": "neg:1840303_15", "text": "Online social data like user-generated content, expressed or implicit relations among people, and behavioral traces are at the core of many popular web applications and platforms, driving the research agenda of researchers in both academia and industry. The promises of social data are many, including the understanding of \"what the world thinks\" about a social issue, brand, product, celebrity, or other entity, as well as enabling better decision-making in a variety of fields including public policy, healthcare, and economics. However, many academics and practitioners are increasingly warning against the naive usage of social data. They highlight that there are biases and inaccuracies occurring at the source of the data, but also introduced during the data processing pipeline; there are methodological limitations and pitfalls, as well as ethical boundaries and unexpected outcomes that are often overlooked. Such an overlook can lead to wrong or inappropriate results that can be consequential.", "title": "" }, { "docid": "neg:1840303_16", "text": "“Social TV” is a term that broadly describes the online social interactions occurring between viewers while watching television. In this paper, we show that TV networks can derive value from social media content placed in shows because it leads to increased word of mouth via online posts, and it highly correlates with TV show related sales. In short, we show that TV event triggers change the online behavior of viewers. In this paper, we first show that using social media content on the televised American reality singing competition, The Voice, led to increased social media engagement during the TV broadcast. 
We then illustrate that social media buzz about a contestant after a performance is highly correlated with song sales from that contestant’s performance. We believe this to be the first study linking TV content to buzz and sales in real time.", "title": "" }, { "docid": "neg:1840303_17", "text": "Pedestrian detection based on the combination of convolutional neural network (CNN) and traditional handcrafted features (i.e., HOG+LUV) has achieved great success. In general, HOG+LUV are used to generate the candidate proposals and then CNN classifies these proposals. Despite its success, there is still room for improvement. For example, CNN classifies these proposals by the fully connected layer features, while proposal scores and the features in the inner-layers of CNN are ignored. In this paper, we propose a unifying framework called multi-layer channel features (MCF) to overcome the drawback. It first integrates HOG+LUV with each layer of CNN into multi-layer image channels. Based on the multi-layer image channels, a multi-stage cascade AdaBoost is then learned. The weak classifiers in each stage of the multi-stage cascade are learned from the image channels of the corresponding layer. Experiments on Caltech data set, INRIA data set, ETH data set, TUD-Brussels data set, and KITTI data set are conducted. With more abundant features, an MCF achieves the state of the art on Caltech pedestrian data set (i.e., 10.40% miss rate). Using new and accurate annotations, an MCF achieves 7.98% miss rate. As many non-pedestrian detection windows can be quickly rejected by the first few stages, it accelerates detection speed by 1.43 times. By eliminating the highly overlapped detection windows with lower scores after the first stage, it is 4.07 times faster with negligible performance loss.", "title": "" }, { "docid": "neg:1840303_18", "text": "This paper presents all the stages of development of a solar tracker for a photovoltaic panel. The system was made with a microcontroller which was designed as an embedded control. It has a database of the orientation angles of the horizontal axle; therefore it has no sensor input signal and it functions as an open loop control system. Combining the above-mentioned characteristics in one, the tracker system is a new technique of the active type. It is also a rotational robot with 1 degree of freedom.", "title": "" } ]
1840304
CAML: Fast Context Adaptation via Meta-Learning
[ { "docid": "pos:1840304_0", "text": "Few-shot learning is challenging for learning algorithms that learn each task in isolation and from scratch. In contrast, meta-learning learns from many related tasks a meta-learner that can learn a new task more accurately and faster with fewer examples, where the choice of meta-learners is crucial. In this paper, we develop Meta-SGD, an SGD-like, easily trainable meta-learner that can initialize and adapt any differentiable learner in just one step, on both supervised learning and reinforcement learning. Compared to the popular meta-learner LSTM, Meta-SGD is conceptually simpler, easier to implement, and can be learned more efficiently. Compared to the latest meta-learner MAML, Meta-SGD has a much higher capacity by learning to learn not just the learner initialization, but also the learner update direction and learning rate, all in a single meta-learning process. Meta-SGD shows highly competitive performance for few-shot learning on regression, classification, and reinforcement learning.", "title": "" } ]
[ { "docid": "neg:1840304_0", "text": "We propose a vision-based method that localizes a ground vehicle using publicly available satellite imagery as the only prior knowledge of the environment. Our approach takes as input a sequence of ground-level images acquired by the vehicle as it navigates, and outputs an estimate of the vehicle's pose relative to a georeferenced satellite image. We overcome the significant viewpoint and appearance variations between the images through a neural multi-view model that learns location-discriminative embeddings in which ground-level images are matched with their corresponding satellite view of the scene. We use this learned function as an observation model in a filtering framework to maintain a distribution over the vehicle's pose. We evaluate our method on different benchmark datasets and demonstrate its ability localize ground-level images in environments novel relative to training, despite the challenges of significant viewpoint and appearance variations.", "title": "" }, { "docid": "neg:1840304_1", "text": "We propose a powerful new tool for conducting research on computational intelligence and games. `PyVGDL' is a simple, high-level description language for 2D video games, and the accompanying software library permits parsing and instantly playing those games. The streamlined design of the language is based on defining locations and dynamics for simple building blocks, and the interaction effects when such objects collide, all of which are provided in a rich ontology. It can be used to quickly design games, without needing to deal with control structures, and the concise language is also accessible to generative approaches. We show how the dynamics of many classical games can be generated from a few lines of PyVGDL. The main objective of these generated games is to serve as diverse benchmark problems for learning and planning algorithms; so we provide a collection of interfaces for different types of learning agents, with visual or abstract observations, from a global or first-person viewpoint. To demonstrate the library's usefulness in a broad range of learning scenarios, we show how to learn competent behaviors when a model of the game dynamics is available or when it is not, when full state information is given to the agent or just subjective observations, when learning is interactive or in batch-mode, and for a number of different learning algorithms, including reinforcement learning and evolutionary search.", "title": "" }, { "docid": "neg:1840304_2", "text": "This paper proposes a DNS Name Autoconfiguration (called DNSNA) for not only the global DNS names, but also the local DNS names of Internet of Things (IoT) devices. Since there exist so many devices in the IoT environments, it is inefficient to manually configure the Domain Name System (DNS) names of such IoT devices. By this scheme, the DNS names of IoT devices can be autoconfigured with the device's category and model in IPv6-based IoT environments. This DNS name lets user easily identify each IoT device for monitoring and remote-controlling in IoT environments. In the procedure to generate and register an IoT device's DNS name, the standard protocols of Internet Engineering Task Force (IETF) are used. Since the proposed scheme resolves an IoT device's DNS name into an IPv6 address in unicast through an authoritative DNS server, it generates less traffic than Multicast DNS (mDNS), which is a legacy DNS application for the DNS name service in IoT environments. 
Thus, the proposed scheme is more appropriate in global IoT networks than mDNS. This paper explains the design of the proposed scheme and its service scenario, such as smart road and smart home. The results of the simulation prove that our proposal outperforms the legacy scheme in terms of energy consumption.", "title": "" }, { "docid": "neg:1840304_3", "text": "CONTEXT\nYouth worldwide play violent video games many hours per week. Previous research suggests that such exposure can increase physical aggression.\n\n\nOBJECTIVE\nWe tested whether high exposure to violent video games increases physical aggression over time in both high- (United States) and low- (Japan) violence cultures. We hypothesized that the amount of exposure to violent video games early in a school year would predict changes in physical aggressiveness assessed later in the school year, even after statistically controlling for gender and previous physical aggressiveness.\n\n\nDESIGN\nIn 3 independent samples, participants' video game habits and physically aggressive behavior tendencies were assessed at 2 points in time, separated by 3 to 6 months.\n\n\nPARTICIPANTS\nOne sample consisted of 181 Japanese junior high students ranging in age from 12 to 15 years. A second Japanese sample consisted of 1050 students ranging in age from 13 to 18 years. The third sample consisted of 364 United States 3rd-, 4th-, and 5th-graders ranging in age from 9 to 12 years. RESULTS. Habitual violent video game play early in the school year predicted later aggression, even after controlling for gender and previous aggressiveness in each sample. Those who played a lot of violent video games became relatively more physically aggressive. Multisample structure equation modeling revealed that this longitudinal effect was of a similar magnitude in the United States and Japan for similar-aged youth and was smaller (but still significant) in the sample that included older youth.\n\n\nCONCLUSIONS\nThese longitudinal results confirm earlier experimental and cross-sectional studies that had suggested that playing violent video games is a significant risk factor for later physically aggressive behavior and that this violent video game effect on youth generalizes across very different cultures. As a whole, the research strongly suggests reducing the exposure of youth to this risk factor.", "title": "" }, { "docid": "neg:1840304_4", "text": "The in-flight alignment is a critical stage for airborne inertial navigation system/Global Positioning System (INS/GPS) applications. The alignment task is usually carried out by the Kalman filtering technique that necessitates a good initial attitude to obtain a satisfying performance. Due to the airborne dynamics, the in-flight alignment is much more difficult than the alignment on the ground. An optimization-based coarse alignment approach that uses GPS position/velocity as input, founded on the newly-derived velocity/position integration formulae is proposed. Simulation and flight test results show that, with the GPS lever arm well handled, it is potentially able to yield the initial heading up to 1 deg accuracy in 10 s. It can serve as a nice coarse in-flight alignment without any prior attitude information for the subsequent fine Kalman alignment. The approach can also be applied to other applications that require aligning the INS on the run.", "title": "" }, { "docid": "neg:1840304_5", "text": "Intensity-based classification of MR images has proven problematic, even when advanced techniques are used. 
Intrascan and interscan intensity inhomogeneities are a common source of difficulty. While reported methods have had some success in correcting intrascan inhomogeneities, such methods require supervision for the individual scan. This paper describes a new method called adaptive segmentation that uses knowledge of tissue intensity properties and intensity inhomogeneities to correct and segment MR images. Use of the expectation-maximization (EM) algorithm leads to a method that allows for more accurate segmentation of tissue types as well as better visualization of magnetic resonance imaging (MRI) data, that has proven to be effective in a study that includes more than 1000 brain scans. Implementation and results are described for segmenting the brain in the following types of images: axial (dual-echo spin-echo), coronal [three dimensional Fourier transform (3-DFT) gradient-echo T1-weighted] all using a conventional head coil, and a sagittal section acquired using a surface coil. The accuracy of adaptive segmentation was found to be comparable with manual segmentation, and closer to manual segmentation than supervised multivariant classification while segmenting gray and white matter.", "title": "" }, { "docid": "neg:1840304_6", "text": "Chinese word segmentation (CWS) is an important task for Chinese NLP. Recently, many neural network based methods have been proposed for CWS. However, these methods require a large number of labeled sentences for model training, and usually cannot utilize the useful information in Chinese dictionary. In this paper, we propose two methods to exploit the dictionary information for CWS. The first one is based on pseudo labeled data generation, and the second one is based on multi-task learning. The experimental results on two benchmark datasets validate that our approach can effectively improve the performance of Chinese word segmentation, especially when training data is insufficient.", "title": "" }, { "docid": "neg:1840304_7", "text": "Arhinia is a rare condition characterised by the congenital absence of nasal structures, with different patterns of presentation, and often associated with other craniofacial or somatic anomalies. To date, about 30 surviving cases have been reported. We report the case of a female patient aged 6 years, who underwent internal and external nose reconstruction using a staged procedure: a nasal airway was obtained through maxillary osteotomy and ostectomy, and lined with a local skin flap and split-thickness skin grafts; then the external nose was reconstructed with an expanded frontal flap, armed with an autogenous rib framework.", "title": "" }, { "docid": "neg:1840304_8", "text": "This article analyzes two decades of research regarding the mass media's role in shaping, perpetuating, and reducing the stigma of mental illness. It concentrates on three broad areas common in media inquiry: production, representation, and audiences. The analysis reveals that descriptions of mental illness and the mentally ill are distorted due to inaccuracies, exaggerations, or misinformation. The ill are presented not only as peculiar and different, but also as dangerous. Thus, the media perpetuate misconceptions and stigma. Especially prominent is the absence of agreed-upon definitions of \"mental illness,\" as well as the lack of research on the inter-relationships in audience studies between portrayals in the media and social perceptions. 
The analysis concludes with suggestions for further research on mass media's inter-relationships with mental illness.", "title": "" }, { "docid": "neg:1840304_9", "text": "The problem of detecting rectangular structures in images arises in many applications, from building extraction in aerial images to particle detection in cryo-electron microscopy. This paper proposes a new technique for rectangle detection using a windowed Hough transform. Every pixel of the image is scanned, and a sliding window is used to compute the Hough transform of small regions of the image. Peaks of the Hough image (which correspond to line segments) are then extracted, and a rectangle is detected when four extracted peaks satisfy certain geometric conditions. Experimental results indicate that the proposed technique produced promising results for both synthetic and natural images.", "title": "" }, { "docid": "neg:1840304_10", "text": "The mainstay of diagnosis for Treponema pallidum infections is based on nontreponemal and treponemal serologic tests. Many new diagnostic methods for syphilis have been developed, using specific treponemal antigens and novel formats, including rapid point-of-care tests, enzyme immunoassays, and chemiluminescence assays. Although most of these newer tests are not yet cleared for use in the United States by the Food and Drug Administration, their performance and ease of automation have promoted their application for syphilis screening. Both sensitive and specific, new screening tests detect antitreponemal IgM and IgG antibodies by use of wild-type or recombinant T. pallidum antigens. However, these tests cannot distinguish between recent and remote or treated versus untreated infections. In addition, the screening tests require confirmation with nontreponemal tests. This use of treponemal tests for screening and nontreponemal serologic tests as confirmatory tests is a reversal of long-held practice. Clinicians need to understand the science behind these tests to use them properly in syphilis management.", "title": "" }, { "docid": "neg:1840304_11", "text": "A current-biased voltage-programmed (CBVP) pixel circuit for active-matrix organic light-emitting diode (AMOLED) displays is proposed. The pixel circuit can not only ensure an accurate and fast compensation for the threshold voltage variation and degeneration of the driving TFT and the OLED, but also provide the OLED with a negative bias during the programming period. The negative bias prevents the OLED from a possible light emitting during the programming period and potentially suppresses the degradation of the OLED.", "title": "" }, { "docid": "neg:1840304_12", "text": "This paper provides a review on some of the significant research work done on abstractive text summarization. The process of generating the summary from one or more text corpus, by keeping the key points in the corpus is called text summarization. The most prominent technique in text summarization is an abstractive and extractive method. The extractive summarization is purely based on the algorithm and it just copies the most relevant sentence/words from the input text corpus and creating the summary. An abstractive method generates new sentences/words that may/may not be in the input corpus. This paper focuses on the abstractive text summarization. This paper explains the overview of the various processes in abstractive text summarization. 
It includes data processing, word embedding, basic model architecture, training, and validation process and the paper narrates the current research in this field. It includes different types of architectures, attention mechanism, supervised and reinforcement learning, the pros and cons of different architecture. Systematic comparison of different text summarization models will provide the future direction of text summarization.", "title": "" }, { "docid": "neg:1840304_13", "text": "In line with cloud computing emergence as the dominant enterprise computing paradigm, our conceptualization of the cloud computing reference architecture and service construction has also evolved. For example, to address the need for cost reduction and rapid provisioning, virtualization has moved beyond hardware to containers. More recently, serverless computing or Function-as-a-Service has been presented as a means to introduce further cost-efficiencies, reduce configuration and management overheads, and rapidly increase an application's ability to speed up, scale up and scale down in the cloud. The potential of this new computation model is reflected in the introduction of serverless computing platforms by the main hyperscale cloud service providers. This paper provides an overview and multi-level feature analysis of seven enterprise serverless computing platforms. It reviews extant research on these platforms and identifies the emergence of AWS Lambda as a de facto base platform for research on enterprise serverless cloud computing. The paper concludes with a summary of avenues for further research.", "title": "" }, { "docid": "neg:1840304_14", "text": "View-based 3-D object retrieval and recognition has become popular in practice, e.g., in computer aided design. It is difficult to precisely estimate the distance between two objects represented by multiple views. Thus, current view-based 3-D object retrieval and recognition methods may not perform well. In this paper, we propose a hypergraph analysis approach to address this problem by avoiding the estimation of the distance between objects. In particular, we construct multiple hypergraphs for a set of 3-D objects based on their 2-D views. In these hypergraphs, each vertex is an object, and each edge is a cluster of views. Therefore, an edge connects multiple vertices. We define the weight of each edge based on the similarities between any two views within the cluster. Retrieval and recognition are performed based on the hypergraphs. Therefore, our method can explore the higher order relationship among objects and does not use the distance between objects. We conduct experiments on the National Taiwan University 3-D model dataset and the ETH 3-D object collection. Experimental results demonstrate the effectiveness of the proposed method by comparing with the state-of-the-art methods.", "title": "" }, { "docid": "neg:1840304_15", "text": "Apollonian circle packings arise by repeatedly filling the interstices between mutually tangent circles with further tangent circles. It is possible for every circle in such a packing to have integer radius of curvature, and we call such a packing an integral Apollonian circle packing. This paper studies number-theoretic properties of the set of integer curvatures appearing in such packings. 
Each Descartes quadruple of four tangent circles in the packing gives an integer solution to the Descartes equation, which relates the radii of curvature of four mutually tangent circles: x^2 + y^2 + z^2 + w^2 = (1/2)(x + y + z + w)^2. Each integral Apollonian circle packing is classified by a certain root quadruple of integers that satisfies the Descartes equation, and that corresponds to a particular quadruple of circles appearing in the packing. We express the number of root quadruples with fixed minimal element n as a class number, and give an exact formula for it. We study which integers occur in a given integer packing, and determine congruence restrictions which sometimes apply. We present evidence suggesting that the set of integer radii of curvatures that appear in an integral Apollonian circle packing has positive density, and in fact represents all sufficiently large integers not excluded by congruence conditions. Finally, we discuss asymptotic properties of the set of curvatures obtained as the packing is recursively constructed from a root quadruple.", "title": "" }, { "docid": "neg:1840304_16", "text": "Diagnosis Related Group (DRG) upcoding is an anomaly in healthcare data that costs hundreds of millions of dollars in many developed countries. DRG upcoding is typically detected through resource intensive auditing. As supervised modeling of DRG upcoding is severely constrained by scope and timeliness of past audit data, we propose in this paper an unsupervised algorithm to filter data for potential identification of DRG upcoding. The algorithm has been applied to a hip replacement/revision dataset and a heart-attack dataset. The results are consistent with the assumptions held by domain experts.", "title": "" }, { "docid": "neg:1840304_17", "text": "This paper studies efficient means in dealing with intracategory diversity in object detection. Strategies for occlusion and orientation handling are explored by learning an ensemble of detection models from visual and geometrical clusters of object instances. An AdaBoost detection scheme is employed with pixel lookup features for fast detection. The analysis provides insight into the design of a robust vehicle detection system, showing promise in terms of detection performance and orientation estimation accuracy.", "title": "" }, { "docid": "neg:1840304_18", "text": "The need for fine-grained power management in digital ICs has led to the design and implementation of compact, scalable low-dropout regulators (LDOs) embedded deep within logic blocks. While analog LDOs have traditionally been used in digital ICs, the need for digitally implementable LDOs embedded in digital functional units for ultrafine grained power management is paramount. This paper presents a fully-digital, phase locked LDO implemented in 32 nm CMOS. 
The control model of the proposed design has been provided and limits of stability have been shown. Measurement results with a resistive load as well as a digital load exhibit peak current efficiency of 98%.", "title": "" }, { "docid": "neg:1840304_19", "text": "Conservation and sustainable management of wetlands requires participation of local stakeholders, including communities. The Bigodi Wetland is unusual because it is situated in a common property landscape but the local community has been running a successful community-based natural resource management programme (CBNRM) for the wetland for over a decade. Whilst external visitors to the wetland provide ecotourism revenues we sought to quantify community benefits through the use of wetland goods such as firewood, plant fibres, and the like, and costs associated with wild animals damaging farming activities. We interviewed 68 households living close to the wetland and valued their cash and non-cash incomes from farming and collection of non-timber forest products (NTFPs) and water. The majority of households collected a wide variety of plant and fish resources and water from the wetland for household use and livestock. Overall, 53% of total household cash and non-cash income was from collected products, mostly the wetland, 28% from arable agriculture, 12% from livestock and 7% from employment and cash transfers. Female-headed households had lower incomes than male-headed ones, and with a greater reliance on NTFPs. Annual losses due to wildlife damage were estimated at 4.2% of total gross income. Most respondents felt that the wetland was important for their livelihoods, with more than 80% identifying health, education, craft materials and firewood as key benefits. Ninety-five percent felt that the wetland was in a good condition and that most residents observed the agreed CBNRM rules regarding use of the wetland. This study confirms the success of the locally run CBNRM processes underlying the significant role that the wetland plays in local livelihoods.", "title": "" } ]
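The Apollonian circle packing passage above (neg:1840304_15) hinges on the Descartes equation x^2 + y^2 + z^2 + w^2 = (1/2)(x + y + z + w)^2 and on root quadruples of integer curvatures. As a purely illustrative check, not drawn from any passage here, the short Python sketch below verifies the equation for the commonly cited root quadruple (-1, 2, 2, 3) and produces new integer curvatures by the standard reflection step, in which one curvature is replaced by twice the sum of the other three minus itself.

```python
# Illustrative sketch only; the quadruple (-1, 2, 2, 3) is an assumed example.

def is_descartes_quadruple(x, y, z, w):
    # Descartes equation: x^2 + y^2 + z^2 + w^2 = (1/2) * (x + y + z + w)^2
    return 2 * (x**2 + y**2 + z**2 + w**2) == (x + y + z + w) ** 2

def reflect(quad, i):
    # Replace curvature quad[i] by the other root of the Descartes quadratic:
    # k_i' = 2 * (sum of the other three) - k_i. Iterating grows the packing.
    q = list(quad)
    q[i] = 2 * (sum(q) - q[i]) - q[i]
    return tuple(q)

root = (-1, 2, 2, 3)
print(is_descartes_quadruple(*root))          # True
child = reflect(root, 0)                      # (15, 2, 2, 3)
print(child, is_descartes_quadruple(*child))  # still a Descartes quadruple
```

Because the reflection step only adds and subtracts integers, every quadruple reached from an integer root quadruple stays integral, which is the observation behind the integrality of these packings.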
1840305
CoBoLD — A bonding mechanism for modular self-reconfigurable mobile robots
[ { "docid": "pos:1840305_0", "text": "One of the primary impediments to building ensembles of modular robots is the complexity and number of mechanical mechanisms used to construct the individual modules. As part of the Claytronics project - which aims to build very large ensembles of modular robots - we investigate how to simplify each module by eliminating moving parts and reducing the number of mechanical mechanisms on each robot by using force-at-a-distance actuators. Additionally, we are also investigating the feasibility of using these unary actuators to improve docking performance, implement intermodule adhesion, power transfer, communication, and sensing. In this paper we describe our most recent results in the magnetic domain, including our first design sufficiently robust to operate reliably in groups greater than two modules. Our work should be seen as an extension of systems such as Fracta [9], and a contrasting line of inquiry to several other researchers' prior efforts that have used magnetic latching to attach modules to one another but relied upon a powered hinge [10] or telescoping mechanism [12] within each module to facilitate self-reconfiguration.", "title": "" }, { "docid": "pos:1840305_1", "text": "Robots used for tasks in space have strict requirements. Modular reconfigurable robots have a variety of attributes that are advantageous for these conditions including the ability to serve as many tools at once saving weight, packing into compressed forms saving space and having large redundancy to increase robustness. Self-reconfigurable systems can also self-repair as well as automatically adapt to changing conditions or ones that were not anticipated. PolyBot may serve well in the space manipulation and surface mobility class of space applications.", "title": "" } ]
[ { "docid": "neg:1840305_0", "text": "Molecular machine learning has been maturing rapidly over the last few years. Improved methods and the presence of larger datasets have enabled machine learning algorithms to make increasingly accurate predictions about molecular properties. However, algorithmic progress has been limited due to the lack of a standard benchmark to compare the efficacy of proposed methods; most new algorithms are benchmarked on different datasets making it challenging to gauge the quality of proposed methods. This work introduces MoleculeNet, a large scale benchmark for molecular machine learning. MoleculeNet curates multiple public datasets, establishes metrics for evaluation, and offers high quality open-source implementations of multiple previously proposed molecular featurization and learning algorithms (released as part of the DeepChem open source library). MoleculeNet benchmarks demonstrate that learnable representations are powerful tools for molecular machine learning and broadly offer the best performance. However, this result comes with caveats. Learnable representations still struggle to deal with complex tasks under data scarcity and highly imbalanced classification. For quantum mechanical and biophysical datasets, the use of physics-aware featurizations can be more important than choice of particular learning algorithm.", "title": "" }, { "docid": "neg:1840305_1", "text": "Affective image understanding has been extensively studied in the last decade since more and more users express emotion via visual contents. While current algorithms based on convolutional neural networks aim to distinguish emotional categories in a discrete label space, the task is inherently ambiguous. This is mainly because emotional labels with the same polarity (i.e., positive or negative) are highly related, which is different from concrete object concepts such as cat, dog and bird. To the best of our knowledge, few methods focus on leveraging such characteristic of emotions for affective image understanding. In this work, we address the problem of understanding affective images via deep metric learning and propose a multi-task deep framework to optimize both retrieval and classification goals. We propose the sentiment constraints adapted from the triplet constraints, which are able to explore the hierarchical relation of emotion labels. We further exploit the sentiment vector as an effective representation to distinguish affective images utilizing the texture representation derived from convolutional layers. Extensive evaluations on four widely-used affective datasets, i.e., Flickr and Instagram, IAPSa, Art Photo, and Abstract Paintings, demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods on both affective image retrieval and classification tasks.", "title": "" }, { "docid": "neg:1840305_2", "text": "Providing accurate information about human's state, activity is one of the most important elements in Ubiquitous Computing. Various applications can be enabled if one's state, activity can be recognized. Due to the low deployment cost, non-intrusive sensing nature, Wi-Fi based activity recognition has become a promising, emerging research area. In this paper, we survey the state-of-the-art of the area from four aspects ranging from historical overview, theories, models, key techniques to applications. 
In addition to the summary about the principles, achievements of existing work, we also highlight some open issues, research directions in this emerging area.", "title": "" }, { "docid": "neg:1840305_3", "text": "Several results appeared that show significant reduction in time for matrix multiplication, singular value decomposition as well as linear (lscr2) regression, all based on data dependent random sampling. Our key idea is that low dimensional embeddings can be used to eliminate data dependence and provide more versatile, linear time pass efficient matrix computation. Our main contribution is summarized as follows. 1) Independent of the results of Har-Peled and of Deshpande and Vempala, one of the first - and to the best of our knowledge the most efficient - relative error (1 + epsi) parA $AkparF approximation algorithms for the singular value decomposition of an m times n matrix A with M non-zero entries that requires 2 passes over the data and runs in time O((M(k/epsi+k log k) + (n+m)(k/epsi+k log k)2)log (1/sigma)). 2) The first o(nd2) time (1 + epsi) relative error approximation algorithm for n times d linear (lscr2) regression. 3) A matrix multiplication and norm approximation algorithm that easily applies to implicitly given matrices and can be used as a black box probability boosting tool", "title": "" }, { "docid": "neg:1840305_4", "text": "Smart cities are struggling with using public space efficiently and decreasing pollution at the same time. For this governments have embraced smart parking initiatives, which should result in a high utilization of public space and minimization of the driving, in this way reducing the emissions of cars. Yet, simply opening data about the availability of public spaces results in more congestions as multiple cars might be heading for the same parking space. In this work, we propose a Multiple Criteria based Parking space Reservation (MCPR) algorithm, for reserving a space for a user to deal with parking space in a fair way. Users' requirements are the main driving factor for the algorithm and used as criteria in MCPR. To evaluate the algorithm, simulations for three set of user preferences were made. The simulation results show that the algorithm satisfied the users' request fairly for all the three preferences. The algorithm helps users automatically to find a parking space according to the users' requirements. The algorithm can be used in a smart parking system to search for a parking space on behalf of user and send parking space information to the user.", "title": "" }, { "docid": "neg:1840305_5", "text": "An assessment of Herman and Chomsky’s 1988 five-filter propaganda model suggests it is mainly valuable for identifying areas in which researchers should look for evidence of collaboration (whether intentional or otherwise) between mainstream media and the propaganda aims of the ruling establishment. The model does not identify methodologies for determining the relative weight of independent filters in different contexts, something that would be useful in its future development. There is a lack of precision in the characterization of some of the filters. The model privileges the structural factors that determine propagandized news selection, and therefore eschews or marginalizes intentionality. This paper extends the model to include the “buying out” of journalists or their publications by intelligence and related special interest organizations. 
It applies the extended six-filter model to controversies over reporting by The New York Times of the build-up towards the US invasion of Iraq in 2003, the issue of weapons of mass destruction in general, and the reporting of The New York Times correspondent Judith Miller in particular, in the context of broader critiques of US mainstream media war coverage. The controversies helped elicit evidence of the operation of some filters of the propaganda model, including dependence on official sources, fear of flak, and ideological convergence. The paper finds that the filter of routine news operations needs to be counterbalanced by its opposite, namely non-routine abuses of standard operating procedures. While evidence of the operation of other filters was weaker, this is likely due to difficulties of observability, as there are powerful deductive reasons for maintaining all six filters within the framework of media propaganda analysis.", "title": "" }, { "docid": "neg:1840305_6", "text": "While humor has been historically studied from a psychological, cognitive and linguistic standpoint, its study from a computational perspective is an area yet to be explored in Computational Linguistics. There exist some previous works, but a characterization of humor that allows its automatic recognition and generation is far from being specified. In this work we build a crowdsourced corpus of labeled tweets, annotated according to its humor value, letting the annotators subjectively decide which are humorous. A humor classifier for Spanish tweets is assembled based on supervised learning, reaching a precision of 84% and a recall of 69%.", "title": "" }, { "docid": "neg:1840305_7", "text": "Empirical, hypothesis-driven, experimentation is at the heart of the scientific discovery process and has become commonplace in human-factors related fields. To enable the integration of visual analytics in such experiments, we introduce VEEVVIE, the Visual Explorer for Empirical Visualization, VR and Interaction Experiments. VEEVVIE is comprised of a back-end ontology which can model several experimental designs encountered in these fields. This formalization allows VEEVVIE to capture experimental data in a query-able form and makes it accessible through a front-end interface. This front-end offers several multi-dimensional visualization widgets with built-in filtering and highlighting functionality. VEEVVIE is also expandable to support custom experimental measurements and data types through a plug-in visualization widget architecture. We demonstrate VEEVVIE through several case studies of visual analysis, performed on the design and data collected during an experiment on the scalability of high-resolution, immersive, tiled-display walls.", "title": "" }, { "docid": "neg:1840305_8", "text": "Storage systems rely on maintenance tasks, such as backup and layout optimization, to ensure data availability and good performance. These tasks access large amounts of data and can significantly impact foreground applications. We argue that storage maintenance can be performed more efficiently by prioritizing processing of data that is currently cached in memory. Data can be cached either due to other maintenance tasks requesting it previously, or due to overlapping foreground I/O activity.\n We present Duet, a framework that provides notifications about page-level events to maintenance tasks, such as a page being added or modified in memory. Tasks use these events as hints to opportunistically process cached data. 
We show that tasks using Duet can complete maintenance work more efficiently because they perform fewer I/O operations. The I/O reduction depends on the amount of data overlap with other maintenance tasks and foreground applications. Consequently, Duet's efficiency increases with additional tasks because opportunities for synergy appear more often.", "title": "" }, { "docid": "neg:1840305_9", "text": "Financial and capital markets (especially stock markets) are considered high return investment fields, which in the same time are dominated by uncertainty and volatility. Stock market prediction tries to reduce this uncertainty and consequently the risk. As stock markets are influenced by many economical, political and even psychological factors, it is very difficult to forecast the movement of future values. Since classical statistical methods (primarily technical and fundamental analysis) are unable to deal with the non-linearity in the dataset, thus it became necessary the utilization of more advanced forecasting procedures. Financial prediction is a research active area and neural networks have been proposed as one of the most promising methods for such predictions. Artificial Neural Networks (ANNs) mimics, simulates the learning capability of the human brain. NNs are able to find accurate solutions in a complex, noisy environment or even to deal efficiently with partial information. In the last decade the ANNs have been widely used for predicting financial markets, because they are capable to detect and reproduce linear and nonlinear relationships among a set of variables. Furthermore they have a potential of learning the underlying mechanics of stock markets, i.e. to capture the complex dynamics and non-linearity of the stock market time series. In this paper, study we will get acquainted with some financial time series analysis concepts and theories linked to stock markets, as well as with the neural networks based systems and hybrid techniques that were used to solve several forecasting problems concerning the capital, financial and stock markets. Putting the foregoing experimental results to use, we will develop, implement a multilayer feedforward neural network based financial time series forecasting system. Thus, this system will be used to predict the future index values of major US and European stock exchanges and the evolution of interest rates as well as the future stock price of some US mammoth companies (primarily from IT branch).", "title": "" }, { "docid": "neg:1840305_10", "text": "A planar antenna with a broadband feeding structure is presented and analyzed for ultrawideband applications. The proposed antenna consists of a suspended radiator fed by an n-shape microstrip feed. Study shows that this antenna achieves an impedance bandwidth from 3.1-5.1 GHz (48%) for a reflection of coefficient of iotaS11iota < -10 dB, and an average gain of 7.7 dBi. Stable boresight radiation patterns are achieved across the entire operating frequency band, by suppressing the high order mode resonances. This design exhibits good mechanical tolerance and manufacturability.", "title": "" }, { "docid": "neg:1840305_11", "text": "A fundamental, and still largely unanswered, question in the context of Generative Adversarial Networks (GANs) is whether GANs are actually able to capture the key characteristics of the datasets they are trained on. 
The current approaches to examining this issue require significant human supervision, such as visual inspection of sampled images, and often offer only fairly limited scalability. In this paper, we propose new techniques that employ a classification–based perspective to evaluate synthetic GAN distributions and their capability to accurately reflect the essential properties of the training data. These techniques require only minimal human supervision and can easily be scaled and adapted to evaluate a variety of state-of-the-art GANs on large, popular datasets. Our analysis indicates that GANs have significant problems in reproducing the more distributional properties of the training dataset. In particular, when seen through the lens of classification, the diversity of GAN data is orders of magnitude less than that of the original data.", "title": "" }, { "docid": "neg:1840305_12", "text": "The demands on dielectric material measurements have increased over the years as electrical components have been miniaturized and device frequency bands have increased. Well-characterized dielectric measurements on thin materials are needed for circuit design, minimization of crosstalk, and characterization of signal-propagation speed. Bulk material applications have also increased. For accurate dielectric measurements, each measurement band and material geometry requires specific fixtures. Engineers and researchers must carefully match their material system and uncertainty requirements to the best available measurement system. Broadband measurements require transmission-line methods, and accurate measurements on low-loss materials are performed in resonators. The development of the most accurate methods for each application requires accurate fixture selection in terms of field geometry, accurate field models, and precise measurement apparatus.", "title": "" }, { "docid": "neg:1840305_13", "text": "This paper presents a 40 Gb/s serial-link receiver including an adaptive equalizer and a CDR circuit. A parallel-path equalizing filter is used to compensate the high-frequency loss in copper cables. The adaptation is performed by only varying the gain in the high-pass path, which allows a single loop for proper control and completely removes the RC filters used for separately extracting the high- and low-frequency contents of the signal. A full-rate bang-bang phase detector with only five latches is proposed in the following CDR circuit. Minimizing the number of latches saves the power consumption and the area occupied by inductors. The performance is also improved by avoiding complicated routing of high-frequency signals. The receiver is able to recover 40 Gb/s data passing through a 4 m cable with 10 dB loss at 20 GHz. For an input PRBS of 2 7-1, the recovered clock jitter is 0.3 psrms and 4.3 pspp. The retimed data exhibits 500 mV pp output swing and 9.6 pspp jitter with BER <10-12. Fabricated in 90 nm CMOS technology, the receiver consumes 115 mW , of which 58 mW is dissipated in the equalizer and 57 mW in the CDR.", "title": "" }, { "docid": "neg:1840305_14", "text": "In recent years, unfolding iterative algorithms as neural networks has become an empirical success in solving sparse recovery problems. However, its theoretical understanding is still immature, which prevents us from fully utilizing the power of neural networks. In this work, we study unfolded ISTA (Iterative Shrinkage Thresholding Algorithm) for sparse signal recovery. 
We introduce a weight structure that is necessary for asymptotic convergence to the true sparse signal. With this structure, unfolded ISTA can attain a linear convergence, which is better than the sublinear convergence of ISTA/FISTA in general cases. Furthermore, we propose to incorporate thresholding in the network to perform support selection, which is easy to implement and able to boost the convergence rate both theoretically and empirically. Extensive simulations, including sparse vector recovery and a compressive sensing experiment on real image data, corroborate our theoretical results and demonstrate their practical usefulness. We have made our codes publicly available.2.", "title": "" }, { "docid": "neg:1840305_15", "text": "This paper presents a model which generates architectural layout for a single flat having regular shaped spaces; Bedroom, Bathroom, Kitchen, Balcony, Living and Dining Room. Using constraints at two levels; Topological (Adjacency, Compactness, Vaastu, and Open and Closed face constraints) and Dimensional (Length to Width ratio constraint), Genetic Algorithms have been used to generate the topological arrangement of spaces in the layout and further if required, feasibility have been dimensionally analyzed. Further easy evacuation form the selected layout in case of adversity has been proposed using Dijkstra's Algorithm. Later the proposed model has been tested for efficiency using various test cases. This paper also presents a classification and categorization of various problems of space planning.", "title": "" }, { "docid": "neg:1840305_16", "text": "Learning word embeddings on large unlabeled corpus has been shown to be successful in improving many natural language tasks. The most efficient and popular approaches learn or retrofit such representations using additional external data. Resulting embeddings are generally better than their corpus-only counterparts, although such resources cover a fraction of words in the vocabulary. In this paper, we propose a new approach, Dict2vec, based on one of the largest yet refined datasource for describing words – natural language dictionaries. Dict2vec builds new word pairs from dictionary entries so that semantically-related words are moved closer, and negative sampling filters out pairs whose words are unrelated in dictionaries. We evaluate the word representations obtained using Dict2vec on eleven datasets for the word similarity task and on four datasets for a text classification task.", "title": "" }, { "docid": "neg:1840305_17", "text": "This paper presents a high-level hand feature extraction method for real-time gesture recognition. Firstly, the fingers are modelled as cylindrical objects due to their parallel edge feature. Then a novel algorithm is proposed to directly extract fingers from salient hand edges. Considering the hand geometrical characteristics, the hand posture is segmented and described based on the finger positions, palm center location and wrist position. A weighted radial projection algorithm with the origin at the wrist position is applied to localize each finger. The developed system can not only extract extensional fingers but also flexional fingers with high accuracy. Furthermore, hand rotation and finger angle variation have no effect on the algorithm performance. The orientation of the gesture can be calculated without the aid of arm direction and it would not be disturbed by the bare arm area. 
Experiments have been performed to demonstrate that the proposed method can directly extract high-level hand features and estimate hand poses in real-time. © 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "neg:1840305_18", "text": "Recent work has demonstrated the value of social media monitoring for health surveillance (e.g., tracking influenza or depression rates). It is an open question whether such data can be used to make causal inferences (e.g., determining which activities lead to increased depression rates). Even in traditional, restricted domains, estimating causal effects from observational data is highly susceptible to confounding bias. In this work, we estimate the effect of exercise on mental health from Twitter, relying on statistical matching methods to reduce confounding bias. We train a text classifier to estimate the volume of a user’s tweets expressing anxiety, depression, or anger, then compare two groups: those who exercise regularly (identified by their use of physical activity trackers like Nike+), and a matched control group. We find that those who exercise regularly have significantly fewer tweets expressing depression or anxiety; there is no significant difference in rates of tweets expressing anger. We additionally perform a sensitivity analysis to investigate how the many experimental design choices in such a study impact the final conclusions, including the quality of the classifier and the construction of the control group.", "title": "" }, { "docid": "neg:1840305_19", "text": "F2FS is a Linux file system designed to perform well on modern flash storage devices. The file system builds on append-only logging and its key design decisions were made with the characteristics of flash storage in mind. This paper describes the main design ideas, data structures, algorithms and the resulting performance of F2FS. Experimental results highlight the desirable performance of F2FS; on a state-of-the-art mobile system, it outperforms EXT4 under synthetic workloads by up to 3.1× (iozone) and 2× (SQLite). It reduces elapsed time of several realistic workloads by up to 40%. On a server system, F2FS is shown to perform better than EXT4 by up to 2.5× (SATA SSD) and 1.8× (PCIe SSD).", "title": "" } ]
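The unfolded ISTA passage in this list (neg:1840305_14) starts from the classical ISTA iteration for sparse recovery, which alternates a gradient step with soft-thresholding. The NumPy sketch below is a hedged illustration of that baseline iteration only: the learned, unfolded variants discussed in the passage replace the fixed step size and matrices with trained, per-layer weights, and the problem sizes and parameters here are invented for demonstration.

```python
import numpy as np

def soft_threshold(v, theta):
    # Proximal operator of the L1 norm, applied element-wise.
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def ista(A, y, lam=0.1, n_iter=500):
    # Classical ISTA for min_x 0.5 * ||y - A x||^2 + lam * ||x||_1.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
    return x

# Tiny demo: recover a sparse vector from noisy compressed measurements.
rng = np.random.default_rng(0)
n, m, k = 50, 100, 5
A = rng.normal(size=(n, m)) / np.sqrt(n)
x_true = np.zeros(m)
x_true[rng.choice(m, size=k, replace=False)] = rng.normal(size=k)
y = A @ x_true + 0.01 * rng.normal(size=n)
x_hat = ista(A, y, lam=0.05)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 1e-3))
```

In a learned unfolding, the fixed quantities above (the matrices applied to x and y, and the thresholds) become trainable parameters of successive network layers, which is what allows the faster convergence the passage refers to.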
1840306
The relationship between social network usage and some personality traits
[ { "docid": "pos:1840306_0", "text": "The explosion in social networking sites such as MySpace, Facebook, Bebo and Friendster is widely regarded as an exciting opportunity, especially for youth. Yet the public response tends to be one of puzzled dismay regarding, supposedly, a generation with many friends but little sense of privacy and a narcissistic fascination with self-display. This article explores teenagers” practices of social networking in order to uncover the subtle connections between online opportunity and risk. While younger teenagers relish the opportunities to continuously recreate a highly decorated, stylistically elaborate identity, older teenagers favour a plain aesthetic that foregrounds their links to others, thus expressing a notion of identity lived through authentic relationships. The article further contrasts teenagers” graded conception of “friends” with the binary 1 Published as Livingstone, S. (2008) Taking risky opportunities in youthful content creation: teenagers’ use of social networking sites for intimacy, privacy and self-expression. New Media & Society, 10(3): 393-411. Available in Sage Journal Online (Sage Publications Ltd. – All rights reserved): http://nms.sagepub.com/content/10/3/393.abstract 2 Thanks to the Research Council of Norway for funding the Mediatized Stories: Mediation Perspectives On Digital Storytelling Among Youth of which this project is part. I also thank David Brake, Shenja van der Graaf, Angela Jones, Ellen Helsper, Maria Kyriakidou, Annie Mullins, Toshie Takahashi, and two anonymous reviewers for their comments on an earlier version of this article. Last, thanks to the teenagers who participated in this project. 3 Sonia Livingstone is Professor of Social Psychology in the Department of Media and Communications at the London School of Economics and Political Science. She is author or editor of ten books and 100+ academic articles and chapters in the fields of media audiences, children and the internet, domestic contexts of media use and media literacy. Recent books include Young People and New Media (Sage, 2002), The Handbook of New Media (edited, with Leah Lievrouw, Sage, 2006), and Public Connection? Media Consumption and the Presumption of Attention (with Nick Couldry and Tim Markham, Palgrave, 2007). She currently directs the thematic research network, EU Kids Online, for the EC’s Safer Internet Plus programme. Email s.livingstone@lse.ac.uk", "title": "" }, { "docid": "pos:1840306_1", "text": "Children and adolescents now communicate online to form and/or maintain relationships with friends, family, and strangers. Relationships in \"real life\" are important for children's and adolescents' psychosocial development; however, they can be difficult for those who experience feelings of loneliness and/or social anxiety. The aim of this study was to investigate differences in usage of online communication patterns between children and adolescents with and without self-reported loneliness and social anxiety. Six hundred twenty-six students ages 10 to 16 years completed a survey on the amount of time they spent communicating online, the topics they discussed, the partners they engaged with, and their purposes for communicating over the Internet. Participants were administered a shortened version of the UCLA Loneliness Scale and an abbreviated subscale of the Social Anxiety Scale for Adolescents (SAS-A). Additionally, age and gender differences in usage of the online communication patterns were examined across the entire sample. 
Findings revealed that children and adolescents who self-reported being lonely communicated online significantly more frequently about personal and intimate topics than did those who did not self-report being lonely. The former were motivated to use online communication significantly more frequently to compensate for their weaker social skills to meet new people. Results suggest that Internet usage allows them to fulfill critical needs of social interactions, self-disclosure, and identity exploration. Future research, however, should explore whether or not the benefits derived from online communication may also facilitate lonely children's and adolescents' offline social relationships.", "title": "" } ]
[ { "docid": "neg:1840306_0", "text": "Airway pressure limitation is now a largely accepted strategy in adult respiratory distress syndrome (ARDS) patients; however, some debate persists about the exact level of plateau pressure which can be safely used. The objective of the present study was to examine if the echocardiographic evaluation of right ventricular function performed in ARDS may help to answer to this question. For more than 20 years, we have regularly monitored right ventricular function by echocardiography in ARDS patients, during two different periods, a first (1980–1992) where airway pressure was not limited, and a second (1993–2006) where airway pressure was limited. By pooling our data, we can observe the effect of a large range of plateau pressure upon mortality rate and incidence of acute cor pulmonale. In this whole group of 352 ARDS patients, mortality rate and incidence of cor pulmonale were 80 and 56%, respectively, when plateau pressure was > 35 cmH2O; 42 and 32%, respectively, when plateau pressure was between 27 and 35 cmH2O; and 30 and 13%, respectively, when plateau pressure was < 27 cmH2O. Moreover, a clear interaction between plateau pressure and cor pulmonale was evidenced: whereas the odd ratio of dying for an increase in plateau pressure from 18–26 to 27–35 cm H2O in patients without cor pulmonale was 1.05 (p = 0.635), it was 3.32 in patients with cor pulmonale (p < 0.034). We hypothesize that monitoring of right ventricular function by echocardiography at bedside might help to control the safety of plateau pressure used in ARDS.", "title": "" }, { "docid": "neg:1840306_1", "text": "The composition of fatty acids in the diets of both human and domestic animal species can regulate inflammation through the biosynthesis of potent lipid mediators. The substrates for lipid mediator biosynthesis are derived primarily from membrane phospholipids and reflect dietary fatty acid intake. Inflammation can be exacerbated with intake of certain dietary fatty acids, such as some ω-6 polyunsaturated fatty acids (PUFA), and subsequent incorporation into membrane phospholipids. Inflammation, however, can be resolved with ingestion of other fatty acids, such as ω-3 PUFA. The influence of dietary PUFA on phospholipid composition is influenced by factors that control phospholipid biosynthesis within cellular membranes, such as preferential incorporation of some fatty acids, competition between newly ingested PUFA and fatty acids released from stores such as adipose, and the impacts of carbohydrate metabolism and physiological state. The objective of this review is to explain these factors as potential obstacles to manipulating PUFA composition of tissue phospholipids by specific dietary fatty acids. A better understanding of the factors that influence how dietary fatty acids can be incorporated into phospholipids may lead to nutritional intervention strategies that optimize health.", "title": "" }, { "docid": "neg:1840306_2", "text": "The recent years have witnessed the rise of accurate but obscure decision systems which hide the logic of their internal decision processes to the users. The lack of explanations for the decisions of black box systems is a key ethical issue, and a limitation to the adoption of machine learning components in socially sensitive and safety-critical contexts. In this paper we focus on the problem of black box outcome explanation, i.e., explaining the reasons of the decision taken on a specific instance. 
We propose LORE, an agnostic method able to provide interpretable and faithful explanations. LORE first learns a local interpretable predictor on a synthetic neighborhood generated by a genetic algorithm. Then it derives from the logic of the local interpretable predictor a meaningful explanation consisting of: a decision rule, which explains the reasons of the decision; and a set of counterfactual rules, suggesting the changes in the instance’s features that lead to a different outcome. Wide experiments show that LORE outperforms existing methods and baselines both in the quality of explanations and in the accuracy in mimicking the black box.", "title": "" }, { "docid": "neg:1840306_3", "text": "A novel type of dual-mode microstrip bandpass filter using degenerate modes of a meander loop resonator has been developed for miniaturization of high selectivity narrowband microwave bandpass filters. A filter of this type having a 2.5% bandwidth at 1.58 GHz was designed and fabricated. The measured filter performance is presented.", "title": "" }, { "docid": "neg:1840306_4", "text": "A low-power consumption, small-size smart antenna, named electronically steerable parasitic array radiator (ESPAR), has been designed. Beamforming is achieved by tuning the load reactances at parasitic elements surrounding the active central element. A fast beamforming algorithm based on simultaneous perturbation stochastic approximation with a maximum cross correlation coefficient criterion is proposed. The simulation and experimental results validate the algorithm. In an environment where the signal-to-interference-ratio is 0 dB, the algorithm converges within 50 iterations and achieves an output signal-to-interference-plus-noise-ratio of 10 dB. With the fast beamforming ability and its low-power consumption attribute, the ESPAR antenna makes the mass deployment of smart antenna technologies practical.", "title": "" }, { "docid": "neg:1840306_5", "text": "Social entrepreneurship has raised increasing interest among scholars, yet we still know relatively little about the particular dynamics and processes involved. This paper aims at contributing to the field of social entrepreneurship by clarifying key elements, providing working definitions, and illuminating the social entrepreneurship process. In the first part of the paper we review the existing literature. In the second part we develop a model on how intentions to create a social venture –the tangible outcome of social entrepreneurship– get formed. Combining insights from traditional entrepreneurship literature and anecdotal evidence in the field of social entrepreneurship, we propose that behavioral intentions to create a social venture are influenced, first, by perceived social venture desirability, which is affected by attitudes such as empathy and moral judgment, and second, by perceived social venture feasibility, which is facilitated by social support and self-efficacy beliefs.", "title": "" }, { "docid": "neg:1840306_6", "text": "Entailment recognition is a primary generic task in natural language inference, whose focus is to detect whether the meaning of one expression can be inferred from the meaning of the other. Accordingly, many NLP applications would benefit from high coverage knowledgebases of paraphrases and entailment rules. To this end, learning such knowledgebases from the Web is especially appealing due to its huge size as well as its highly heterogeneous content, allowing for a more scalable rule extraction of various domains. 
However, the scalability of state-of-the-art entailment rule acquisition approaches from the Web is still limited. We present a fully unsupervised learning algorithm for Webbased extraction of entailment relations. We focus on increased scalability and generality with respect to prior work, with the potential of a large-scale Web-based knowledgebase. Our algorithm takes as its input a lexical–syntactic template and searches the Web for syntactic templates that participate in an entailment relation with the input template. Experiments show promising results, achieving performance similar to a state-of-the-art unsupervised algorithm, operating over an offline corpus, but with the benefit of learning rules for different domains with no additional effort.", "title": "" }, { "docid": "neg:1840306_7", "text": "Poly(vinyl alcohol) cryogel, PVA-C, is presented as a tissue-mimicking material, suitable for application in magnetic resonance (MR) imaging and ultrasound imaging. A 10% by weight poly(vinyl alcohol) in water solution was used to form PVA-C, which is solidified through a freeze-thaw process. The number of freeze-thaw cycles affects the properties of the material. The ultrasound and MR imaging characteristics were investigated using cylindrical samples of PVA-C. The speed of sound was found to range from 1520 to 1540 m s(-1), and the attenuation coefficients were in the range of 0.075-0.28 dB (cm MHz)(-1). T1 and T2 relaxation values were found to be 718-1034 ms and 108-175 ms, respectively. We also present applications of this material in an anthropomorphic brain phantom, a multi-volume stenosed vessel phantom and breast biopsy phantoms. Some suggestions are made for how best to handle this material in the phantom design and development process.", "title": "" }, { "docid": "neg:1840306_8", "text": "In this paper, we show how an open-source, language-independent proofreading tool has been built. Many languages lack contextual proofreading tools; for many others, only partial solutions are available. Using existing, largely language-independent tools and collaborative processes it is possible to develop a practical style and grammar checker and to fight the digital divide in countries where commercial linguistic application software is unavailable or too expensive for average users. The described solution depends on relatively easily available language resources and does not require a fully formalized grammar nor a deep parser, yet it can detect many frequent context-dependent spelling mistakes, as well as grammatical, punctuation, usage, and stylistic errors. Copyright q 2010 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "neg:1840306_9", "text": "The mission of the IPTS is to provide customer-driven support to the EU policy-making process by researching science-based responses to policy challenges that have both a socioeconomic and a scientific or technological dimension. Legal Notice Neither the European Commission nor any person acting on behalf of the Commission is responsible for the use which might be made of this publication. (*) Certain mobile telephone operators do not allow access to 00 800 numbers or these calls may be billed.", "title": "" }, { "docid": "neg:1840306_10", "text": "This paper discusses verification and validation of simulation models. 
The different approaches to deciding model validity are presented; how model verification and validation relate to the model development process are discussed; various validation techniques are defined; conceptual model validity, model verification, operational validity, and data validity are described; ways to document results are given; and a recommended procedure is presented.", "title": "" }, { "docid": "neg:1840306_11", "text": "Policy languages (such as privacy and rights) have had little impact on the wider community. Now that Social Networks have taken off, the need to revisit Policy languages and realign them towards Social Networks requirements has become more apparent. One such language is explored as to its applicability to the Social Networks masses. We also argue that policy languages alone are not sufficient and thus they should be paired with reasoning mechanisms to provide precise and unambiguous execution models of the policies. To this end we propose a computationally oriented model to represent, reason with and execute policies for Social Networks.", "title": "" }, { "docid": "neg:1840306_12", "text": "We describe a method for automatically transcribing guitar tablatures from audio signals in accordance with the player's proficiency for use as support for a guitar player's practice. The system estimates the multiple pitches in each time frame and the optimal fingering considering playability and player's proficiency. It combines a conventional multipitch estimation method with a basic dynamic programming method. The difficulty of the fingerings can be changed by tuning the parameter representing the relative weights of the acoustical reproducibility and the fingering easiness. Experiments conducted using synthesized guitar audio signals to evaluate the transcribed tablatures in terms of the multipitch estimation accuracy and fingering easiness demonstrated that the system can simplify the fingering with higher precision of multipitch estimation results than the conventional method.", "title": "" }, { "docid": "neg:1840306_13", "text": "With the rapid advance of mobile computing technology and wireless networking, there has been a significant increase in mobile subscriptions. This drives a strong demand for mobile cloud applications and services for mobile device users. This brings out a great business and research opportunity in mobile cloud computing (MCC). This paper first discusses the market trend and related business driving forces and opportunities. Then it presents an overview of MCC in terms of its concepts, distinct features, research scope and motivations, as well as advantages and benefits. Moreover, it discusses its opportunities, issues and challenges. Furthermore, the paper highlights a research roadmap for MCC.", "title": "" }, { "docid": "neg:1840306_14", "text": "Within the past two years, important advances have been made in modeling credit risk at the portfolio level. Practitioners and policy makers have invested in implementing and exploring a variety of new models individually. Less progress has been made, however, with comparative analyses. Direct comparison often is not straightforward, because the different models may be presented within rather different mathematical frameworks. This paper offers a comparative anatomy of two especially influential benchmarks for credit risk models, J.P. Morgan’s CreditMetrics and Credit Suisse Financial Product’s CreditRisk+. 
We show that, despite differences on the surface, the underlying mathematical structures are similar. The structural parallels provide intuition for the relationship between the two models and allow us to describe quite precisely where the models differ in functional form, distributional assumptions, and reliance on approximation formulae. We then design simulation exercises which evaluate the effect of each of these differences individually. JEL Codes: G31, C15, G11 ∗The views expressed herein are my own and do not necessarily reflect those of the Board of Governors or its staff. I would like to thank David Jones for drawing my attention to this issue, and for his helpful comments. I am also grateful to Mark Carey for data and advice useful in calibration of the models, and to Chris Finger and Tom Wilde for helpful comments. Please address correspondence to the author at Division of Research and Statistics, Mail Stop 153, Federal Reserve Board, Washington, DC 20551, USA. Phone: (202)452-3705. Fax: (202)452-5295. Email: 〈mgordy@frb.gov〉. Over the past decade, financial institutions have developed and implemented a variety of sophisticated models of value-at-risk for market risk in trading portfolios. These models have gained acceptance not only among senior bank managers, but also in amendments to the international bank regulatory framework. Much more recently, important advances have been made in modeling credit risk in lending portfolios. The new models are designed to quantify credit risk on a portfolio basis, and thus have application in control of risk concentration, evaluation of return on capital at the customer level, and more active management of credit portfolios. Future generations of today’s models may one day become the foundation for measurement of regulatory capital adequacy. Two of the models, J.P. Morgan’s CreditMetrics and Credit Suisse Financial Product’s CreditRisk+, have been released freely to the public since 1997 and have quickly become influential benchmarks. Practitioners and policy makers have invested in implementing and exploring each of the models individually, but have made less progress with comparative analyses. The two models are intended to measure the same risks, but impose different restrictions and distributional assumptions, and suggest different techniques for calibration and solution. Thus, given the same portfolio of credit exposures, the two models will, in general, yield differing evaluations of credit risk. Determining which features of the models account for differences in output would allow us a better understanding of the sensitivity of the models to the particular assumptions they employ. Unfortunately, direct comparison of the models is not straightforward, because the two models are presented within rather different mathematical frameworks. The CreditMetrics model is familiar to econometricians as an ordered probit model. Credit events are driven by movements in underlying unobserved latent variables. The latent variables are assumed to depend on external “risk factors.” Common dependence on the same risk factors gives rise to correlations in credit events across obligors. The CreditRisk+ model is based instead on insurance industry models of event risk. Instead of a latent variable, each obligor has a default probability. The default probabilities are not constant over time, but rather increase or decrease in response to background macroeconomic factors. 
To the extent that two obligors are sensitive to the same set of background factors, their default probabilities will move together. These co-movements in probability give rise to correlations in defaults. CreditMetrics and CreditRisk+ may serve essentially the same function, but they appear to be constructed quite differently. This paper offers a comparative anatomy of CreditMetrics and CreditRisk+. We show that, despite differences on the surface, the underlying mathematical structures are similar. The structural parallels provide intuition for the relationship between the two models and allow us to describe quite precisely where the models differ in functional form, distributional assumptions, and reliance on approximation formulae. We can then design simulation exercises which evaluate the effect of these differences individually. We proceed as follows. Section 1 presents a summary of the CreditRisk+ model, and introduces a restricted version of CreditMetrics. The restrictions are imposed to facilitate direct comparison of CreditMetrics and CreditRisk+. While some of the richness of the full CreditMetrics implementation is sacrificed, the essential mathematical characteristics of the model are preserved. Our", "title": "" }, { "docid": "neg:1840306_15", "text": "Municipal solid waste is a major challenge facing developing countries [1]. Amount of waste generated by developing countries is increasing as a result of urbanisation and economic growth [2]. In Africa and other developing countries waste is disposed of in poorly managed landfills, controlled and uncontrolled dumpsites increasing environmental health risks [3]. Households have a major role to play in reducing the amount of waste sent to landfills [4]. Recycling is accepted by developing and developed countries as one of the best solution in municipal solid waste management [5]. Households influence the quality and amount of recyclable material recovery [1]. Separation of waste at source can reduce contamination of recyclable waste material. Households are the key role players in ensuring that waste is separated at source and their willingness to participate in source separation of waste should be encouraged by municipalities and local regulatory authorities [6,7].", "title": "" }, { "docid": "neg:1840306_16", "text": "We have collected a new face data set that will facilitate research in the problem of frontal to profile face verification `in the wild'. The aim of this data set is to isolate the factor of pose variation in terms of extreme poses like profile, where many features are occluded, along with other `in the wild' variations. We call this data set the Celebrities in Frontal-Profile (CFP) data set. We find that human performance on Frontal-Profile verification in this data set is only slightly worse (94.57% accuracy) than that on Frontal-Frontal verification (96.24% accuracy). However we evaluated many state-of-the-art algorithms, including Fisher Vector, Sub-SML and a Deep learning algorithm. We observe that all of them degrade more than 10% from Frontal-Frontal to Frontal-Profile verification. The Deep learning implementation, which performs comparable to humans on Frontal-Frontal, performs significantly worse (84.91% accuracy) on Frontal-Profile. 
This suggests that there is a gap between human performance and automatic face recognition methods for large pose variation in unconstrained images.", "title": "" }, { "docid": "neg:1840306_17", "text": "In grid connected photovoltaic (PV) systems, maximum power point tracking (MPPT) algorithm plays an important role in optimizing the solar energy efficiency. In this paper, the new artificial neural network (ANN) based MPPT method has been proposed for searching maximum power point (MPP) fast and exactly. For the first time, the combined method is proposed, which is established on the ANN-based PV model method and incremental conductance (IncCond) method. The advantage of ANN-based PV model method is the fast MPP approximation base on the ability of ANN according the parameters of PV array that used. The advantage of IncCond method is the ability to search the exactly MPP based on the feedback voltage and current but don't care the characteristic on PV array‥ The effectiveness of proposed algorithm is validated by simulation using Matlab/ Simulink and experimental results using kit field programmable gate array (FPGA) Virtex II pro of Xilinx.", "title": "" }, { "docid": "neg:1840306_18", "text": "This paper addresses the problem of detecting the presence of malware that leaveperiodictraces innetworktraffic. This characteristic behavior of malware was found to be surprisingly prevalent in a parallel study. To this end, we propose a visual analytics solution that supports both automatic detection and manual inspection of periodic signals hidden in network traffic. The detected periodic signals are visually verified in an overview using a circular graph and two stacked histograms as well as in detail using deep packet inspection. Our approach offers the capability to detect complex periodic patterns, but avoids the unverifiability issue often encountered in related work. The periodicity assumption imposed on malware behavior is a relatively weak assumption, but initial evaluations with a simulated scenario as well as a publicly available network capture demonstrate its applicability.", "title": "" }, { "docid": "neg:1840306_19", "text": "This article presents near-optimal guarantees for stable and robust image recovery from undersampled noisy measurements using total variation minimization. In particular, we show that from O(s log(N)) nonadaptive linear measurements, an image can be reconstructed to within the best s-term approximation of its gradient up to a logarithmic factor, and this factor can be removed by taking slightly more measurements. Along the way, we prove a strengthened Sobolev inequality for functions lying in the null space of suitably incoherent matrices.", "title": "" } ]
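The LORE passage in this list (neg:1840306_2) explains the idea of fitting a local, interpretable surrogate on a synthetic neighborhood of the instance being explained and reading a rule off it. The sketch below is only a rough, assumed rendering of that idea: it perturbs the instance with Gaussian noise rather than using the paper's genetic algorithm, it omits the counterfactual-rule extraction, and the black box, data, and parameters are all made up for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier        # stand-in black box
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] ** 2 > 0.5).astype(int)
black_box = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def explain_locally(x, black_box, n_samples=1000, scale=0.3):
    # Build a synthetic neighborhood around x (simple perturbation here,
    # a genetic algorithm in LORE), label it with the black box, and fit
    # a shallow decision tree as the local interpretable predictor.
    Z = x + rng.normal(scale=scale, size=(n_samples, x.shape[0]))
    z_labels = black_box.predict(Z)
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(Z, z_labels)
    feature_names = [f"x{i}" for i in range(x.shape[0])]
    return export_text(surrogate, feature_names=feature_names)

print(explain_locally(X[0], black_box))   # human-readable local rule set
```

A decision rule for the instance would then be the branch of this tree that the instance falls into, and counterfactual rules would come from branches that end in a different predicted label.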
1840307
Contactless payment systems based on RFID technology
[ { "docid": "pos:1840307_0", "text": "This paper surveys recent technical research on the problems of privacy and security for radio frequency identification (RFID). RFID tags are small, wireless devices that help identify objects and people. Thanks to dropping cost, they are likely to proliferate into the billions in the next several years-and eventually into the trillions. RFID tags track objects in supply chains, and are working their way into the pockets, belongings, and even the bodies of consumers. This survey examines approaches proposed by scientists for privacy protection and integrity assurance in RFID systems, and treats the social and technical context of their work. While geared toward the nonspecialist, the survey may also serve as a reference for specialist readers.", "title": "" } ]
[ { "docid": "neg:1840307_0", "text": "We have trained a deep (convolutional) neural network to predict the ground-state energy of an electron in four classes of confining two-dimensional electrostatic potentials. On randomly generated potentials, for which there is no analytic form for either the potential or the ground-state energy, the neural network model was able to predict the ground-state energy to within chemical accuracy, with a median absolute error of 1.49 mHa. We also investigate the performance of the model in predicting other quantities such as the kinetic energy and the first excited-state energy of random potentials. While we demonstrated this approach on a simple, tractable problem, the transferability and excellent performance of the resulting model suggests further applications of deep neural networks to problems of electronic structure.", "title": "" }, { "docid": "neg:1840307_1", "text": "In this study we represent malware as opcode sequences and detect it using a deep belief network (DBN). Compared with traditional shallow neural networks, DBNs can use unlabeled data to pretrain a multi-layer generative model, which can better represent the characteristics of data samples. We compare the performance of DBNs with that of three baseline malware detection models, which use support vector machines, decision trees, and the k-nearest neighbor algorithm as classifiers. The experiments demonstrate that the DBN model provides more accurate detection than the baseline models. When additional unlabeled data are used for DBN pretraining, the DBNs perform better than the other detection models. We also use the DBNs as an autoencoder to extract the feature vectors of executables. The experiments indicate that the autoencoder can effectively model the underlying structure of input data and significantly reduce the dimensions of feature vectors.", "title": "" }, { "docid": "neg:1840307_2", "text": "Despite the recent trend of increasingly large datasets for object detection, there still exist many classes with few training examples. To overcome this lack of training data for certain classes, we propose a novel way of augmenting the training data for each class by borrowing and transforming examples from other classes. Our model learns which training instances from other classes to borrow and how to transform the borrowed examples so that they become more similar to instances from the target class. Our experimental results demonstrate that our new object detector, with borrowed and transformed examples, improves upon the current state-of-the-art detector on the challenging SUN09 object detection dataset. Thesis Supervisor: Antonio Torralba Title: Associate Professor of Electrical Engineering and Computer Science", "title": "" }, { "docid": "neg:1840307_3", "text": "BACKGROUND\nCore stability training has grown in popularity over 25 years, initially for back pain prevention or therapy. Subsequently, it developed as a mode of exercise training for health, fitness and sport. The scientific basis for traditional core stability exercise has recently been questioned and challenged, especially in relation to dynamic athletic performance. Reviews have called for clarity on what constitutes anatomy and function of the core, especially in healthy and uninjured people. Clinical research suggests that traditional core stability training is inappropriate for development of fitness for heath and sports performance. 
However, commonly used methods of measuring core stability in research do not reflect the functional nature of core stability in uninjured, healthy and athletic populations. Recent reviews have proposed a more dynamic, whole body approach to training core stabilization, and research has begun to measure and report the efficacy of these modes of training. The purpose of this study was to assess the extent to which these developments have informed people currently working and participating in sport.\n\n\nMETHODS\nAn online survey questionnaire was developed around common themes on core stability training as defined in the current scientific literature and circulated to a sample population of people working and participating in sport. Survey results were assessed against key elements of the current scientific debate.\n\n\nRESULTS\nPerceptions on anatomy and function of the core were gathered from a representative cohort of athletes, coaches, sports science and sports medicine practitioners (n = 241), along with their views on the effectiveness of various current and traditional exercise training modes. The most popular method of testing and measuring core function was subjective assessment through observation (43%), while a quarter (22%) believed there was no effective method of measurement. Perceptions of people in sport reflect the scientific debate, and practitioners have adopted a more functional approach to core stability training. There was strong support for loaded, compound exercises performed upright, compared to moderate support for traditional core stability exercises. Half of the participants (50%) in the survey, however, still support a traditional, isolation-based approach to core stability training.\n\n\nCONCLUSION\nPerceptions in applied practice on core stability training for dynamic athletic performance are aligned to a large extent with the scientific literature.", "title": "" }, { "docid": "neg:1840307_4", "text": "In this paper, we introduce a new channel pruning method to accelerate very deep convolutional neural networks. Given a trained CNN model, we propose an iterative two-step algorithm to effectively prune each layer, by a LASSO regression based channel selection and least square reconstruction. We further generalize this algorithm to multi-layer and multi-branch cases. Our method reduces the accumulated error and enhances the compatibility with various architectures. Our pruned VGG-16 achieves the state-of-the-art results by 5× speed-up along with only 0.3% increase of error. More importantly, our method is able to accelerate modern networks like ResNet, Xception and suffers only 1.4%, 1.0% accuracy loss under 2× speedup respectively, which is significant.", "title": "" }, { "docid": "neg:1840307_5", "text": "Success in natural language inference (NLI) should require a model to understand both lexical and compositional semantics. However, through adversarial evaluation, we find that several state-of-the-art models with diverse architectures are over-relying on the former and fail to use the latter. Further, this compositionality unawareness is not reflected via standard evaluation on current datasets. We show that removing RNNs in existing models or shuffling input words during training does not induce large performance loss despite the explicit removal of compositional information.
Therefore, we propose a compositionality-sensitivity testing setup that analyzes models on natural examples from existing datasets that cannot be solved via lexical features alone (i.e., on which a bag-of-words model gives a high probability to one wrong label), hence revealing the models’ actual compositionality awareness. We show that this setup not only highlights the limited compositional ability of current NLI models, but also differentiates model performance based on design, e.g., separating shallow bag-of-words models from deeper, linguistically-grounded tree-based models. Our evaluation setup is an important analysis tool: complementing currently existing adversarial and linguistically driven diagnostic evaluations, and exposing opportunities for future work on evaluating models’ compositional understanding.", "title": "" }, { "docid": "neg:1840307_6", "text": "We present a logical formalism for expressing properties of continuous time Markov chains. The semantics for such properties arise as a natural extension of previous work on discrete time Markov chains to continuous time. The major result is that the verification problem is decidable; this is shown using results in algebraic and transcendental number theory.", "title": "" }, { "docid": "neg:1840307_7", "text": "Photovoltaic (PV) system performance is influenced by several factors, including irradiance, temperature, shading, degradation, mismatch losses, soiling, etc. Shading of a PV array, in particular, either complete or partial, can have a significant impact on its power output and energy yield, depending on array configuration, shading pattern, and the bypass diodes incorporated in the PV modules. In this paper, the effect of partial shading on multicrystalline silicon (mc-Si) PV modules is investigated. A PV module simulation model implemented in P-Spice is first employed to quantify the effect of partial shading on the I-V curve and the maximum power point (MPP) voltage and power. Then, generalized formulae are derived, which permit accurate enough evaluation of MPP voltage and power of mc-Si PV modules, without the need to resort to detailed modeling and simulation. The equations derived are validated via experimental results.", "title": "" }, { "docid": "neg:1840307_8", "text": "Image-based sequence recognition has been a long-standing research topic in computer vision. In this paper, we investigate the problem of scene text recognition, which is among the most important and challenging tasks in image-based sequence recognition. A novel neural network architecture, which integrates feature extraction, sequence modeling and transcription into a unified framework, is proposed. Compared with previous systems for scene text recognition, the proposed architecture possesses four distinctive properties: (1) It is end-to-end trainable, in contrast to most of the existing algorithms whose components are separately trained and tuned. (2) It naturally handles sequences in arbitrary lengths, involving no character segmentation or horizontal scale normalization. (3) It is not confined to any predefined lexicon and achieves remarkable performances in both lexicon-free and lexicon-based scene text recognition tasks. (4) It generates an effective yet much smaller model, which is more practical for real-world application scenarios. The experiments on standard benchmarks, including the IIIT-5K, Street View Text and ICDAR datasets, demonstrate the superiority of the proposed algorithm over the prior arts.
Moreover, the proposed algorithm performs well in the task of image-based music score recognition, which evidently verifies the generality of it.", "title": "" }, { "docid": "neg:1840307_9", "text": "Industrial systems, e.g., wind turbines, generate big amounts of data from reliable sensors with high velocity. As it is unfeasible to store and query such big amounts of data, only simple aggregates are currently stored. However, aggregates remove fluctuations and outliers that can reveal underlying problems and limit the knowledge to be gained from historical data. As a remedy, we present the distributed Time Series Management System (TSMS) ModelarDB that uses models to store sensor data. We thus propose an online, adaptive multi-model compression algorithm that maintains data values within a user-defined error bound (possibly zero). We also propose (i) a database schema to store time series as models, (ii) methods to push-down predicates to a key-value store utilizing this schema, (iii) optimized methods to execute aggregate queries on models, (iv) a method to optimize execution of projections through static code-generation, and (v) dynamic extensibility that allows new models to be used without recompiling the TSMS. Further, we present a general modular distributed TSMS architecture and its implementation, ModelarDB, as a portable library, using Apache Spark for query processing and Apache Cassandra for storage. An experimental evaluation shows that, unlike current systems, ModelarDB hits a sweet spot and offers fast ingestion, good compression, and fast, scalable online aggregate query processing at the same time. This is achieved by dynamically adapting to data sets using multiple models. The system degrades gracefully as more outliers occur and the actual errors are much lower than the bounds. PVLDB Reference Format: Søren Kejser Jensen, Torben Bach Pedersen, Christian Thomsen. ModelarDB: Modular Model-Based Time Series Management with Spark and Cassandra. PVLDB, 11(11): 1688-1701, 2018. DOI: https://doi.org/10.14778/3236187.3236215", "title": "" }, { "docid": "neg:1840307_10", "text": "To discover patterns in historical data, climate scientists have applied various clustering methods with the goal of identifying regions that share some common climatological behavior. However, past approaches are limited by the fact that they either consider only a single time period (snapshot) of multivariate data, or they consider only a single variable by using the time series data as multi-dimensional feature vector. In both cases, potentially useful information may be lost. Moreover, clusters in high-dimensional data space can be difficult to interpret, prompting the need for a more effective data representation. We address both of these issues by employing a complex network (graph) to represent climate data, a more intuitive model that can be used for analysis while also having a direct mapping to the physical world for interpretation. A cross correlation function is used to weight network edges, thus respecting the temporal nature of the data, and a community detection algorithm identifies multivariate clusters. Examining networks for consecutive periods allows us to study structural changes over time. We show that communities have a climatological interpretation and that disturbances in structure can be an indicator of climate events (or lack thereof). 
Finally, we discuss how this model can be applied for the discovery of more complex concepts such as unknown teleconnections or the development of multivariate climate indices and predictive insights.", "title": "" }, { "docid": "neg:1840307_11", "text": "The long jump has been widely studied in recent years. Two models exist in the literature which define the relationship between selected variables that affect performance. Both models suggest that the critical phase of the long jump event is the touch-down to take-off phase, as it is in this phase that the necessary vertical velocity is generated. Many three dimensional studies of the long jump exist, but the only studies to have reported detailed data on this phase were two-dimensional in nature. In these, the poor relationships obtained between key variables and performance led to the suggestion that there may be some relevant information in data in the third dimension. The aims of this study were to conduct a three-dimensional analysis of the touch-down to take-off phase in the long jump and to explore the interrelationships between key variables. Fourteen male long jumpers were filmed using three-dimensional methods during the finals of the 1994 (n = 8) and 1995 (n = 6) UK National Championships. Various key variables for the long jump were used in a series of correlational and multiple regression analyses. The relationships between key variables when correlated directly one-to-one were generally poor. However, when analysed using a multiple regression approach, a series of variables was identified which supported the general principles outlined in the two models. These variables could be interpreted in terms of speed, technique and strength. We concluded that in the long jump, variables that are important to performance are interdependent and can only be identified by using appropriate statistical techniques. This has implications for a better understanding of the long jump event and it is likely that this finding can be generalized to other technical sports skills.", "title": "" }, { "docid": "neg:1840307_12", "text": "We evaluated the use of gamification to facilitate a student-centered learning environment within an undergraduate Year 2 Personal and Professional Development (PPD) course. In addition to face-to-face classroom practices, an information technology-based gamified system with a range of online learning activities was presented to students as support material. The implementation of the gamified course lasted two academic terms. The subsequent evaluation from a cohort of 136 students indicated that student performance was significantly higher among those who participated in the gamified system than in those who engaged with the nongamified, traditional delivery, while behavioral engagement in online learning activities was positively related to course performance, after controlling for gender, attendance, and Year 1 PPD performance. Two interesting phenomena appeared when we examined the influence of student background: female students participated significantly more in online learning activities than male students, and students with jobs engaged significantly more in online learning activities than students without jobs. 
The gamified course design advocated in this work may have significant implications for educators who wish to develop engaging technology-mediated learning environments that enhance students’ learning, or for a broader base of professionals who wish to engage a population of potential users, such as managers engaging employees or marketers engaging customers.", "title": "" }, { "docid": "neg:1840307_13", "text": "Geospatial object detection from high spatial resolution (HSR) remote sensing imagery is a heated and challenging problem in the field of automatic image interpretation. Despite convolutional neural networks (CNNs) having facilitated the development in this domain, the computation efficiency under real-time application and the accurate positioning on relatively small objects in HSR images are two noticeable obstacles which have largely restricted the performance of detection methods. To tackle the above issues, we first introduce semantic segmentation-aware CNN features to activate the detection feature maps from the lowest level layer. In conjunction with this segmentation branch, another module which consists of several global activation blocks is proposed to enrich the semantic information of feature maps from higher level layers. Then, these two parts are integrated and deployed into the original single shot detection framework. Finally, we use the modified multi-scale feature maps with enriched semantics and multi-task training strategy to achieve end-to-end detection with high efficiency. Extensive experiments and comprehensive evaluations on a publicly available 10-class object detection dataset have demonstrated the superiority of the presented method.", "title": "" }, { "docid": "neg:1840307_14", "text": "This paper proposes a pseudo random number generator using Elman neural network. The proposed neural network is a recurrent neural network able to generate pseudo-random numbers from the weight matrices obtained from the layer weights of the Elman network. The proposed method is not computationally demanding and is easy to implement for varying bit sequences. The random numbers generated using our method have been subjected to frequency test and ENT test program. The results show that recurrent neural networks can be used as a pseudo random number generator(prng).", "title": "" }, { "docid": "neg:1840307_15", "text": "In the domain of Internet of Things (IoT), applications are modeled to understand and react based on existing contextual and situational parameters. This work implements a management flow for the abstraction of real world objects and virtual composition of those objects to provide IoT services. We also present a real world knowledge model that aggregates constraints defining a situation, which is then used to detect and anticipate future potential situations. It is implemented based on reasoning and machine learning mechanisms. This work showcases a prototype implementation of the architectural framework in a smart home scenario, targeting two functionalities: actuation and automation based on the imposed constraints and thereby responding to situations and also adapting to the user preferences. 
It thus provides a productive integration of heterogeneous devices, IoT platforms, and cognitive technologies to improve the services provided to the user.", "title": "" }, { "docid": "neg:1840307_16", "text": "The review focuses on one growing dimension of health care globalisation - medical tourism, whereby consumers elect to travel across borders or to overseas destinations to receive their treatment. Such treatments include cosmetic and dental surgery; cardio, orthopaedic and bariatric surgery; IVF treatment; and organ and tissue transplantation. The review sought to identify the medical tourist literature for out-of-pocket payments, focusing wherever possible on evidence and experience pertaining to patients in mid-life and beyond. Despite increasing media interest and coverage hard empirical findings pertaining to out-of-pocket medical tourism are rare. Despite a number of countries offering relatively low cost treatments we know very little about many of the numbers and key indicators on medical tourism. The narrative review traverses discussion on medical tourist markets, consumer choice, clinical outcomes, quality and safety, and ethical and legal dimensions. The narrative review draws attention to gaps in research evidence and strengthens the call for more empirical research on the role, process and outcomes of medical tourism. In concluding it makes suggestion for the content of such a strategy.", "title": "" }, { "docid": "neg:1840307_17", "text": "This thesis addresses total variation (TV) image restoration and blind image deconvolution. Classical image processing problems, such as deblurring, call for some kind of regularization. Total variation is among the state-of-the-art regularizers, as it provides a good balance between the ability to describe piecewise smooth images and the complexity of the resulting algorithms. In this thesis, we propose a minimization algorithm for TV-based image restoration that belongs to the majorization-minimization class (MM). The proposed algorithm is similar to the known iterative re-weighted least squares (IRSL) approach, although it constitutes an original interpretation of this method from the MM perspective. The problem of choosing the regularization parameter is also addressed in this thesis. A new Bayesian method is introduced to automatically estimate the parameter, by assigning it a non-informative prior, followed by integration based on an approximation of the associated partition function. The proposed minimization problem, also addressed using the MM framework, results on an update rule for the regularization parameter, and can be used with any TV-based image deblurring algorithm. Blind image deconvolution is the third topic of this thesis. We consider the case of linear motion blurs. We propose a new discretization of the motion blur kernel, and a new estimation algorithm to recover the motion blur parameters (orientation and length) from blurred natural images, based on the Radon transform of the spectrum of the blurred images.", "title": "" }, { "docid": "neg:1840307_18", "text": "Recently, tuple-stores have become pivotal structures in many information systems. Their ability to handle large datasets makes them important in an era with unprecedented amounts of data being produced and exchanged. However, these tuple-stores typically rely on structured peer-to-peer protocols which assume moderately stable environments. Such assumption does not always hold for very large scale systems sized in the scale of thousands of machines. 
In this paper we present a novel approach to the design of a tuple-store. Our approach follows a stratified design based on an unstructured substrate. We focus on this substrate and on how the use of epidemic protocols allows us to reach high dependability and scalability.", "title": "" } ]
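As an illustration of one of the algorithmic passages above, the two-step channel pruning described in neg:1840307_4 (LASSO-based channel selection followed by least-squares reconstruction) can be sketched on a single toy layer. The snippet below is a minimal sketch with synthetic data; the array shapes, the regularization strength `alpha`, and all variable names are assumptions made for illustration, not the paper's implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
N, C = 512, 16                      # samples, input channels of one toy layer
# X[:, c] = contribution of channel c to a given output response (synthetic data)
X = rng.normal(size=(N, C))
w_true = np.zeros(C)
w_true[[1, 4, 7, 11]] = [1.5, -2.0, 0.8, 1.2]      # only a few channels matter
y = X @ w_true + rng.normal(scale=0.05, size=N)    # original layer output

# Step 1: LASSO-based channel selection (sparsity decides which channels to keep)
sel = Lasso(alpha=0.05, fit_intercept=False).fit(X, y)
keep = np.flatnonzero(np.abs(sel.coef_) > 1e-6)

# Step 2: least-squares reconstruction of the weights using only the kept channels
w_new, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)

print("kept channels:", keep)
print("reconstruction error:", np.linalg.norm(X[:, keep] @ w_new - y))
```

In the paper this selection/reconstruction step is applied layer by layer to a trained CNN; here a random linear layer merely stands in for the channel responses.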
1840308
A Bayesian approach to covariance estimation and data fusion
[ { "docid": "pos:1840308_0", "text": "In this paper, we introduce three novel distributed Kalman filtering (DKF) algorithms for sensor networks. The first algorithm is a modification of a previous DKF algorithm presented by the author in CDC-ECC '05. The previous algorithm was only applicable to sensors with identical observation matrices which meant the process had to be observable by every sensor. The modified DKF algorithm uses two identical consensus filters for fusion of the sensor data and covariance information and is applicable to sensor networks with different observation matrices. This enables the sensor network to act as a collective observer for the processes occurring in an environment. Then, we introduce a continuous-time distributed Kalman filter that uses local aggregation of the sensor data but attempts to reach a consensus on estimates with other nodes in the network. This peer-to-peer distributed estimation method gives rise to two iterative distributed Kalman filtering algorithms with different consensus strategies on estimates. Communication complexity and packet-loss issues are discussed. The performance and effectiveness of these distributed Kalman filtering algorithms are compared and demonstrated on a target tracking task.", "title": "" }, { "docid": "pos:1840308_1", "text": "The problem of distributed Kalman filtering (DKF) for sensor networks is one of the most fundamental distributed estimation problems for scalable sensor fusion. This paper addresses the DKF problem by reducing it to two separate dynamic consensus problems in terms of weighted measurements and inverse-covariance matrices. These to data fusion problems are solved is a distributed way using low-pass and band-pass consensus filters. Consensus filters are distributed algorithms that allow calculation of average-consensus of time-varying signals. The stability properties of consensus filters is discussed in a companion CDC ’05 paper [24]. We show that a central Kalman filter for sensor networks can be decomposed into n micro-Kalman filters with inputs that are provided by two types of consensus filters. This network of micro-Kalman filters collectively are capable to provide an estimate of the state of the process (under observation) that is identical to the estimate obtained by a central Kalman filter given that all nodes agree on two central sums. Later, we demonstrate that our consensus filters can approximate these sums and that gives an approximate distributed Kalman filtering algorithm. A detailed account of the computational and communication architecture of the algorithm is provided. Simulation results are presented for a sensor network with 200 nodes and more than 1000 links.", "title": "" } ]
[ { "docid": "neg:1840308_0", "text": "In order to provide a material that can facilitate the modeling and construction of a Furuta pendulum, this paper presents the deduction, step-by-step, of a Furuta pendulum mathematical model by using the Lagrange equations of motion. Later, a mechanical design of the Furuta pendulum is carried out via the software Solid Works and subsequently a prototype is built. Numerical simulations of the Furuta pendulum model are performed via Mat lab-Simulink. Furthermore, the Furuta pendulum prototype built is experimentally tested by using Mat lab-Simulink, Control Desk, and a DS1104 board from dSPACE.", "title": "" }, { "docid": "neg:1840308_1", "text": "Digital deformities continue to be a common ailment among many patients who present to foot and ankle specialists. When conservative treatment fails to eliminate patient complaints, surgical correction remains a viable treatment option. Proximal interphalangeal joint arthrodesis remains the standard procedure among most foot and ankle surgeons. With continued advances in fixation technology and techniques, surgeons continue to have better options for the achievement of excellent digital surgery outcomes. This article reviews current trends in fixation of digital deformities while highlighting pertinent aspects of the physical examination, radiographic examination, and surgical technique.", "title": "" }, { "docid": "neg:1840308_2", "text": "Interaction graphs are ubiquitous in many fields such as bioinformatics, sociology and physical sciences. There have been many studies in the literature targeted at studying and mining these graphs. However, almost all of them have studied these graphs from a static point of view. The study of the evolution of these graphs over time can provide tremendous insight on the behavior of entities, communities and the flow of information among them. In this work, we present an event-based characterization of critical behavioral patterns for temporally varying interaction graphs. We use non-overlapping snapshots of interaction graphs and develop a framework for capturing and identifying interesting events from them. We use these events to characterize complex behavioral patterns of individuals and communities over time. We demonstrate the application of behavioral patterns for the purposes of modeling evolution, link prediction and influence maximization. Finally, we present a diffusion model for evolving networks, based on our framework.", "title": "" }, { "docid": "neg:1840308_3", "text": "We study 3D shape modeling from a single image and make contributions to it in three aspects. First, we present Pix3D, a large-scale benchmark of diverse image-shape pairs with pixel-level 2D-3D alignment. Pix3D has wide applications in shape-related tasks including reconstruction, retrieval, viewpoint estimation, etc. Building such a large-scale dataset, however, is highly challenging; existing datasets either contain only synthetic data, or lack precise alignment between 2D images and 3D shapes, or only have a small number of images. Second, we calibrate the evaluation criteria for 3D shape reconstruction through behavioral studies, and use them to objectively and systematically benchmark cutting-edge reconstruction algorithms on Pix3D. 
Third, we design a novel model that simultaneously performs 3D reconstruction and pose estimation; our multi-task learning approach achieves state-of-the-art performance on both tasks.", "title": "" }, { "docid": "neg:1840308_4", "text": "The task of selecting project portfolios is an important and recurring activity in many organizations. There are many techniques available to assist in this process, but no integrated framework for carrying it out. This paper simplifies the project portfolio selection process by developing a framework which separates the work into distinct stages. Each stage accomplishes a particular objective and creates inputs to the next stage. At the same time, users are free to choose the techniques they find the most suitable for each stage, or in some cases to omit or modify a stage if this will simplify and expedite the process. The framework may be implemented in the form of a decision support system, and a prototype system is described which supports many of the related decision making activities. © 1999 Published by Elsevier Science Ltd and IPMA. All rights reserved", "title": "" }, { "docid": "neg:1840308_5", "text": "This paper surveys and investigates the strengths and weaknesses of a number of recent approaches to advanced workflow modelling. Rather than inventing just another workflow language, we briefly describe recent workflow languages, and we analyse them with respect to their support for advanced workflow topics. Object Coordination Nets, Workflow Graphs, WorkFlow Nets, and an approach based on Workflow Evolution are described as dedicated workflow modelling approaches. In addition, the Unified Modelling Language as the de facto standard in object-oriented modelling is also investigated. These approaches are discussed with respect to coverage of workflow perspectives and support for flexibility and analysis issues in workflow management, which are today seen as two major areas for advanced workflow support. Given the different goals and backgrounds of the approaches mentioned, it is not surprising that each approach has its specific strengths and weaknesses. We clearly identify these strengths and weaknesses, and we conclude with ideas for combining their best features.", "title": "" }, { "docid": "neg:1840308_6", "text": "The queries issued to search engines are often ambiguous or multifaceted, which requires search engines to return diverse results that can fulfill as many different information needs as possible; this is called search result diversification. Recently, the relational learning to rank model, which designs a learnable ranking function following the criterion of maximal marginal relevance, has shown effectiveness in search result diversification [Zhu et al. 2014]. The goodness of a diverse ranking model is usually evaluated with diversity evaluation measures such as α-NDCG [Clarke et al. 2008], ERR-IA [Chapelle et al. 2009], and D#-NDCG [Sakai and Song 2011]. Ideally the learning algorithm would train a ranking model that could directly optimize the diversity evaluation measures with respect to the training data. Existing relational learning to rank algorithms, however, only train the ranking models by optimizing loss functions that loosely relate to the evaluation measures. To deal with the problem, we propose a general framework for learning relational ranking models via directly optimizing any diversity evaluation measure.
In learning, the loss function upper-bounding the basic loss function defined on a diverse ranking measure is minimized. We can derive new diverse ranking algorithms under the framework, and several diverse ranking algorithms are created based on different upper bounds over the basic loss function. We conducted comparisons between the proposed algorithms with conventional diverse ranking methods using the TREC benchmark datasets. Experimental results show that the algorithms derived under the diverse learning to rank framework always significantly outperform the state-of-the-art baselines.", "title": "" }, { "docid": "neg:1840308_7", "text": "Research on interoperability of technology-enhanced learning (TEL) repositories throughout the last decade has led to a fragmented landscape of competing approaches, such as metadata schemas and interface mechanisms. However, so far Web-scale integration of resources is not facilitated, mainly due to the lack of take-up of shared principles, datasets and schemas. On the other hand, the Linked Data approach has emerged as the de-facto standard for sharing data on the Web and offers a large potential to solve interoperability issues in the field of TEL. In this paper, we describe a general approach to exploit the wealth of already existing TEL data on the Web by allowing its exposure as Linked Data and by taking into account automated enrichment and interlinking techniques to provide rich and well-interlinked data for the educational domain. This approach has been implemented in the context of the mEducator project where data from a number of open TEL data repositories has been integrated, exposed and enriched by following Linked Data principles.", "title": "" }, { "docid": "neg:1840308_8", "text": "Given an edge-weighted graph G and two distinct vertices s and t of G, the next-to-shortest path problem asks for a path from s to t of minimum length among all paths from s to t except the shortest ones. In this article, we consider the version where G is directed and all edge weights are positive. Some properties of the requested path are derived when G is an arbitrary digraph. In addition, if G is planar, an O(n3)-time algorithm is proposed, where n is the number of vertices of G. © 2015 Wiley Periodicals, Inc. NETWORKS, Vol. 000(00), 000–00", "title": "" }, { "docid": "neg:1840308_9", "text": "Modern cryptocurrency systems, such as Ethereum, permit complex financial transactions through scripts called smart contracts. These smart contracts are executed many, many times, always without real concurrency. First, all smart contracts are serially executed by miners before appending them to the blockchain. Later, those contracts are serially re-executed by validators to verify that the smart contracts were executed correctly by miners. Serial execution limits system throughput and fails to exploit today's concurrent multicore and cluster architectures. Nevertheless, serial execution appears to be required: contracts share state, and contract programming languages have a serial semantics.\n This paper presents a novel way to permit miners and validators to execute smart contracts in parallel, based on techniques adapted from software transactional memory. 
Miners execute smart contracts speculatively in parallel, allowing non-conflicting contracts to proceed concurrently, and \"discovering\" a serializable concurrent schedule for a block's transactions. This schedule is captured and encoded as a deterministic fork-join program used by validators to re-execute the miner's parallel schedule deterministically but concurrently.\n Smart contract benchmarks run on a JVM with ScalaSTM show that a speedup of 1.33x can be obtained for miners and 1.69x for validators with just three concurrent threads.", "title": "" }, { "docid": "neg:1840308_10", "text": "Six studies investigate whether and how distant future time perspective facilitates abstract thinking and impedes concrete thinking by altering the level at which mental representations are construed. In Experiments 1-3, participants who envisioned their lives and imagined themselves engaging in a task 1 year later as opposed to the next day subsequently performed better on a series of insight tasks. In Experiments 4 and 5 a distal perspective was found to improve creative generation of abstract solutions. Moreover, Experiment 5 demonstrated a similar effect with temporal distance manipulated indirectly, by making participants imagine their lives in general a year from now versus tomorrow prior to performance. In Experiment 6, distant time perspective undermined rather than enhanced analytical problem solving.", "title": "" }, { "docid": "neg:1840308_11", "text": "We address the problem of detecting duplicate questions in forums, which is an important step towards automating the process of answering new questions. As finding and annotating such potential duplicates manually is very tedious and costly, automatic methods based on machine learning are a viable alternative. However, many forums do not have annotated data, i.e., questions labeled by experts as duplicates, and thus a promising solution is to use domain adaptation from another forum that has such annotations. Here we focus on adversarial domain adaptation, deriving important findings about when it performs well and what properties of the domains are important in this regard. Our experiments with StackExchange data show an average improvement of 5.6% over the best baseline across multiple pairs of domains.", "title": "" }, { "docid": "neg:1840308_12", "text": "Deep neural networks (DNNs) require very large amounts of computation both for training and for inference when deployed in the field. A common approach to implementing DNNs is to recast the most computationally expensive operations as general matrix multiplication (GEMM). However, as we demonstrate in this paper, there are a great many different ways to express DNN convolution operations using GEMM. Although different approaches all perform the same number of operations, the size of temporary data structures differs significantly. Convolution of an input matrix with dimensions C × H × W requires O(KCHW) additional space using the classical im2col approach. More recently, memory-efficient approaches requiring just O(KCHW) auxiliary space have been proposed. We present two novel GEMM-based algorithms that require just O(MHW) and O(KW) additional space respectively, where M is the number of channels in the result of the convolution. These algorithms dramatically reduce the space overhead of DNN convolution, making it much more suitable for memory-limited embedded systems.
Experimental evaluation shows that our low-memory algorithms are just as fast as the best patch-building approaches despite requiring just a fraction of the amount of additional memory. Our low-memory algorithms have excellent data locality, which gives them a further edge over patch-building algorithms when multiple cores are used. As a result, our low-memory algorithms often outperform the best patch-building algorithms using multiple threads.", "title": "" }, { "docid": "neg:1840308_13", "text": "PURPOSE\nThe Food Cravings Questionnaires are among the most often used measures for assessing the frequency and intensity of food craving experiences. However, there is a lack of studies that have examined specific cut-off scores that may indicate pathologically elevated levels of food cravings.\n\n\nMETHODS\nReceiver-Operating-Characteristic analysis was used to determine sensitivity and specificity of scores on the Food Cravings Questionnaire-Trait-reduced (FCQ-T-r) for discriminating between individuals with (n = 43) and without (n = 389) \"food addiction\" as assessed with the Yale Food Addiction Scale 2.0.\n\n\nRESULTS\nA cut-off score of 50 on the FCQ-T-r discriminated between individuals with and without \"food addiction\" with high sensitivity (85%) and specificity (93%).\n\n\nCONCLUSIONS\nFCQ-T-r scores of 50 and higher may indicate clinically relevant levels of trait food craving.\n\n\nLEVEL OF EVIDENCE\nLevel V, descriptive study.", "title": "" }, { "docid": "neg:1840308_14", "text": "Operational controls are designed to support the integration of wind and solar power within microgrids. An aggregated model of renewable wind and solar power generation forecast is proposed to support the quantification of the operational reserve for day-ahead and real-time scheduling. Then, a droop control for power electronic converters connected to battery storage is developed and tested. Compared with the existing droop controls, it is distinguished in that the droop curves are set as a function of the storage state-of-charge (SOC) and can become asymmetric. The adaptation of the slopes ensures that the power output supports the terminal voltage while at the same time keeping the SOC within a target range of desired operational reserve. This is shown to maintain the equilibrium of the microgrid's real-time supply and demand. The controls are implemented for the special case of a dc microgrid that is vertically integrated within a high-rise host building of an urban area. Previously untapped wind and solar power are harvested on the roof and sides of a tower, thereby supporting delivery to electric vehicles on the ground. The microgrid vertically integrates with the host building without creating a large footprint.", "title": "" }, { "docid": "neg:1840308_15", "text": "Abstract: This paper presents all controllers for the general H∞ control problem (with no assumptions on the plant matrices). Necessary and sufficient conditions for the existence of an H∞ controller of any order are given in terms of three Linear Matrix Inequalities (LMIs). Our existence conditions are equivalent to Scherer's results, but with a more elementary derivation. Furthermore, we provide the set of all H∞ controllers explicitly parametrized in the state space using the positive definite solutions to the LMIs. Even under standard assumptions (full rank, etc.), our controller parametrization has an advantage over the Q-parametrization.
The freedom Q (a real-rational stable transfer matrix with the H∞ norm bounded above by a specified number) is replaced by a constant matrix L of fixed dimension with a norm bound, and the solutions (X, Y) to the LMIs. The inequality formulation converts the existence conditions to a convex feasibility problem, and also the free matrix L and the pair (X, Y) define a finite dimensional design space, as opposed to the infinite dimensional space associated with the Q-parametrization.", "title": "" }, { "docid": "neg:1840308_16", "text": "Given a query graph q and a data graph g, the subgraph isomorphism search finds all occurrences of q in g and is considered one of the most fundamental query types for many real applications. While this problem is NP-hard, many algorithms have been proposed to solve it in a reasonable time for real datasets. However, a recent study has shown, through an extensive benchmark with various real datasets, that all existing algorithms have serious problems in their matching order selection. Furthermore, all algorithms blindly permutate all possible mappings for query vertices, often leading to useless computations. In this paper, we present an efficient and robust subgraph search solution, called TurboISO, which is turbo-charged with two novel concepts, candidate region exploration and the combine and permute strategy (in short, Comb/Perm). The candidate region exploration identifies on-the-fly candidate subgraphs (i.e., candidate regions), which contain embeddings, and computes a robust matching order for each candidate region explored. The Comb/Perm strategy exploits the novel concept of the neighborhood equivalence class (NEC). Each query vertex in the same NEC has identically matching data vertices. During subgraph isomorphism search, Comb/Perm generates only combinations for each NEC instead of permutating all possible enumerations. Thus, if a chosen combination is determined to not contribute to a complete solution, all possible permutations for that combination will be safely pruned. Extensive experiments with many real datasets show that TurboISO consistently and significantly outperforms all competitors by up to several orders of magnitude.", "title": "" }, { "docid": "neg:1840308_17", "text": "It is widely known that in wireless sensor networks (WSN), energy efficiency is of utmost importance. WSNs need to be energy efficient but also need to provide better performance, particularly in terms of latency. A common protocol design guideline has been to trade off some performance metrics such as throughput and delay for energy. This paper presents a novel MAC (Express Energy Efficient Media Access Control) protocol that not only preserves the energy efficiency of current alternatives but also coordinates the transfer of packets from source to destination in such a way that latency and jitter are improved considerably. Our simulations show how EX-MAC (Express Energy Efficient MAC) outperforms the well-known S-MAC protocols in several performance metrics.", "title": "" }, { "docid": "neg:1840308_18", "text": "Datasets originating from social networks are very valuable to many fields such as sociology and psychology. However, support from a technical perspective is far from sufficient, and specific approaches are urgently needed. This paper applies data mining to the psychology domain for detecting depressed users in social network services.
Firstly, a sentiment analysis method is proposed, utilizing a vocabulary and hand-crafted rules to calculate the depression inclination of each micro-blog. Secondly, a depression detection model is constructed based on the proposed method and 10 features of depressed users derived from psychological research. Then 180 users and 3 kinds of classifiers are used to verify the model, whose precisions are all around 80%. Also, the significance of each feature is analyzed. Lastly, an application is developed based on the proposed model for online mental health monitoring. This study is supported by psychologists, and in turn facilitates their work from a data-centric perspective.", "title": "" }, { "docid": "neg:1840308_19", "text": "This paper presents the design, implementation and validation of the three-wheel holonomic motion system of a mobile robot designed to operate in homes. The holonomic motion system is described in terms of mechanical design and electronic control. The paper analyzes the kinematics of the motion system and validates the estimation of the trajectory by comparing the displacement estimated with the internal odometry of the motors and the displacement estimated with a SLAM procedure based on LIDAR information. Results obtained in different experiments have shown a difference of less than 30 mm between the position estimated with SLAM and with odometry, and a difference in the angular orientation of the mobile robot lower than 5° in absolute displacements up to 1000 mm.", "title": "" } ]
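One of the negatives in the list above (neg:1840308_12) concerns expressing DNN convolution as GEMM. The sketch below shows the classical im2col formulation that the abstract treats as the memory-hungry baseline: each output position becomes one column of a patch matrix, and the whole convolution collapses into a single matrix multiplication. It is a toy, single-image sketch (stride 1, no padding, no kernel flip, as is conventional for DNN convolution) and not the low-memory algorithms the paper proposes.

```python
import numpy as np

def conv2d_via_gemm(x, w):
    """Valid 2-D convolution of x (C, H, W) with filters w (M, C, K, K),
    recast as a single matrix multiplication via classical im2col."""
    C, H, W = x.shape
    M, _, K, _ = w.shape
    Ho, Wo = H - K + 1, W - K + 1
    # im2col: one column of K*K*C input values per output position
    cols = np.empty((C * K * K, Ho * Wo))
    idx = 0
    for i in range(Ho):
        for j in range(Wo):
            cols[:, idx] = x[:, i:i + K, j:j + K].reshape(-1)
            idx += 1
    out = w.reshape(M, -1) @ cols          # the single GEMM call
    return out.reshape(M, Ho, Wo)

x = np.random.rand(3, 8, 8)
w = np.random.rand(4, 3, 3, 3)
print(conv2d_via_gemm(x, w).shape)         # expected: (4, 6, 6)
```

The `cols` matrix is exactly the O(K²CHW)-sized temporary that the low-memory variants in the abstract aim to avoid.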
1840309
Battery management system in the Bayesian paradigm: Part I: SOC estimation
[ { "docid": "pos:1840309_0", "text": "This paper presents a method for modeling and estimation of the state of charge (SOC) of lithium-ion (Li-Ion) batteries using neural networks (NNs) and the extended Kalman filter (EKF). The NN is trained offline using the data collected from the battery-charging process. This network finds the model needed in the state-space equations of the EKF, where the state variables are the battery terminal voltage at the previous sample and the SOC at the present sample. Furthermore, the covariance matrix for the process noise in the EKF is estimated adaptively. The proposed method is implemented on a Li-Ion battery to estimate online the actual SOC of the battery. Experimental results show a good estimation of the SOC and fast convergence of the EKF state variables.", "title": "" }, { "docid": "pos:1840309_1", "text": "Battery management systems in hybrid electric vehicle battery packs must estimate values descriptive of the pack’s present operating condition. These include: battery state of charge, power fade, capacity fade, and instantaneous available power. The estimation mechanism must adapt to changing cell characteristics as cells age and therefore provide accurate estimates over the lifetime of the pack. In a series of three papers, we propose a method, based on extended Kalman filtering (EKF), that is able to accomplish these goals on a lithium ion polymer battery pack. We expect that it will also work well on other battery chemistries. These papers cover the required mathematical background, cell modeling and system identification requirements, and the final solution, together with results. In order to use EKF to estimate the desired quantities, we first require a mathematical model that can accurately capture the dynamics of a cell. In this paper we “evolve” a suitable model from one that is very primitive to one that is more advanced and works well in practice. The final model includes terms that describe the dynamic contributions due to open-circuit voltage, ohmic loss, polarization time constants, electro-chemical hysteresis, and the effects of temperature. We also give a means, based on EKF, whereby the constant model parameters may be determined from cell test data. Results are presented that demonstrate it is possible to achieve root-mean-squared modeling error smaller than the level of quantization error expected in an implementation. © 2004 Elsevier B.V. All rights reserved.", "title": "" } ]
[ { "docid": "neg:1840309_0", "text": "This study examined how scaffolds and student achievement levels influence inquiry and performance in a problem-based learning environment. The scaffolds were embedded within a hypermedia program that placed students at the center of a problem in which they were trying to become the youngest person to fly around the world in a balloon. One-hundred and eleven seventh grade students enrolled in a science and technology course worked in collaborative groups for a duration of 3 weeks to complete a project that included designing a balloon and a travel plan. Student groups used one of three problem-based, hypermedia programs: (1) a no scaffolding condition that did not provide access to scaffolds, (2) a scaffolding optional condition that provided access to scaffolds, but gave students the choice of whether or not to use them, and (3) a scaffolding required condition required students to complete all available scaffolds. Results revealed that students in the scaffolding optional and scaffolding required conditions performed significantly better than students in the no scaffolding condition on one of the two components of the group project. Results also showed that student achievement levels were significantly related to individual posttest scores; higherachieving students scored better on the posttest than lower-achieving students. In addition, analyses of group notebooks confirmed qualitative differences between students in the various conditions. Specifically, those in the scaffolding required condition produced more highly organized project notebooks containing a higher percentage of entries directly relevant to the problem. These findings suggest that scaffolds may enhance inquiry and performance, especially when students are required to access and", "title": "" }, { "docid": "neg:1840309_1", "text": "Automatic detection and monitoring of oil spills and illegal oil discharges is of fundamental importance in ensuring compliance with marine legislation and protection of the coastal environments, which are under considerable threat from intentional or accidental oil spills, uncontrolled sewage and wastewater discharged. In this paper the level set based image segmentation was evaluated for the real-time detection and tracking of oil spills from SAR imagery. The developed processing scheme consists of a preprocessing step, in which an advanced image simplification is taking place, followed by a geometric level set segmentation for the detection of the possible oil spills. Finally a classification was performed, for the separation of lookalikes, leading to oil spill extraction. Experimental results demonstrate that the level set segmentation is a robust tool for the detection of possible oil spills, copes well with abrupt shape deformations and splits and outperforms earlier efforts which were based on different types of threshold or edge detection techniques. The developed algorithm’s efficiency for real-time oil spill detection and monitoring was also tested.", "title": "" }, { "docid": "neg:1840309_2", "text": "Social media platforms provide active communication channels during mass convergence and emergency events such as disasters caused by natural hazards. As a result, first responders, decision makers, and the public can use this information to gain insight into the situation as it unfolds. In particular, many social media messages communicated during emergencies convey timely, actionable information. 
Processing social media messages to obtain such information, however, involves solving multiple challenges including: parsing brief and informal messages, handling information overload, and prioritizing different types of information found in messages. These challenges can be mapped to classical information processing operations such as filtering, classifying, ranking, aggregating, extracting, and summarizing. We survey the state of the art regarding computational methods to process social media messages and highlight both their contributions and shortcomings. In addition, we examine their particularities, and methodically examine a series of key subproblems ranging from the detection of events to the creation of actionable and useful summaries. Research thus far has, to a large extent, produced methods to extract situational awareness information from social media. In this survey, we cover these various approaches, and highlight their benefits and shortcomings. We conclude with research challenges that go beyond situational awareness, and begin to look at supporting decision making and coordinating emergency-response actions.", "title": "" }, { "docid": "neg:1840309_3", "text": "In this letter, a novel compact and broadband integrated transition between a laminated waveguide and an air-filled rectangular waveguide operating in Ka band is proposed. A three-pole filter equivalent circuit model is employed to interpret the working mechanism and to predict the performance of the transition. A back-to-back prototype of the proposed transition is designed and fabricated to prove the concept. Good agreement of the measured and simulated results is obtained. The measured result shows that an insertion loss of better than 0.26 dB from 34.8 to 37.8 GHz can be achieved.", "title": "" }, { "docid": "neg:1840309_4", "text": "Force.com is the preeminent on-demand application development platform in use today, supporting some 55,000+ organizations. Individual enterprises and commercial software-as-a-service (SaaS) vendors trust the platform to deliver robust, reliable, Internet-scale applications. To meet the extreme demands of its large user population, Force.com's foundation is a metadata-driven software architecture that enables multitenant applications.\n The focus of this paper is multitenancy, a fundamental design approach that can dramatically improve SaaS application management. This paper defines multitenancy, explains its benefits, and demonstrates why metadata-driven architectures are the premier choice for implementing multitenancy.", "title": "" }, { "docid": "neg:1840309_5", "text": "We investigate two extremal problems for polynomials giving upper bounds for spherical codes and for polynomials giving lower bounds for spherical designs, respectively. We consider two basic properties of the solutions of these problems. Namely, we estimate from below the number of double zeros and find zero Gegenbauer coefficients of extremal polynomials. Our results allow us to search effectively for such solutions using a computer. The best polynomials we have obtained give substantial improvements in some cases on the previously known bounds for spherical codes and designs. Some examples are given in Section 6.", "title": "" }, { "docid": "neg:1840309_6", "text": "This paper studies the minimum achievable source coding rate as a function of blocklength n and probability ϵ that the distortion exceeds a given level d.
Tight general achievability and converse bounds are derived that hold at arbitrary fixed blocklength. For stationary memoryless sources with separable distortion, the minimum rate achievable is shown to be closely approximated by R(d) + √(V(d)/n) Q⁻¹(ϵ), where R(d) is the rate-distortion function, V(d) is the rate dispersion, a characteristic of the source which measures its stochastic variability, and Q⁻¹(·) is the inverse of the standard Gaussian complementary cumulative distribution function.", "title": "" }, { "docid": "neg:1840309_7", "text": "The class imbalance problem has been known to hinder the learning performance of classification algorithms. Various real-world classification tasks such as text categorization suffer from this phenomenon. We demonstrate that active learning is capable of solving the problem.", "title": "" }, { "docid": "neg:1840309_8", "text": "The development of interactive rehabilitation technologies which rely on wearable sensing for upper body rehabilitation is attracting increasing research interest. This paper reviews related research with the aim: 1) To inventory and classify interactive wearable systems for movement and posture monitoring during upper body rehabilitation, regarding the sensing technology, system measurements and feedback conditions; 2) To gauge the wearability of the wearable systems; 3) To inventory the availability of clinical evidence supporting the effectiveness of related technologies. A systematic literature search was conducted in the following search engines: PubMed, ACM, Scopus and IEEE (January 2010–April 2016). Forty-five papers were included and discussed in a new cuboid taxonomy which consists of 3 dimensions: sensing technology, feedback modalities and system measurements. Wearable sensor systems were developed for persons in: 1) Neuro-rehabilitation: stroke (n = 21), spinal cord injury (n = 1), cerebral palsy (n = 2), Alzheimer (n = 1); 2) Musculoskeletal impairment: ligament rehabilitation (n = 1), arthritis (n = 1), frozen shoulder (n = 1), bone trauma (n = 1); 3) Others: chronic pulmonary obstructive disease (n = 1), chronic pain rehabilitation (n = 1) and other general rehabilitation (n = 14). Accelerometers and inertial measurement units (IMU) are the most frequently used technologies (84% of the papers). They are mostly used in multiple sensor configurations to measure upper limb kinematics and/or trunk posture. Sensors are placed mostly on the trunk, upper arm, the forearm, the wrist, and the finger. Typically sensors are attachable rather than embedded in wearable devices and garments; although studies that embed and integrate sensors have been increasing over the last 4 years. 16 studies applied knowledge of result (KR) feedback, 14 studies applied knowledge of performance (KP) feedback and 15 studies applied both in various modalities. 16 studies have conducted their evaluation with patients and reported usability tests, while only three of them conducted clinical trials including one randomized clinical trial. This review has shown that wearable systems are used mostly for the monitoring and provision of feedback on posture and upper extremity movements in stroke rehabilitation.
The results indicated that accelerometers and IMUs are the most frequently used sensors, in most cases attached to the body through ad hoc contraptions for the purpose of improving range of motion and movement performance during upper body rehabilitation. Systems featuring sensors embedded in wearable appliances or garments are only beginning to emerge. Similarly, clinical evaluations are scarce and are further needed to provide evidence on effectiveness and pave the path towards implementation in clinical settings.", "title": "" }, { "docid": "neg:1840309_9", "text": "Recently, convolutional neural networks (CNNs) have been used as a powerful tool to solve many problems of machine learning and computer vision. In this paper, we aim to provide insight on the property of convolutional neural networks, as well as a generic method to improve the performance of many CNN architectures. Specifically, we first examine existing CNN models and observe an intriguing property that the filters in the lower layers form pairs (i.e., filters with opposite phase). Inspired by our observation, we propose a novel, simple yet effective activation scheme called concatenated ReLU (CRelu) and theoretically analyze its reconstruction property in CNNs. We integrate CRelu into several state-of-the-art CNN architectures and demonstrate improvement in their recognition performance on CIFAR-10/100 and ImageNet datasets with fewer trainable parameters. Our results suggest that better understanding of the properties of CNNs can lead to significant performance improvement with a simple modification.", "title": "" }, { "docid": "neg:1840309_10", "text": "This paper presents the development and initial validation of a new self-report instrument, the Differentiation of Self Inventory (DSI). The DSI represents the first attempt to create a multi-dimensional measure of differentiation based on Bowen Theory, focusing specifically on adults (ages 25+), their current significant relationships, and their relations with families of origin. Principal components factor analysis on a sample of 313 normal adults (mean age = 36.8) suggested four dimensions: Emotional Reactivity, Reactive Distancing, Fusion with Parents, and \"I\" Position. Scales constructed from these factors were found to be moderately correlated in the expected direction, internally consistent, and significantly predictive of trait anxiety. The potential contribution of the DSI is discussed for testing Bowen Theory, as a clinical assessment tool, and as an indicator of psychotherapeutic outcome.", "title": "" }, { "docid": "neg:1840309_11", "text": "We present a gold standard annotation of syntactic dependencies in the English Web Treebank corpus using the Stanford Dependencies standard. This resource addresses the lack of a gold standard dependency treebank for English, as well as the limited availability of gold standard syntactic annotations for informal genres of English text. We also present experiments on the use of this resource, both for training dependency parsers and for evaluating dependency parsers like the one included as part of the Stanford Parser. We show that training a dependency parser on a mix of newswire and web data improves performance on that type of data without greatly hurting performance on newswire text, and therefore gold standard annotations for non-canonical text can be valuable for parsing in general.
Furthermore, the systematic annotation effort has informed both the SD formalism and its implementation in the Stanford Parser’s dependency converter. In response to the challenges encountered by annotators in the EWT corpus, we revised and extended the Stanford Dependencies standard, and improved the Stanford Parser’s dependency converter.", "title": "" }, { "docid": "neg:1840309_12", "text": "Dynamic Time Warping (DTW) is a distance measure that compares two time series after optimally aligning them. DTW has been used for decades in thousands of academic and industrial projects despite the very expensive computational complexity, O(n²). These applications include data mining, image processing, signal processing, robotics and computer graphics among many others. In spite of all this research effort, there are many myths and misunderstandings about DTW in the literature, for example \"it is too slow to be useful\" or \"the warping window size does not matter much.\" In this tutorial, we correct these misunderstandings and we summarize the research efforts in optimizing both the efficiency and effectiveness of both the basic DTW algorithm, and of the higher-level algorithms that exploit DTW such as similarity search, clustering and classification. We will discuss variants of DTW such as constrained DTW, multidimensional DTW and asynchronous DTW, and optimization techniques such as lower bounding, early abandoning, run-length encoding, bounded approximation and hardware optimization. We will discuss a multitude of application areas including physiological monitoring, social media mining, activity recognition and animal sound processing. The optimization techniques are generalizable to other domains on various data types and problems.", "title": "" }, { "docid": "neg:1840309_13", "text": "Research at the intersection of machine learning, programming languages, and software engineering has recently taken important steps in proposing learnable probabilistic models of source code that exploit the abundance of patterns of code. In this article, we survey this work. We contrast programming languages against natural languages and discuss how these similarities and differences drive the design of probabilistic models. We present a taxonomy based on the underlying design principles of each model and use it to navigate the literature. Then, we review how researchers have adapted these models to application areas and discuss cross-cutting and application-specific challenges and opportunities.", "title": "" }, { "docid": "neg:1840309_14", "text": "BACKGROUND\nMyocardial infarction (MI) can directly cause ischemic mitral regurgitation (IMR), which has been touted as an indicator of poor prognosis in acute and early phases after MI. However, in the chronic post-MI phase, prognostic implications of IMR presence and degree are poorly defined.\n\n\nMETHODS AND RESULTS\nWe analyzed 303 patients with previous (>16 days) Q-wave MI by ECG who underwent transthoracic echocardiography: 194 with IMR quantitatively assessed in routine practice and 109 without IMR matched for baseline age (71+/-11 versus 70+/-9 years, P=0.20), sex, and ejection fraction (EF, 33+/-14% versus 34+/-11%, P=0.14). In IMR patients, regurgitant volume (RVol) and effective regurgitant orifice (ERO) area were 36+/-24 mL/beat and 21+/-12 mm(2), respectively.
After 5 years, total mortality and cardiac mortality for patients with IMR (62+/-5% and 50+/-6%, respectively) were higher than for those without IMR (39+/-6% and 30+/-5%, respectively) (both P<0.001). In multivariate analysis, independently of all baseline characteristics, particularly age and EF, the adjusted relative risks of total and cardiac mortality associated with the presence of IMR (1.88, P=0.003 and 1.83, P=0.014, respectively) and quantified degree of IMR defined by RVol >/=30 mL (2.05, P=0.002 and 2.01, P=0.009) and by ERO >/=20 mm(2) (2.23, P=0.003 and 2.38, P=0.004) were high.\n\n\nCONCLUSIONS\nIn the chronic phase after MI, IMR presence is associated with excess mortality independently of baseline characteristics and degree of ventricular dysfunction. The mortality risk is related directly to the degree of IMR as defined by ERO and RVol. Therefore, IMR detection and quantification provide major information for risk stratification and clinical decision making in the chronic post-MI phase.", "title": "" }, { "docid": "neg:1840309_15", "text": "The recognition of boundaries, e.g., between chorus and verse, is an important task in music structure analysis. The goal is to automatically detect such boundaries in audio signals so that the results are close to human annotation. In this work, we apply Convolutional Neural Networks to the task, trained directly on mel-scaled magnitude spectrograms. On a representative subset of the SALAMI structural annotation dataset, our method outperforms current techniques in terms of boundary retrieval F-measure at different temporal tolerances: We advance the state-of-the-art from 0.33 to 0.46 for tolerances of ±0.5 seconds, and from 0.52 to 0.62 for tolerances of ±3 seconds. As the algorithm is trained on annotated audio data without the need of expert knowledge, we expect it to be easily adaptable to changed annotation guidelines and also to related tasks such as the detection of song transitions.", "title": "" }, { "docid": "neg:1840309_16", "text": "We describe the design, development, and API for two discourse parsers for Rhetorical Structure Theory. The two parsers use the same underlying framework, but one uses features that rely on dependency syntax, produced by a fast shift-reduce parser, whereas the other uses a richer feature space, including both constituent- and dependency-syntax and coreference information, produced by the Stanford CoreNLP toolkit. Both parsers obtain state-of-the-art performance, and use a very simple API consisting of, minimally, two lines of Scala code. We accompany this code with a visualization library that runs the two parsers in parallel, and displays the two generated discourse trees side by side, which provides an intuitive way of comparing the two parsers.", "title": "" }, { "docid": "neg:1840309_17", "text": "This thesis explores fundamental improvements in unsupervised deep learning algorithms. Taking a theoretical perspective on the purpose of unsupervised learning, and choosing learnt approximate inference in a jointly learnt directed generative model as the approach, the main question is how existing implementations of this approach, in particular auto-encoders, could be improved by simultaneously rethinking the way they learn and the way they perform inference. In such network architectures, the availability of two opposing pathways, one for inference and one for generation, allows one to exploit the symmetry between them and to let either provide feedback signals to the other.
The signals can be used to determine helpful updates for the connection weights from only locally available information, removing the need for the conventional back-propagation path and mitigating the issues associated with it. Moreover, feedback loops can be added to the usual feed-forward network to improve inference itself. The reciprocal connectivity between regions in the brain’s neocortex provides inspiration for how the iterative revision and verification of proposed interpretations could result in a fair approximation to optimal Bayesian inference. While extracting and combining underlying ideas from research in deep learning and cortical functioning, this thesis walks through the concepts of generative models, approximate inference, local learning rules, target propagation, recirculation, lateral and biased competition, predictive coding, iterative and amortised inference, and other related topics, in an attempt to build up a complex of insights that could provide direction to future research in unsupervised deep learning methods.", "title": "" }, { "docid": "neg:1840309_18", "text": "This paper deals with the inter-turn short circuit fault analysis of Pulse Width Modulated (PWM) inverter fed three-phase Induction Motor (IM) using Finite Element Method (FEM). The short circuit in the stator winding of a 3-phase IM starts with an inter-turn fault and if left undetected it progresses to a phase-phase fault or phase-ground fault. In main fed IM a popular technique known as Motor Current Signature Analysis (MCSA) is used to detect the inter-turn fault. But if the machine is fed from a PWM inverter, MCSA fails: due to high frequency inverter switching, the current spectrum will be rich in noise, making fault detection difficult. An electromagnetic field analysis of inverter fed IM is carried out with 25% and 50% of stator winding inter-turn short circuit fault severity using FEM. The simulation is carried out on a 2.2kW IM using Ansys Maxwell Finite Element Analysis (FEA) tool. Comparisons are made on the various electromagnetic field parameters like flux lines distribution, flux density, radial air gap flux density between a healthy and faulty (25% & 50% severity) IM.", "title": "" }, { "docid": "neg:1840309_19", "text": "Drug-drug interaction (DDI) is a major cause of morbidity and mortality and a subject of intense scientific interest. Biomedical literature mining can aid DDI research by extracting evidence for large numbers of potential interactions from published literature and clinical databases. Though DDI is investigated in domains ranging in scale from intracellular biochemistry to human populations, literature mining has not been used to extract specific types of experimental evidence, which are reported differently for distinct experimental goals. We focus on pharmacokinetic evidence for DDI, essential for identifying causal mechanisms of putative interactions and as input for further pharmacological and pharmacoepidemiology investigations. We used manually curated corpora of PubMed abstracts and annotated sentences to evaluate the efficacy of literature mining on two tasks: first, identifying PubMed abstracts containing pharmacokinetic evidence of DDIs; second, extracting sentences containing such evidence from abstracts. We implemented a text mining pipeline and evaluated it using several linear classifiers and a variety of feature transforms. The most important textual features in the abstract and sentence classification tasks were analyzed.
We also investigated the performance benefits of using features derived from PubMed metadata fields, various publicly available named entity recognizers, and pharmacokinetic dictionaries. Several classifiers performed very well in distinguishing relevant and irrelevant abstracts (reaching F1≈0.93, MCC≈0.74, iAUC≈0.99) and sentences (F1≈0.76, MCC≈0.65, iAUC≈0.83). We found that word bigram features were important for achieving optimal classifier performance and that features derived from Medical Subject Headings (MeSH) terms significantly improved abstract classification. We also found that some drug-related named entity recognition tools and dictionaries led to slight but significant improvements, especially in classification of evidence sentences. Based on our thorough analysis of classifiers and feature transforms and the high classification performance achieved, we demonstrate that literature mining can aid DDI discovery by supporting automatic extraction of specific types of experimental evidence.", "title": "" } ]
1840310
A task-driven approach to time scale detection in dynamic networks
[ { "docid": "pos:1840310_0", "text": "We present the design, implementation, and deployment of a wearable computing platform for measuring and analyzing human behavior in organizational settings. We propose the use of wearable electronic badges capable of automatically measuring the amount of face-to-face interaction, conversational time, physical proximity to other people, and physical activity levels in order to capture individual and collective patterns of behavior. Our goal is to be able to understand how patterns of behavior shape individuals and organizations. By using on-body sensors in large groups of people for extended periods of time in naturalistic settings, we have been able to identify, measure, and quantify social interactions, group behavior, and organizational dynamics. We deployed this wearable computing platform in a group of 22 employees working in a real organization over a period of one month. Using these automatic measurements, we were able to predict employees' self-assessments of job satisfaction and their own perceptions of group interaction quality by combining data collected with our platform and e-mail communication data. In particular, the total amount of communication was predictive of both of these assessments, and betweenness in the social network exhibited a high negative correlation with group interaction satisfaction. We also found that physical proximity and e-mail exchange had a negative correlation of r = -0.55 (p < 0.01), which has far-reaching implications for past and future research on social networks.", "title": "" }, { "docid": "pos:1840310_1", "text": "Many different machine learning algorithms exist; taking into account each algorithm's hyperparameters, there is a staggeringly large number of possible alternatives overall. We consider the problem of simultaneously selecting a learning algorithm and setting its hyperparameters, going beyond previous work that attacks these issues separately. We show that this problem can be addressed by a fully automated approach, leveraging recent innovations in Bayesian optimization. Specifically, we consider a wide range of feature selection techniques (combining 3 search and 8 evaluator methods) and all classification approaches implemented in WEKA's standard distribution, spanning 2 ensemble methods, 10 meta-methods, 27 base classifiers, and hyperparameter settings for each classifier. On each of 21 popular datasets from the UCI repository, the KDD Cup 09, variants of the MNIST dataset and CIFAR-10, we show classification performance often much better than using standard selection and hyperparameter optimization methods. We hope that our approach will help non-expert users to more effectively identify machine learning algorithms and hyperparameter settings appropriate to their applications, and hence to achieve improved performance.", "title": "" } ]
[ { "docid": "neg:1840310_0", "text": "Future advanced driver assistance systems will contain multiple sensors that are used for several applications, such as highly automated driving on freeways. The problem is that the sensors are usually asynchronous and their data possibly out-of-sequence, making fusion of the sensor data non-trivial. This paper presents a novel approach to track-to-track fusion for automotive applications with asynchronous and out-of-sequence sensors using information matrix fusion. This approach solves the problem of correlation between sensor data due to the common process noise and common track history, which eliminates the need to replace the global track estimate with the fused local estimate at each fusion cycle. The information matrix fusion approach is evaluated in simulation and its performance demonstrated using real sensor data on a test vehicle designed for highly automated driving on freeways.", "title": "" }, { "docid": "neg:1840310_1", "text": "Community detection is an important task for mining the structure and function of complex networks. Generally, there are several different kinds of nodes in a network which are cluster nodes densely connected within communities, as well as some special nodes like hubs bridging multiple communities and outliers marginally connected with a community. In addition, it has been shown that there is a hierarchical structure in complex networks with communities embedded within other communities. Therefore, a good algorithm is desirable to be able to not only detect hierarchical communities, but also identify hubs and outliers. In this paper, we propose a parameter-free hierarchical network clustering algorithm SHRINK by combining the advantages of density-based clustering and modularity optimization methods. Based on the structural connectivity information, the proposed algorithm can effectively reveal the embedded hierarchical community structure with multiresolution in large-scale weighted undirected networks, and identify hubs and outliers as well. Moreover, it overcomes the sensitive threshold problem of density-based clustering algorithms and the resolution limit possessed by other modularity-based methods. To illustrate our methodology, we conduct experiments with both real-world and synthetic datasets for community detection, and compare with many other baseline methods. Experimental results demonstrate that SHRINK achieves the best performance with consistent improvements.", "title": "" }, { "docid": "neg:1840310_2", "text": "Recommender systems (RSs) have been the most important technology for increasing the business in Taobao, the largest online consumer-to-consumer (C2C) platform in China. There are three major challenges facing RS in Taobao: scalability, sparsity and cold start. In this paper, we present our technical solutions to address these three challenges. The methods are based on a well-known graph embedding framework. We first construct an item graph from users' behavior history, and learn the embeddings of all items in the graph. The item embeddings are employed to compute pairwise similarities between all items, which are then used in the recommendation process. To alleviate the sparsity and cold start problems, side information is incorporated into the graph embedding framework. We propose two aggregation methods to integrate the embeddings of items and the corresponding side information. 
Experimental results from offline experiments show that methods incorporating side information are superior to those that do not. Further, we describe the platform upon which the embedding methods are deployed and the workflow to process the billion-scale data in Taobao. Using an A/B test, we show that the online Click-Through-Rates (CTRs) are improved compared to the previous collaborative filtering based methods widely used in Taobao, further demonstrating the effectiveness and feasibility of our proposed methods in Taobao's live production environment.", "title": "" }, { "docid": "neg:1840310_3", "text": "Stress granules and processing bodies are related mRNA-containing granules implicated in controlling mRNA translation and decay. A genomic screen identifies numerous factors affecting granule formation, including proteins involved in O-GlcNAc modifications. These results highlight the importance of post-translational modifications in translational control and mRNP granule formation.", "title": "" }, { "docid": "neg:1840310_4", "text": "Songs are representations of audio signals and musical instruments. An audio signal separation system should be able to identify different audio signals such as speech, background noise and music. In a song the singing voice provides useful information regarding pitch range, music content, music tempo and rhythm. An automatic singing voice separation system is used for attenuating or removing the music accompaniment. The paper presents a survey of the various algorithms and methods for separating singing voice from musical background. From the survey it is observed that most researchers used the Robust Principal Component Analysis method for separation of singing voice from music background, by taking into account the rank of music accompaniment and the sparsity of singing voices.", "title": "" }, { "docid": "neg:1840310_5", "text": "Online social media plays an increasingly significant role in shaping the political discourse during elections worldwide. In the 2016 U.S. presidential election, political campaigns strategically designed candidacy announcements on Twitter to produce a significant increase in online social media attention. We use large-scale online social media communications to study the factors of party, personality, and policy in the Twitter discourse following six major presidential campaign announcements for the 2016 U.S. presidential election. We observe that all campaign announcements result in an instant bump in attention, with up to several orders of magnitude increase in tweets. However, we find that Twitter discourse as a result of this bump in attention has overwhelmingly negative sentiment. The bruising criticism, driven by crosstalk from Twitter users of opposite party affiliations, is organized by hashtags such as #NoMoreBushes and #WhyImNotVotingForHillary. We analyze how people take to Twitter to criticize specific personality traits and policy positions of presidential candidates.", "title": "" }, { "docid": "neg:1840310_6", "text": "Appearance changes due to weather and seasonal conditions represent a strong impediment to the robust implementation of machine learning systems in outdoor robotics. While supervised learning optimises a model for the training domain, it will deliver degraded performance in application domains that underlie distributional shifts caused by these changes.
Traditionally, this problem has been addressed via the collection of labelled data in multiple domains or by imposing priors on the type of shift between both domains. We frame the problem in the context of unsupervised domain adaptation and develop a framework for applying adversarial techniques to adapt popular, state-of-the-art network architectures with the additional objective to align features across domains. Moreover, as adversarial training is notoriously unstable, we first perform an extensive ablation study, adapting many techniques known to stabilise generative adversarial networks, and evaluate on a surrogate classification task with the same appearance change. The distilled insights are applied to the problem of free-space segmentation for motion planning in autonomous driving.", "title": "" }, { "docid": "neg:1840310_7", "text": "Transcranial direct current stimulation (tDCS) is a promising tool for neurocognitive enhancement. Several studies have shown that just a single session of tDCS over the left dorsolateral pFC (lDLPFC) can improve the core cognitive function of working memory (WM) in healthy adults. Yet, recent studies combining multiple sessions of anodal tDCS over lDLPFC with verbal WM training did not observe additional benefits of tDCS in subsequent stimulation sessions nor transfer of benefits to novel WM tasks posttraining. Using an enhanced stimulation protocol as well as a design that included a baseline measure each day, the current study aimed to further investigate the effects of multiple sessions of tDCS on WM. Specifically, we investigated the effects of three subsequent days of stimulation with anodal (20 min, 1 mA) versus sham tDCS (1 min, 1 mA) over lDLPFC (with a right supraorbital reference) paired with a challenging verbal WM task. WM performance was measured with a verbal WM updating task (the letter n-back) in the stimulation sessions and several WM transfer tasks (different letter set n-back, spatial n-back, operation span) before and 2 days after stimulation. Anodal tDCS over lDLPFC enhanced WM performance in the first stimulation session, an effect that remained visible 24 hr later. However, no further gains of anodal tDCS were observed in the second and third stimulation sessions, nor did benefits transfer to other WM tasks at the group level. Yet, interestingly, post hoc individual difference analyses revealed that in the anodal stimulation group the extent of change in WM performance on the first day of stimulation predicted pre to post changes on both the verbal and the spatial transfer task. Notably, this relationship was not observed in the sham group. Performance of two individuals worsened during anodal stimulation and on the transfer tasks. Together, these findings suggest that repeated anodal tDCS over lDLPFC combined with a challenging WM task may be an effective method to enhance domain-independent WM functioning in some individuals, but not others, or can even impair WM. They thus call for a thorough investigation into individual differences in tDCS respondence as well as further research into the design of multisession tDCS protocols that may be optimal for boosting cognition across a wide range of individuals.", "title": "" }, { "docid": "neg:1840310_8", "text": "The number of cyber threats is constantly increasing. In 2013, 200,000 malicious tools were identified each day by antivirus vendors. This figure rose to 800,000 per day in 2014 and then to 1.8 million per day in 2016! 
The bar of 3 million per day will be crossed in 2017. Traditional security tools (mainly signature-based) show their limits and are less and less effective to detect these new cyber threats. Detecting never-seen-before or zero-day malware, including ransomware, efficiently requires a new approach in cyber security management. This requires a move from signature-based detection to behavior-based detection. We have developed a data breach detection system named CDS using Machine Learning techniques which is able to identify zero-day malware by analyzing the network traffic. In this paper, we present the capability of the CDS to detect zero-day ransomware, particularly WannaCry.", "title": "" }, { "docid": "neg:1840310_9", "text": "Understanding user participation is fundamental in anticipating the popularity of online content. In this paper, we explore how the number of users' comments during a short observation period after publication can be used to predict the expected popularity of articles published by a countrywide online newspaper. We evaluate a simple linear prediction model on a real dataset of hundreds of thousands of articles and several millions of comments collected over a period of four years. Analyzing the accuracy of our proposed model for different values of its basic parameters we provide valuable insights on the potentials and limitations for predicting content popularity based on early user activity.", "title": "" }, { "docid": "neg:1840310_10", "text": "In this paper, we proposed a system to effectively create music mashups – a kind of re-created music that is made by mixing parts of multiple existing music pieces. Unlike previous studies which merely generate mashups by overlaying music segments on one single base track, the proposed system creates mashups with multiple background (e.g. instrumental) and lead (e.g. vocal) track segments. So, besides the suitability between the vertically overlaid tracks (i.e. vertical mashability) used in previous studies, we proposed to further consider the suitability between the horizontally connected consecutive music segments (i.e. horizontal mashability) when searching for proper music segments to be combined. On the vertical side, two new factors: “harmonic change balance” and “volume weight” have been considered. On the horizontal side, the methods used in the studies of medley creation are incorporated. Combining vertical and horizontal mashabilities together, we defined four levels of mashability that may be encountered and found the proper solution to each of them. Subjective evaluations showed that the proposed four levels of mashability can appropriately reflect the degrees of listening enjoyment. Besides, by taking the newly proposed vertical mashability measurement into account, the improvement in user satisfaction is statistically significant.", "title": "" }, { "docid": "neg:1840310_11", "text": "Neurofeedback is attracting renewed interest as a method to self-regulate one's own brain activity to directly alter the underlying neural mechanisms of cognition and behavior. It not only promises new avenues as a method for cognitive enhancement in healthy subjects, but also as a therapeutic tool. In the current article, we present a review tutorial discussing key aspects relevant to the development of electroencephalography (EEG) neurofeedback studies. In addition, the putative mechanisms underlying neurofeedback learning are considered. 
We highlight both aspects relevant for the practical application of neurofeedback as well as rather theoretical considerations related to the development of new generation protocols. Important characteristics regarding the set-up of a neurofeedback protocol are outlined in a step-by-step way. All these practical and theoretical considerations are illustrated based on a protocol and results of a frontal-midline theta up-regulation training for the improvement of executive functions. Not least, assessment criteria for the validation of neurofeedback studies as well as general guidelines for the evaluation of training efficacy are discussed.", "title": "" }, { "docid": "neg:1840310_12", "text": "We propose split-brain autoencoders, a straightforward modification of the traditional autoencoder architecture, for unsupervised representation learning. The method adds a split to the network, resulting in two disjoint sub-networks. Each sub-network is trained to perform a difficult task – predicting one subset of the data channels from another. Together, the sub-networks extract features from the entire input signal. By forcing the network to solve cross-channel prediction tasks, we induce a representation within the network which transfers well to other, unseen tasks. This method achieves state-of-the-art performance on several large-scale transfer learning benchmarks.", "title": "" }, { "docid": "neg:1840310_13", "text": "The growing popularity of the JSON format has fueled increased interest in loading and processing JSON data within analytical data processing systems. However, in many applications, JSON parsing dominates performance and cost. In this paper, we present a new JSON parser called Mison that is particularly tailored to this class of applications, by pushing down both projection and filter operators of analytical queries into the parser. To achieve these features, we propose to deviate from the traditional approach of building parsers using finite state machines (FSMs). Instead, we follow a two-level approach that enables the parser to jump directly to the correct position of a queried field without having to perform expensive tokenizing steps to find the field.
At the upper level, Mison speculatively predicts the logical locations of queried fields based on previously seen patterns in a dataset. At the lower level, Mison builds structural indices on JSON data to map logical locations to physical locations. Unlike all existing FSM-based parsers, building structural indices converts control flow into data flow, thereby largely eliminating inherently unpredictable branches in the program and exploiting the parallelism available in modern processors. We experimentally evaluate Mison using representative real-world JSON datasets and the TPC-H benchmark, and show that Mison produces significant performance benefits over the best existing JSON parsers; in some cases, the performance improvement is over one order of magnitude.", "title": "" }, { "docid": "neg:1840310_14", "text": "In recent years, many data mining methods have been proposed for finding useful and structured information from market basket data. The association rule model was recently proposed in order to discover useful patterns and dependencies in such data. This paper discusses a method for indexing market basket data efficiently for similarity search. The technique is likely to be very useful in applications which utilize the similarity in customer buying behavior in order to make peer recommendations. We propose an index called the signature table, which is very flexible in supporting a wide range of similarity functions. The construction of the index structure is independent of the similarity function, which can be specified at query time. The resulting similarity search algorithm shows excellent scalability with increasing memory availability and database size.", "title": "" }, { "docid": "neg:1840310_15", "text": "A well used approach for echo cancellation is the two-path method, where two adaptive filters in parallel are utilized. Typically, one filter is continuously updated, and when this filter is considered better adjusted to the echo-path than the other filter, the coefficients of the better adjusted filter are transferred to the other filter. When this transfer should occur is controlled by the transfer logic. This paper proposes transfer logic that is both more robust and more simple to tune, owing to fewer parameters, than the conventional approach. Extensive simulations show the advantages of the proposed method.", "title": "" }, { "docid": "neg:1840310_16", "text": "Spatial multicriteria decision problems are decision problems where one needs to take multiple conflicting criteria as well as geographical knowledge into account. In such a context, exploratory spatial analysis is known to provide tools to visualize as much data as possible on maps but does not integrate multicriteria aspects. Also, none of the tools provided by multicriteria analysis were initially destined to be used in a geographical context. In this paper, we propose an application of the PROMETHEE and GAIA ranking methods to Geographical Information Systems (GIS). The aim is to help decision makers obtain rankings of geographical entities and understand why such rankings have been obtained. To do that, we make use of the visual approach of the GAIA method and adapt it to display the results on geographical maps. This approach is then extended to cover several weaknesses of the adaptation.
Finally, it is applied to a study of the region of Brussels as well as an evaluation of the Human Development Index (HDI) in Europe.", "title": "" }, { "docid": "neg:1840310_18", "text": "Diabetic wounds are unlike typical wounds in that they are slower to heal, making treatment with conventional topical medications an uphill process. Among several different alternative therapies, honey is an effective choice because it provides comparatively rapid wound healing. Although honey has been used as an alternative medicine for wound healing since ancient times, the application of honey to diabetic wounds has only recently been revived. Because honey has some unique natural features as a wound healer, it works even more effectively on diabetic wounds than on normal wounds. In addition, honey is known as an \"all in one\" remedy for diabetic wound healing because it can combat many microorganisms that are involved in the wound process and because it possesses antioxidant activity and controls inflammation. In this review, the potential role of honey's antibacterial activity on diabetic wound-related microorganisms and honey's clinical effectiveness in treating diabetic wounds based on the most recent studies is described. Additionally, ways in which honey can be used as a safer, faster, and effective healing agent for diabetic wounds in comparison with other synthetic medications in terms of microbial resistance and treatment costs are also described to support its traditional claims.", "title": "" }, { "docid": "neg:1840310_19", "text": "With the increasing prevalence of Web 2.0 and cloud computing, password-based logins play an increasingly important role on user-end systems. We use passwords to authenticate ourselves to countless applications and services. However, login credentials can be easily stolen by attackers. In this paper, we present a framework, TrustLogin, to secure password-based logins on commodity operating systems. TrustLogin leverages System Management Mode to protect the login credentials from malware even when OS is compromised. TrustLogin does not modify any system software in either client or server and is transparent to users, applications, and servers. We conduct two study cases of the framework on legacy and secure applications, and the experimental results demonstrate that TrustLogin is able to protect login credentials from real-world keyloggers on Windows and Linux platforms. TrustLogin is robust against spoofing attacks. Moreover, the experimental results also show TrustLogin introduces a low overhead with the tested applications.", "title": "" } ]
1840311
A Heuristics Approach for Fast Detecting Suspicious Money Laundering Cases in an Investment Bank
[ { "docid": "pos:1840311_0", "text": "In this paper, we study the problem of applying data mining to facilitate the investigation of money laundering crimes (MLCs). We have identified a new paradigm of problems --- that of automatic community generation based on uni-party data, the data in which there is no direct or explicit link information available. Consequently, we have proposed a new methodology for Link Discovery based on Correlation Analysis (LDCA). We have used MLC group model generation as an exemplary application of this problem paradigm, and have focused on this application to develop a specific method of automatic MLC group model generation based on timeline analysis using the LDCA methodology, called CORAL. A prototype of CORAL method has been implemented, and preliminary testing and evaluations based on a real MLC case data are reported. The contributions of this work are: (1) identification of the uni-party data community generation problem paradigm, (2) proposal of a new methodology LDCA to solve for problems in this paradigm, (3) formulation of the MLC group model generation problem as an example of this paradigm, (4) application of the LDCA methodology in developing a specific solution (CORAL) to the MLC group model generation problem, and (5) development, evaluation, and testing of the CORAL prototype in a real MLC case data.", "title": "" }, { "docid": "pos:1840311_1", "text": "This text is intended to provide a balanced introduction to machine vision. Basic concepts are introduced with only essential mathematical elements. The details to allow implementation and use of vision algorithm in practical application are provided, and engineering aspects of techniques are emphasized. This text intentionally omits theories of machine vision that do not have sufficient practical applications at the time.", "title": "" } ]
[ { "docid": "neg:1840311_0", "text": "Neural text generation models are often autoregressive language models or seq2seq models. These models generate text by sampling words sequentially, with each word conditioned on the previous word, and are state-of-the-art for several machine translation and summarization benchmarks. These benchmarks are often defined by validation perplexity even though this is not a direct measure of the quality of the generated text. Additionally, these models are typically trained via maximum likelihood and teacher forcing. These methods are well-suited to optimizing perplexity but can result in poor sample quality since generating text requires conditioning on sequences of words that may have never been observed at training time. We propose to improve sample quality using Generative Adversarial Networks (GANs), which explicitly train the generator to produce high quality samples and have shown a lot of success in image generation. GANs were originally designed to output differentiable values, so discrete language generation is challenging for them. We claim that validation perplexity alone is not indicative of the quality of text generated by a model. We introduce an actor-critic conditional GAN that fills in missing text conditioned on the surrounding context. We show, qualitatively and quantitatively, evidence that this produces more realistic conditional and unconditional text samples compared to a maximum likelihood trained model.", "title": "" }, { "docid": "neg:1840311_1", "text": "We present a generative model for the unsupervised learning of dependency structures. We also describe the multiplicative combination of this dependency model with a model of linear constituency. The product model outperforms both components on their respective evaluation metrics, giving the best published figures for unsupervised dependency parsing and unsupervised constituency parsing. We also demonstrate that the combined model works and is robust cross-linguistically, being able to exploit either attachment or distributional regularities that are salient in the data.", "title": "" }, { "docid": "neg:1840311_2", "text": "This paper describes a new maximum-power-point-tracking method for a photovoltaic system based on the Lagrange Interpolation Formula and proposes the particle swarm optimization method. The proposed control scheme eliminates the problems of conventional methods by using only a simple numerical calculation to initialize the particles around the global maximum power point. Hence, the suggested control scheme will utilize fewer iterations to reach the maximum power point. Simulation study is carried out using MATLAB/SIMULINK and compared with the Perturb and Observe method, the Incremental Conductance method, and the conventional Particle Swarm Optimization algorithm. The proposed algorithm is verified with the OPAL-RT real-time simulator. The simulation results confirm that the proposed algorithm can effectively enhance the stability and the fast tracking capability under abnormal insolation conditions.", "title": "" }, { "docid": "neg:1840311_3", "text": "This research developed and tested a model of turnover contagion in which the job embeddedness and job search behaviors of coworkers influence employees' decisions to quit.
In a sample of 45 branches of a regional bank and 1,038 departments of a national hospitality firm, multilevel analysis revealed that coworkers’ job embeddedness and job search behaviors explain variance in individual “voluntary turnover” over and above that explained by other individual and group-level predictors. Broadly speaking, these results suggest that coworkers’ job embeddedness and job search behaviors play critical roles in explaining why people quit their jobs. Implications are discussed.", "title": "" }, { "docid": "neg:1840311_4", "text": "Deep Neural Networks (DNNs) have recently been shown to be vulnerable against adversarial examples, which are carefully crafted instances that can mislead DNNs to make errors during prediction. To better understand such attacks, a characterization is needed of the properties of regions (the so-called ‘adversarial subspaces’) in which adversarial examples lie. We tackle this challenge by characterizing the dimensional properties of adversarial regions, via the use of Local Intrinsic Dimensionality (LID). LID assesses the space-filling capability of the region surrounding a reference example, based on the distance distribution of the example to its neighbors. We first provide explanations about how adversarial perturbation can affect the LID characteristic of adversarial regions, and then show empirically that LID characteristics can facilitate the distinction of adversarial examples generated using state-of-the-art attacks. As a proof-of-concept, we show that a potential application of LID is to distinguish adversarial examples, and the preliminary results show that it can outperform several state-of-the-art detection measures by large margins for five attack strategies considered in this paper across three benchmark datasets. Our analysis of the LID characteristic for adversarial regions not only motivates new directions of effective adversarial defense, but also opens up more challenges for developing new attacks to better understand the vulnerabilities of DNNs.", "title": "" }, { "docid": "neg:1840311_5", "text": "Plants are natural producers of chemical substances, providing potential treatment of human ailments since ancient times. Some herbal chemicals in medicinal plants of traditional and modern medicine carry the risk of herb induced liver injury (HILI) with a severe or potentially lethal clinical course, and the requirement of a liver transplant. Discontinuation of herbal use is mandatory in time when HILI is first suspected as diagnosis. Although herbal hepatotoxicity is of utmost clinical and regulatory importance, lack of a stringent causality assessment remains a major issue for patients with suspected HILI, while this problem is best overcome by the use of the hepatotoxicity specific CIOMS (Council for International Organizations of Medical Sciences) scale and the evaluation of unintentional reexposure test results. Sixty five different commonly used herbs, herbal drugs, and herbal supplements and 111 different herbs or herbal mixtures of the traditional Chinese medicine (TCM) are reported causative for liver disease, with levels of causality proof that appear rarely conclusive. Encouraging steps in the field of herbal hepatotoxicity focus on introducing analytical methods that identify cases of intrinsic hepatotoxicity caused by pyrrolizidine alkaloids, and on omics technologies, including genomics, proteomics, metabolomics, and assessing circulating micro-RNA in the serum of some patients with intrinsic hepatotoxicity.
It remains to be established whether these new technologies can identify idiosyncratic HILI cases. To enhance its globalization, herbal medicine should universally be marketed as herbal drugs under strict regulatory surveillance in analogy to regulatory approved chemical drugs, proving a positive risk/benefit profile by enforcing evidence based clinical trials and excellent herbal drug quality.", "title": "" }, { "docid": "neg:1840311_6", "text": "We propose a novel, projection based way to incorporate the conditional information into the discriminator of GANs that respects the role of the conditional information in the underlying probabilistic model. This approach is in contrast with most frameworks of conditional GANs used in application today, which use the conditional information by concatenating the (embedded) conditional vector to the feature vectors. With this modification, we were able to significantly improve the quality of the class conditional image generation on ILSVRC2012 (ImageNet) 1000-class image dataset from the current state-of-the-art result, and we achieved this with a single pair of a discriminator and a generator. We were also able to extend the application to super-resolution and succeeded in producing highly discriminative super-resolution images. This new structure also enabled high quality category transformation based on parametric functional transformation of conditional batch normalization layers in the generator. The code with Chainer (Tokui et al., 2015), generated images and pretrained models are available at https://github.com/pfnet-research/sngan_projection.", "title": "" }, { "docid": "neg:1840311_7", "text": "Lincoln Laboratory led the nation in the development of high-power wideband radar with a unique capability for resolving target scattering centers and producing three-dimensional images of individual targets. The Laboratory fielded the first wideband radar, called ALCOR, in 1970 at Kwajalein Atoll. Since 1970 the Laboratory has developed and fielded several other wideband radars for use in ballistic-missile-defense research and space-object identification. In parallel with these radar systems, the Laboratory has developed high-capacity, high-speed signal and data processing techniques and algorithms that permit generation of target images and derivation of other target features in near real time. It has also pioneered new ways to realize improved resolution and scatterer-feature identification in wideband radars by the development and application of advanced signal processing techniques. Through the analysis of dynamic target images and other wideband observables, we can acquire knowledge of target form, structure, materials, motion, mass distribution, identifying features, and function. Such capability is of great benefit in ballistic missile decoy discrimination and in space-object identification.", "title": "" }, { "docid": "neg:1840311_8", "text": "This paper presents a tutorial on delta-sigma fractional-N PLLs for frequency synthesis. The presentation assumes the reader has a working knowledge of integer-N PLLs. It builds on this knowledge by introducing the additional concepts required to understand ΔΣ fractional-N PLLs. After explaining the limitations of integer-N PLLs with respect to tuning resolution, the paper introduces the delta-sigma fractional-N PLL as a means of avoiding these limitations.
It then presents a self-contained explanation of the relevant aspects of delta-sigma modulation, an extension of the well-known integer-N PLL linearized model to delta-sigma fractional-N PLLs, a design example, and techniques for wideband digital modulation of the VCO within a delta-sigma fractional-N PLL.", "title": "" }, { "docid": "neg:1840311_9", "text": "A novel compact-size branch-line coupler using composite right/left-handed transmission lines is proposed in this paper. In order to obtain miniaturization, composite right/left-handed transmission lines with novel complementary split single ring resonators which are realized by loading a pair of meander-shaped-slots in the split of the ring are designed. This novel coupler occupies only 22.8% of the area of the conventional approach at 0.7 GHz. The proposed coupler can be implemented by using the standard printed-circuit-board etching processes without any implementation of lumped elements and via-holes, making it very useful for wireless communication systems. The agreement between measured and simulated results validates the feasible configuration of the proposed coupler.", "title": "" }, { "docid": "neg:1840311_10", "text": "This study focuses on how technology can encourage and ease awkwardness-free communications between people in real-world scenarios. We propose a device, The Wearable Aura, able to project a personalized animation onto one's Personal Distance zone. This projection, as an extension of oneself, is reactive to the user's cognitive status, aware of its environment, context and user's activity. Our user study supports the idea that an interactive projection around an individual can indeed benefit the communications with other individuals.", "title": "" }, { "docid": "neg:1840311_11", "text": "An approach is proposed to estimate the location, velocity, and acceleration of a target vehicle to avoid a possible collision. Radial distance, velocity, and acceleration are extracted from the hybrid linear frequency modulation (LFM)/frequency-shift keying (FSK) echoed signals and then processed using the Kalman filter and the trilateration process. This approach proves to converge fast with good accuracy. Two other approaches, i.e., an extended Kalman filter (EKF) and a two-stage Kalman filter (TSKF), are used as benchmarks for comparison. Several scenarios of vehicle movement are also presented to demonstrate the effectiveness of this approach.", "title": "" }, { "docid": "neg:1840311_12", "text": "The cyber world provides an anonymous environment for criminals to conduct malicious activities such as spamming, sending ransom e-mails, and spreading botnet malware. Often, these activities involve textual communication between a criminal and a victim, or between criminals themselves. The forensic analysis of online textual documents for addressing the anonymity problem called authorship analysis is the focus of most cybercrime investigations. Authorship analysis is the statistical study of linguistic and computational characteristics of the written documents of individuals. This paper is the first work that presents a unified data mining solution to address authorship analysis problems based on the concept of frequent pattern-based writeprint. Extensive experiments on real-life data suggest that our proposed solution can precisely capture the writing styles of individuals.
Furthermore, the writeprint is effective to identify the author of an anonymous text from a group of suspects and to infer sociolinguistic characteristics of the author.", "title": "" }, { "docid": "neg:1840311_13", "text": "Detection of defects in induction machine rotor bars for unassembled motors is required to evaluate machines considered for repair as well as fulfilling incremental quality assurance checks in the manufacture of new machines. Detection of rotor bar defects prior to motor assembly is critical in increasing repair efficiency and assuring the quality of newly manufactured machines. Many methods of detecting rotor bar defects in unassembled motors lack the sensitivity to find both major and minor defects in both cast and fabricated rotors along with additional deficiencies in quantifiable test results and arc-flash safety hazards. A process of direct magnetic field analysis can examine measurements from induced currents in a rotor separated from its stator yielding a high-resolution fingerprint of a rotor's magnetic field. This process identifies both major and minor rotor bar defects in a repeatable and quantifiable manner appropriate for numerical evaluation without arc-flash safety hazards.", "title": "" }, { "docid": "neg:1840311_14", "text": "A novel planar end-fire circularly polarized (CP) complementary Yagi array antenna is proposed. The antenna has a compact and complementary structure, and exhibits excellent properties (low profile, single feed, broadband, high gain, and CP radiation). It is based on a compact combination of a pair of complementary Yagi arrays with a common driven element. In the complementary structure, the vertical polarization is contributed by a microstrip patch Yagi array, while the horizontal polarization is yielded by a strip dipole Yagi array. With the combination of the two orthogonally polarized Yagi arrays, a CP antenna with high gain and wide bandwidth is obtained. With a profile of 0.05λ0 (3 mm), the antenna has a gain of about 8 dBic, an impedance bandwidth (|S11| < -10 dB) of 13.09% (4.57–5.21 GHz) and a 3-dB axial-ratio bandwidth of 10.51% (4.69–5.21 GHz).", "title": "" }, { "docid": "neg:1840311_15", "text": "In this paper, we describe a positioning control for a SCARA robot using a recurrent neural network. The simultaneous perturbation optimization method is used for the learning rule of the recurrent neural network. Then the recurrent neural network learns inverse dynamics of the SCARA robot. We present details of the control scheme using the simultaneous perturbation. Moreover, we consider an example for two target positions using an actual SCARA robot. The result is shown.", "title": "" }, { "docid": "neg:1840311_16", "text": "Question Answering is a task which requires building models capable of providing answers to questions expressed in human language. Full question answering involves some form of reasoning ability.
We introduce a neural network architecture for this task, a form of Memory Network that recognizes entities and their relations to answers through a focus attention mechanism. Our model is named Question Dependent Recurrent Entity Network and extends the Recurrent Entity Network by exploiting aspects of the question during the memorization process. We validate the model on both synthetic and real datasets: the bAbI question answering dataset and the CNN & Daily News reading comprehension dataset. In our experiments, the model achieved state-of-the-art results in the former and competitive results in the latter.", "title": "" }, { "docid": "neg:1840311_17", "text": "INTRODUCTION\nBirth preparedness and complication readiness (BPCR) is a strategy to promote the timely use of skilled maternal and neonatal care during childbirth. According to the World Health Organization, BPCR should be a key component of focused antenatal care. Dakshina Kannada, a coastal district of Karnataka state, is categorized as a high-performing district (institutional delivery rate >25%) under the National Rural Health Mission. However, a substantial proportion of women in the district experience complications during pregnancy (58.3%), childbirth (45.7%), and the postnatal period (17.4%). There is a paucity of data on BPCR practice and the factors associated with it in the district. Exploring this would be of great use in the evidence-based fine-tuning of ongoing maternal and child health interventions.\n\n\nOBJECTIVE\nTo assess BPCR practice and the factors associated with it among the beneficiaries of two rural Primary Health Centers (PHCs) of Dakshina Kannada district, Karnataka, India.\n\n\nMETHODS\nA facility-based cross-sectional study was conducted among 217 pregnant (>28 weeks of gestation) and recently delivered (in the last 6 months) women in two randomly selected PHCs from June to September 2013. Exit interviews were conducted using a pre-designed semi-structured interview schedule. Information regarding socio-demographic profile, obstetric variables, and knowledge of key danger signs was collected. BPCR included information on five key components: identifying the place of delivery, saving money to pay for expenses, identifying the mode of transport, identifying a birth companion, and arranging a blood donor if the need arises. In this study, a woman who recalled at least two key danger signs in each of the three phases, i.e., pregnancy, childbirth, and postpartum (total six), was considered knowledgeable about key danger signs. Optimal BPCR practice was defined as following at least three out of five key components of BPCR.\n\n\nOUTCOME MEASURES\nProportion, odds ratio, and adjusted odds ratio (adj OR) for optimal BPCR practice.\n\n\nRESULTS\nA total of 184 women completed the exit interview (mean age: 26.9±3.9 years). Optimal BPCR practice was observed in 79.3% (95% CI: 73.5-85.2%) of the women.
Multivariate logistic regression revealed age >26 years (adj OR = 2.97; 95% CI: 1.15-7.7), economic status above the poverty line (adj OR = 4.3; 95% CI: 1.12-16.5), awareness of a minimum of two key danger signs in each of the three phases, i.e., pregnancy, childbirth, and postpartum (adj OR = 3.98; 95% CI: 1.4-11.1), preference for the private health sector for antenatal care/delivery (adj OR = 2.9; 95% CI: 1.1-8.01), and the woman's discussion of BPCR with her family members (adj OR = 3.4; 95% CI: 1.1-10.4) as significant factors associated with optimal BPCR practice.\n\n\nCONCLUSION\nIn this study population, BPCR practice was better than that reported in other studies from India. Healthcare workers at the grassroots should be encouraged to involve women's family members while explaining BPCR and key danger signs, with a special emphasis on young (<26 years) and economically poor women. Ensuring a reinforcing discussion between a woman and her family members may further enhance BPCR practice.", "title": "" }, { "docid": "neg:1840311_18", "text": "Budget optimization is one of the primary decision-making issues faced by advertisers in search auctions. A quality budget optimization strategy can significantly improve the effectiveness of search advertising campaigns, thus helping advertisers to succeed in the fierce competition of online marketing. This paper investigates budget optimization problems in search advertisements and proposes a novel hierarchical budget optimization framework (BOF) that considers the entire life cycle of advertising campaigns. We then formulate our BOF framework, provide a mathematical analysis of some desirable properties, and present an effective solution algorithm. Moreover, we establish a simple but illustrative instantiation of our BOF framework that can help advertisers to allocate and adjust the budget of search advertising campaigns. Our BOF framework provides an open testbed environment for various strategies of budget allocation and adjustment across search advertising markets. With field reports and logs from real-world search advertising campaigns, we designed experiments to evaluate the effectiveness of our BOF framework and the instantiated strategies. Experimental results are quite promising, with our BOF framework and instantiated strategies performing better than two baseline budget strategies commonly used in practical advertising campaigns.", "title": "" }, { "docid": "neg:1840311_19", "text": "Recent years have witnessed the fast development of UAVs (unmanned aerial vehicles). As an alternative to traditional image acquisition methods, UAVs bridge the gap between terrestrial and airborne photogrammetry and enable flexible acquisition of high-resolution images. However, the georeferencing accuracy of UAVs is still limited by the low-performance on-board GNSS and INS. This paper investigates automatic geo-registration of an individual UAV image or UAV image blocks by matching the UAV image(s) with a previously taken georeferenced image, such as an individual aerial or satellite image with a height map attached or an aerial orthophoto with a DSM (digital surface model) attached. As the biggest challenge for matching UAV and aerial images lies in the large differences in scale and rotation, we propose a novel feature matching method for nadir or slightly tilted images. The method consists of a dense feature detection scheme, a one-to-many matching strategy, and a global geometric verification scheme.
The proposed method is able to find thousands of valid matches in cases where SIFT and ASIFT fail. These matches can be used to geo-register the whole UAV image block to the reference image data. When the reference images offer high georeferencing accuracy, the UAV images can also be geolocalized in a global coordinate system. A series of experiments involving different scenarios was conducted to validate the proposed method. The results demonstrate that our approach achieves not only decimeter-level registration accuracy, but also global accuracy comparable to that of the reference images.", "title": "" } ]
1840312
A Learning-based Neural Network Model for the Detection and Classification of SQL Injection Attacks
[ { "docid": "pos:1840312_0", "text": "The use of web applications has become increasingly popular in our routine activities, such as reading the news, paying bills, and shopping on-line. As the availability of these services grows, we are witnessing an increase in the number and sophistication of attacks that target them. In particular, SQL injection, a class of code-injection attacks in which specially crafted input strings result in illegal queries to a database, has become one of the most serious threats to web applications. In this paper we present and evaluate a new technique for detecting and preventing SQL injection attacks. Our technique uses a model-based approach to detect illegal queries before they are executed on the database. In its static part, the technique uses program analysis to automatically build a model of the legitimate queries that could be generated by the application. In its dynamic part, the technique uses runtime monitoring to inspect the dynamically-generated queries and check them against the statically-built model. We developed a tool, AMNESIA, that implements our technique and used the tool to evaluate the technique on seven web applications. In the evaluation we targeted the subject applications with a large number of both legitimate and malicious inputs and measured how many attacks our technique detected and prevented. The results of the study show that our technique was able to stop all of the attempted attacks without generating any false positives.", "title": "" } ]
[ { "docid": "neg:1840312_0", "text": "When human agents come together to make decisions it is often the case that one human agent has more information than the other and this phenomenon is called information asymmetry and this distorts the market. Often if one human agent intends to manipulate a decision in its favor the human agent can signal wrong or right information. Alternatively, one human agent can screen for information to reduce the impact of asymmetric information on decisions. With the advent of artificial intelligence, signaling and screening have been made easier. This chapter studies the impact of artificial intelligence on the theory of asymmetric information. It is surmised that artificial intelligent agents reduce the degree of information asymmetry and thus the market where these agents are deployed become more efficient. It is also postulated that the more artificial intelligent agents there are deployed in the market the less is the volume of trades in the market. This is because for trade to happen the asymmetry of information on goods and services to be traded should exist.", "title": "" }, { "docid": "neg:1840312_1", "text": "This paper describes our proposed solutions designed for a STS core track within the SemEval 2016 English Semantic Textual Similarity (STS) task. Our method of similarity detection combines recursive autoencoders with a WordNet award-penalty system that accounts for semantic relatedness, and an SVM classifier, which produces the final score from similarity matrices. This solution is further supported by an ensemble classifier, combining an aligner with a bi-directional Gated Recurrent Neural Network and additional features, which then performs Linear Support Vector Regression to determine another set of scores.", "title": "" }, { "docid": "neg:1840312_2", "text": "Many people rely on Web-based tutorials to learn how to use complex software. Yet, it remains difficult for users to systematically explore the set of tutorials available online. We present Sifter, an interface for browsing, comparing and analyzing large collections of image manipulation tutorials based on their command-level structure. Sifter first applies supervised machine learning to identify the commands contained in a collection of 2500 Photoshop tutorials obtained from the Web. It then provides three different views of the tutorial collection based on the extracted command-level structure: (1) A Faceted Browser View allows users to organize, sort and filter the collection based on tutorial category, command names or on frequently used command subsequences, (2) a Tutorial View summarizes and indexes tutorials by the commands they contain, and (3) an Alignment View visualizes the commandlevel similarities and differences between a subset of tutorials. An informal evaluation (n=9) suggests that Sifter enables users to successfully perform a variety of browsing and analysis tasks that are difficult to complete with standard keyword search. We conclude with a meta-analysis of our Photoshop tutorial collection and present several implications for the design of image manipulation software. ACM Classification H5.2 [Information interfaces and presentation]: User Interfaces. Graphical user interfaces. Author", "title": "" }, { "docid": "neg:1840312_3", "text": "We explore unsupervised approaches to relation extraction between two named entities; for instance, the semantic bornIn relation between a person and location entity. 
Concretely, we propose a series of generative probabilistic models, broadly similar to topic models, each of which generates a corpus of observed triples of entity mention pairs and the surface syntactic dependency path between them. The output of each model is a clustering of observed relation tuples and their associated textual expressions to underlying semantic relation types. Our proposed models exploit entity type constraints within a relation as well as features on the dependency path between entity mentions. We examine the effectiveness of our approach via multiple evaluations and demonstrate a 12% error reduction in precision over a state-of-the-art weakly supervised baseline.", "title": "" }, { "docid": "neg:1840312_4", "text": "Deep representations, in particular ones implemented by convolutional neural networks, have led to good progress on many learning problems. However, the learned representations are hard to analyze and interpret, even when they are extracted from visual data. We propose a new approach to study deep image representations by inverting them with an up-convolutional neural network. Application of this method to a deep network trained on ImageNet provides numerous insights into the properties of the feature representation. Most strikingly, the colors and the rough contours of an input image can be reconstructed from activations in higher network layers and even from the predicted class probabilities.", "title": "" }, { "docid": "neg:1840312_5", "text": "The alerts produced by network-based intrusion detection systems, e.g., Snort, can be difficult for network administrators to efficiently review and respond to, due to the enormous number of alerts generated in a short time frame. This work describes how the visualization of raw IDS alert data assists network administrators in understanding the current state of a network and quickens the process of reviewing and responding to intrusion attempts. The project presented in this work consists of three primary components. The first component provides a visual mapping of the network topology that allows the end-user to easily browse clustered alerts. The second component is based on the flocking behavior of birds, in which birds tend to follow other birds with similar behaviors. This component allows the end-user to see the clustering process and provides an efficient means for reviewing alert data. The third component discovers and visualizes patterns of multistage attacks by profiling the attacker’s behaviors.", "title": "" }, { "docid": "neg:1840312_6", "text": "This paper addresses the problem of topic distillation on the World Wide Web, namely, given a typical user query, finding quality documents related to the query topic. Connectivity analysis has been shown to be useful in identifying high-quality pages within a topic-specific graph of hyperlinked documents. The essence of our approach is to augment a previous connectivity-analysis-based algorithm with content analysis. We identify three problems with the existing approach and devise algorithms to tackle them. The results of a user evaluation are reported that show an improvement of precision at 10 documents by at least 45% over pure connectivity analysis.", "title": "" }, { "docid": "neg:1840312_7", "text": "We present an extension to the Tacotron speech synthesis architecture that learns a latent embedding space of prosody, derived from a reference acoustic representation containing the desired prosody.
We show that conditioning Tacotron on this learned embedding space results in synthesized audio that matches the prosody of the reference signal with fine time detail, even when the reference and synthesis speakers are different. Additionally, we show that a reference prosody embedding can be used to synthesize text that is different from that of the reference utterance. We define several quantitative and subjective metrics for evaluating prosody transfer, and report results with accompanying audio samples from single-speaker and 44-speaker Tacotron models on a prosody transfer task.", "title": "" }, { "docid": "neg:1840312_8", "text": "Methods for Named Entity Recognition and Disambiguation (NERD) perform NER and NED in two separate stages. Therefore, NED may be penalized with respect to precision by NER false positives, and suffers in recall from NER false negatives. Conversely, NED does not fully exploit information computed by NER, such as types of mentions. This paper presents J-NERD, a new approach to perform NER and NED jointly, by means of a probabilistic graphical model that captures mention spans, mention types, and the mapping of mentions to entities in a knowledge base. We present experiments with different kinds of texts from the CoNLL’03, ACE’05, and ClueWeb’09-FACC1 corpora. J-NERD consistently outperforms state-of-the-art competitors in end-to-end NERD precision, recall, and F1.", "title": "" }, { "docid": "neg:1840312_9", "text": "OBJECTIVE\nThe effect of peppermint on exercise performance has previously been investigated, but equivocal findings exist. This study aimed to investigate the effects of peppermint ingestion on physiological parameters and exercise performance after 5 min and 1 h.\n\n\nMATERIALS AND METHODS\nThirty healthy male university students were randomly divided into experimental (n=15) and control (n=15) groups. Maximum isometric grip force, vertical and long jumps, spirometric parameters, visual and audio reaction times, blood pressure, heart rate, and breath rate were recorded three times: before, five minutes after, and one hour after single-dose oral administration of peppermint essential oil (50 µl). Data were analyzed using repeated measures ANOVA.\n\n\nRESULTS\nOur results revealed significant improvement in all of the variables after oral administration of peppermint essential oil. The experimental group, compared with the control group, showed an incremental and significant increase in grip force (36.1%), standing vertical jump (7.0%), and standing long jump (6.4%). Data obtained from the experimental group after five minutes exhibited a significant increase in the forced vital capacity in first second (FVC1) (35.1%), peak inspiratory flow rate (PIF) (66.4%), and peak expiratory flow rate (PEF) (65.1%), whereas after one hour, only PIF showed a significant increase compared with the baseline and the control group. At both times, visual and audio reaction times were significantly decreased. Physiological parameters were also significantly improved after five minutes. A considerable enhancement in grip force, spirometry, and other parameters was among the important findings of this study. Conclusion: An improvement in the spirometric measurements (FVC1, PEF, and PIF) might be due to the effects of peppermint on bronchial smooth muscle tonicity, with or without affecting the lung surfactant.
Yet, no scientific evidence exists regarding isometric force enhancement in this novel study.", "title": "" }, { "docid": "neg:1840312_10", "text": "One of the most distinctive linguistic characteristics of modern academic writing is its reliance on nominalized structures. These include nouns that have been morphologically derived from verbs (e.g., development, progression) as well as verbs that have been ‘converted’ to nouns (e.g., increase, use). Almost any sentence taken from an academic research article will illustrate the use of such structures. For example, consider the opening sentences from three education research articles; derived nominalizations are underlined and given in italics: 1", "title": "" }, { "docid": "neg:1840312_11", "text": "This paper investigates the design of power- and spectrally efficient coded modulations based on amplitude phase shift keying (APSK) with application to broadband satellite communications. Emphasis is put on 64APSK constellations. APSK modulation has merits for digital transmission over nonlinear satellite channels due to its power and spectral efficiency combined with its inherent robustness against nonlinear distortion. This scheme has been adopted in the DVB-S2 standard for satellite digital video broadcasting. Assuming an ideal rectangular transmission pulse, for which no nonlinear inter-symbol interference is present and perfect pre-compensation of the nonlinearity takes place, we optimize the 64APSK constellation design by employing an optimization criterion based on the mutual information. This method generates an optimum constellation for each operating SNR point, that is, for each spectral efficiency. Two separate cases of interest are particularly examined: (i) the equiprobable case, where all constellation points are equiprobable, and (ii) the non-equiprobable case, where the constellation points on each ring are assumed to be equiprobable but the a priori symbol probability associated with each ring is assumed to be different. Following the mutual information-based optimization approach in each case, detailed simulation results are obtained for the optimal 64APSK constellation settings as well as the achievable shaping gain.", "title": "" }, { "docid": "neg:1840312_12", "text": "As social networking sites become more popular, spammers target these sites to spread spam posts. Twitter is one of the most popular online social networking sites, where users communicate and interact on various topics. Most of the current spam filtering methods for Twitter focus on detecting spammers and blocking them. However, spammers can create a new account and start posting new spam tweets again. So there is a need for robust spam detection techniques that detect spam at the tweet level. These types of techniques can prevent spam in real time. To detect spam at the tweet level, features are often defined and appropriate machine learning algorithms are applied in the literature. Recently, deep learning methods have been showing fruitful results on several natural language processing tasks. We want to use the potential benefits of these two types of methods for our problem. Toward this, we propose an ensemble approach for spam detection at the tweet level. We develop various deep learning models based on convolutional neural networks (CNNs). Five CNNs and one feature-based model are used in the ensemble. Each CNN uses different word embeddings (GloVe, Word2vec) to train the model.
The feature-based model uses content-based, user-based, and n-gram features. Our approach combines both deep learning and traditional feature-based models using a multilayer neural network that acts as a meta-classifier. We evaluate our method on two data sets: one is balanced and the other is imbalanced. The experimental results show that our proposed method outperforms the existing methods.", "title": "" }, { "docid": "neg:1840312_13", "text": "A voltage reference circuit operating with all transistors biased in weak inversion, providing a mean reference voltage of 257.5 mV, has been fabricated in 0.18 µm CMOS technology. The reference voltage can be approximated by the difference of transistor threshold voltages at room temperature. Accurate subthreshold design allows the circuit to work at room temperature with supply voltages down to 0.45 V and an average current consumption of 5.8 nA. Measurements performed over a set of 40 samples showed an average temperature coefficient of 165 ppm/°C with a standard deviation of 100 ppm/°C, in a temperature range from 0 to 125°C. The mean line sensitivity is ≈0.44%/V, for supply voltages ranging from 0.45 to 1.8 V. The power supply rejection ratio measured at 30 Hz and simulated at 10 MHz is lower than -40 dB and -12 dB, respectively. The active area of the circuit is ≈0.043 mm2.", "title": "" }, { "docid": "neg:1840312_14", "text": "Supervised learning, e.g., classification, plays an important role in processing and organizing microblogging data. In microblogging, it is easy to amass vast quantities of unlabeled data, but it would be costly to obtain labels, which are essential for supervised learning algorithms. In order to reduce the labeling cost, active learning is an effective way to select representative and informative instances to query for labels to improve the learned model. Different from traditional data, in which the instances are assumed to be independent and identically distributed (i.i.d.), instances in microblogging are networked with each other. This presents both opportunities and challenges for applying active learning to microblogging data. Inspired by social correlation theories, we investigate whether social relations can help perform effective active learning on networked data. In this paper, we propose a novel Active learning framework for the classification of Networked Texts in microblogging (ActNeT). In particular, we study how to incorporate network information into text content modeling, and design strategies to select the most representative and informative instances from microblogging for labeling by taking advantage of the social network structure. Experimental results on Twitter datasets show the benefit of incorporating network information in active learning and that the proposed framework outperforms existing state-of-the-art methods.", "title": "" }, { "docid": "neg:1840312_15", "text": "A fully integrated silicon-based 94-GHz direct-detection imaging receiver with an on-chip Dicke switch and baseband circuitry is demonstrated. Fabricated in a 0.18-µm SiGe BiCMOS technology (fT/fMAX = 200 GHz), the receiver chip achieves a peak imager responsivity of 43 MV/W with a 3-dB bandwidth of 26 GHz. A balanced LNA topology with an embedded Dicke switch provides 30-dB gain and enables a temperature resolution of 0.3–0.4 K.
The imager chip consumes 200 mW from a 1.8-V supply.", "title": "" }, { "docid": "neg:1840312_16", "text": "This paper presents a novel multiple-frequency resonant inverter for induction heating (IH) applications. By adopting a center-tap transformer, the proposed resonant inverter can provide a load switching frequency twice the isolated-gate bipolar transistor (IGBT) switching frequency. The structure and the operation of the proposed topology are described in order to demonstrate how the output frequency of the proposed resonant inverter is twice the switching frequency of the IGBTs. In addition, the IGBTs in the proposed topology work in zero-voltage switching during the turn-on phase of the switches. The new topology is verified by experimental results using a prototype for IH applications. Moreover, the increased efficiency of the proposed inverter is verified by comparison with conventional designs.", "title": "" }, { "docid": "neg:1840312_17", "text": "This research applied both traditional and fuzzy control methods to mobile satellite antenna tracking system design. The antenna tracking and stabilization loops were first designed according to the bandwidth and phase margin requirements. However, the performance would be degraded if the tracking loop gain were reduced due to parameter variation. On the other hand, a PD-type fuzzy controller was also applied to the tracking loop design. It can be seen that the system performance obtained by the fuzzy controller was better for low antenna tracking gain. Thus, this research proposed an adaptive law that selects either the traditional or the fuzzy controller for the antenna tracking system depending on the tracking loop gain, so that the effect of tracking gain parameter variation can be reduced.", "title": "" }, { "docid": "neg:1840312_18", "text": "This paper demonstrates key capabilities of Cognitive Database, a novel AI-enabled relational database system that uses an unsupervised neural network model to facilitate semantic queries over relational data. The neural network model, called word embedding, operates on an unstructured view of the database and builds a vector model that captures the latent semantic context of database entities of different types. The vector model is then seamlessly integrated into the SQL infrastructure and exposed to users via a new class of SQL-based analytics queries known as cognitive intelligence (CI) queries. The cognitive capabilities enable complex queries over multi-modal data, such as semantic matching, inductive reasoning queries such as analogies, and predictive queries using entities not present in a database. We plan to demonstrate the end-to-end execution flow of the cognitive database using a Spark-based prototype. Furthermore, we demonstrate the use of CI queries on a publicly available enterprise financial dataset (with text and numeric values). A Jupyter Notebook Python-based implementation will also be presented.", "title": "" } ]