_id | text |
---|---|
5c1ab86362dec6f9892a5e4055a256fa5c1772af | The specification of the long term evolution (LTE) of 3G systems is currently ongoing in 3GPP, with a target date for completed specifications at the end of 2007. The evolved radio access network (RAN) involves a new radio interface based on OFDM technology and a radically different RAN architecture, where radio functionality is distributed into the base stations. The distributed nature of the RAN architecture calls for new radio control algorithms and procedures that operate in a distributed manner, including a distributed handover scheme. The most important aspects of the handover procedure in LTE have already been settled in 3GPP, except for a few details. In this paper we give an overview of the LTE intra-access handover procedure and evaluate its performance, focusing on its user-perceived aspects. We investigate the necessity of packet forwarding from a TCP throughput point of view, analyse the problem of out-of-order packet delivery during handover, and propose a simple solution for it. Finally, we investigate the impact of HARQ/ARQ state discard at handover on radio efficiency. The results show that neither the user-perceived performance nor the radio efficiency is compromised by the relocation-based handover procedure of LTE. |
3fb91bbffa86733fc68d4145e7f081353eb3dcd8 | Electromyography (EMG) signals can be used for clinical/biomedical applications, Evolvable Hardware Chip (EHW) development, and modern human computer interaction. EMG signals acquired from muscles require advanced methods for detection, decomposition, processing, and classification. The purpose of this paper is to illustrate the various methodologies and algorithms for EMG signal analysis to provide efficient and effective ways of understanding the signal and its nature. We further highlight some of the hardware implementations using EMG, focusing on applications related to prosthetic hand control, grasp recognition, and human computer interaction. A comparison study is also given to show the performance of various EMG signal analysis methods. This paper provides researchers with a good understanding of the EMG signal and its analysis procedures. This knowledge will help them develop more powerful, flexible, and efficient applications. |
ab71da348979c50d33700bc2f6ddcf25b4c8cfd0 | |
6cc4a3d0d8a278d30e05418afeaf6b8e5d04d3d0 | |
6180a8a082c3d0e85dcb9cec3677923ff7633bb9 | Since the inauguration of Information Systems Research (ISR) two decades ago, the information systems (IS) field's attention has moved beyond administrative systems and individual tools. Millions of users log onto Facebook, download iPhone applications, and use mobile services to create decentralized work organizations. Understanding these new dynamics will necessitate the field paying attention to digital infrastructures as a category of IT artifacts. A state-of-the-art review of the literature reveals a growing interest in digital infrastructures but also confirms that the field has yet to put infrastructure at the centre of its research endeavor. To assist this shift we propose three new directions for IS research: (1) theories of the nature of digital infrastructure as a separate type of IT artifact, sui generis; (2) digital infrastructures as relational constructs shaping all traditional IS research areas; (3) paradoxes of change and control as salient IS phenomena. We conclude with suggestions for how to study longitudinal, large-scale sociotechnical phenomena while striving to remain attentive to the limitations of the traditional categories that have guided IS research. |
e83a2fa459ba921fb176beacba96038a502ff64d | label(s) Label Class c) b) a) 7.3 Integration Strategies 241 Combination strategy A combination strategy (also called combination scheme) is a technique used to combine the output of the individual classifiers. The most popular combination strategies at the abstract level are based on majority vote rules, which simply assign an input pattern to the most voted class (see Section 7.2). When two classifiers are combined, either a logical AND or a logical OR operator is typically used. When more than two classifiers are integrated, the AND/OR rules can be combined. For example, a biometric system may work on “fingerprint OR (face AND hand geometry)”; that is, it requires a user to present either a fingerprint or both face and hand geometry for recognition. Class set reduction, logistic regression, and Borda counts are the most commonly used approaches in combining classifiers based on the rank labels (Ho, Hull, and Srihari, 1994). In class set reduction, a subset of the classes is selected with the aim that the subset be as small as possible and still contain the true class. Multiple subsets from multiple modalities are typically combined using either a union or an intersection of the subsets. The logistic regression and Borda count methods are collectively called the class set reordering methods. The objective here is to derive a consensus ranking of the given classes such that the true class is ranked at the top. Rank labels are very useful for integration in an indexing/retrieval system. A biometric retrieval system typically outputs an ordered list of candidates (most likely matches). The top element of this ordered list is the most likely to be a correct match and the bottom of the list is the least likely match. The most popular combination schemes for combining confidence values from multiple modalities are sum, mean, median, product, minimum, and maximum rules. Kittler et al. (1998) have developed a theoretical framework in an attempt to understand the underlying mathematical basis of these popular schemes. Their experiments demonstrated that the sum or mean scheme typically performs very well in practice. A problem in using the sum rule is that the confidences (or scores) from different modalities should be normalized. This normalization typically involves mapping the confidence measures from different modalities into a common domain. For example, a biometric system may output a distance score (the lower the score, the more similar the patterns) whereas another may output a similarity score (the higher the score, the more similar the patterns) and thus the scores cannot be directly combined using the sum rule. In its simplest form, this normalization may only include inverting the sign of distance scores such that a higher score corresponds to a higher similarity. In a more complex form, the normalization may be non-linear which can be learned from training data by estimating distributions of confidence values from each modality. The scores are then translated and scaled to have zero mean, unit variance, and then remapped to a fixed interval of (0,1) using a hyperbolic tangent function. Note that it is tempting to parameterize the estimated distributions for normalization. 
However, such parameterization of distributions should be used with care, because the error rates of biometric systems are typically very small and a small error in estimating the tails of the distributions may result in a significant change in the error estimates (see Figure 7.3). Another common practice is to compute different scaling factors (weights) for each modality from training data, such that the accuracy of the combined classifier is maxi7 Multimodal Biometric Systems 242 mized. This weighted sum rule is expected to work better than the simple sum rule when the component classifiers have different strengths (i.e., different error rates). Figure 7.3. a) Genuine and impostor distributions for a fingerprint verification system (Jain et al., 2000) and a Normal approximation for the impostor distribution. Visually, the Normal approximation seems to be good, but causes significant decrease in performance compared to the non-parametric estimate as shown in the ROCs in b), where FMR is referred to as FAR (False Acceptance Rate) and (1-FNMR) as Genuine Acceptance Rate. ©Elsevier. Some schemes to combine multiple modalities in biometric systems have also been studied from a theoretical point of view. Through a theoretical analysis, Daugman (1999b) showed that if a strong biometric and a weak biometric are combined with an abstract level combination using either the AND or the OR voting rules, the performance of the combination will be worse than the better of the two individual biometrics. Hong, Jain, and Pankanti’s (1999) theoretical analysis that AND/OR voting strategies can improve performance only when certain conditions are satisfied confirmed Daugman’s findings. Their analysis further showed that a confidence level fusion is expected to significantly improve overall performance even in the case of combining a weak and a strong biometric. Kittler et al. (1998) introduced a sensitivity analysis to explain why the sum (or average) rule outperforms the other rules. They showed that the sum rule is less sensitive than the other similar rules (such as the “product” rule) to the error rates of individual classifiers in estimating posterior probabilities (confidence values). They claim that the sum rule is the most appropriate for combining different estimates of the same posterior probabilities (e.g., resulting from different classifier initializations). Prabhakar and Jain (2002) compared the sum and the product rules with the Neyman−Pearson combination scheme and showed that the product rule is worse than the sum rule when combining correlated features and both the sum rule and the product rules are inferior to the Neyman− Pearson combination scheme when combining weak and strong classifiers. 0 20 40 60 80 100 0 1 2 3 4 5 6 7 Normalized Matching Score Pe rc en ta ge ( % ) Imposter Genuine Nonparametric Imposter Distribution Normal Imposter Distribution Genuine Distribution 0 1 2 3 4 5 50 55 60 65 70 75 80 85 90 95 100 False Acceptance Rate (%) G en ui ne A cc ep ta nc e R at e (% ) Using Nonparametric Imposter Distribution Using Normal Imposter Distribution |
b4894f7d6264b94ded94181d54c7a0c773e3662b | Gait analysis has become recently a popular research field and been widely applied to clinical diagnosis of neurodegenerative diseases. Various low-cost sensor-based and vision-based systems are developed for capturing the hip and knee joint angles. However, the performances of these systems have not been validated and compared between each other. The purpose of this study is to set up an experiment and compare the performances of a sensor-based system with multiple inertial measurement units (IMUs), a vision-based gait analysis system with marker detection, and a markerless vision-based system on capturing the hip and knee joint angles during normal walking. The obtained measurements were validated with the data acquired from goniometers as ground truth measurement. The results indicate that the IMUs-based sensor system gives excellent performance with small errors, while vision systems produce acceptable results with slightly larger errors. |
0e78b20b27d27261f9ae088eb13201f2d5b185bd | Algorithms for feature selection fall into two broad categories: wrappers that use the learning algorithm itself to evaluate the usefulness of features and filters that evaluate features according to heuristics based on general characteristics of the data. For application to large databases, filters have proven to be more practical than wrappers because they are much faster. However, most existing filter algorithms only work with discrete classification problems. This paper describes a fast, correlation-based filter algorithm that can be applied to continuous and discrete problems. The algorithm often outperforms the well-known ReliefF attribute estimator when used as a preprocessing step for naive Bayes, instance-based learning, decision trees, locally weighted regression, and model trees. It performs more feature selection than ReliefF does—reducing the data dimensionality by fifty percent in most cases. Also, decision and model trees built from the preprocessed data are often significantly smaller. |
1b65af0b2847cf6edb1461eda659f08be27bc76d | We propose a new method for estimation in linear models. The 'lasso' minimizes the residual sum of squares subject to the sum of the absolute values of the coefficients being less than a constant (the constrained form is written out after the table). Because of the nature of this constraint it tends to produce some coefficients that are exactly 0 and hence gives interpretable models. Our simulation studies suggest that the lasso enjoys some of the favourable properties of both subset selection and ridge regression. It produces interpretable models like subset selection and exhibits the stability of ridge regression. There is also an interesting relationship with recent work in adaptive function estimation by Donoho and Johnstone. The lasso idea is quite general and can be applied in a variety of statistical models: extensions to generalized regression models and tree-based models are briefly described. |
3a8aa4cc6142d433ff55bea8a0cb980103ea15e9 | |
75cbc0eec23375df69de6c64e2f48689dde417c5 | With the invention of the low-cost Microsoft Kinect sensor, high-resolution depth and visual (RGB) sensing has become available for widespread use. The complementary nature of the depth and visual information provided by the Kinect sensor opens up new opportunities to solve fundamental problems in computer vision. This paper presents a comprehensive review of recent Kinect-based computer vision algorithms and applications. The reviewed approaches are classified according to the type of vision problems that can be addressed or enhanced by means of the Kinect sensor. The covered topics include preprocessing, object tracking and recognition, human activity analysis, hand gesture analysis, and indoor 3-D mapping. For each category of methods, we outline their main algorithmic contributions and summarize their advantages/differences compared to their RGB counterparts. Finally, we give an overview of the challenges in this field and future research trends. This paper is expected to serve as a tutorial and source of references for Kinect-based computer vision researchers. |
aa358f4a0578234e301a305d8c5de8d859083a4c | This paper presents a novel visual representation, called orderlets, for real-time human action recognition with depth sensors. An orderlet is a middle level feature that captures the ordinal pattern among a group of low level features. For skeletons, an orderlet captures specific spatial relationship among a group of joints. For a depth map, an orderlet characterizes a comparative relationship of the shape information among a group of subregions. The orderlet representation has two nice properties. First, it is insensitive to small noise since an orderlet only depends on the comparative relationship among individual features. Second, it is a frame-level representation thus suitable for real-time online action recognition. Experimental results demonstrate its superior performance on online action recognition and cross-environment action recognition. |
30f1ea3b4194dba7f957fd6bf81bcaf12dca6ff8 | Incremental parsing techniques such as shift-reduce have gained popularity thanks to their efficiency, but there remains a major problem: the search is greedy and only explores a tiny fraction of the whole space (even with beam search), as opposed to dynamic programming. We show that, surprisingly, dynamic programming is in fact possible for many shift-reduce parsers, by merging "equivalent" stacks based on feature values. Empirically, our algorithm yields up to a five-fold speedup over a state-of-the-art shift-reduce dependency parser with no loss in accuracy. Better search also leads to better learning, and our final parser outperforms all previously reported dependency parsers for English and Chinese, yet is much faster. |
422d9b1a05bc33fcca4b9aa9381f46804c6132fd | Some queries cannot be answered by machines only. Processing such queries requires human input for providing information that is missing from the database, for performing computationally difficult functions, and for matching, ranking, or aggregating results based on fuzzy criteria. CrowdDB uses human input via crowdsourcing to process queries that neither database systems nor search engines can adequately answer. It uses SQL both as a language for posing complex queries and as a way to model data. While CrowdDB leverages many aspects of traditional database systems, there are also important differences. Conceptually, a major change is that the traditional closed-world assumption for query processing does not hold for human input. From an implementation perspective, human-oriented query operators are needed to solicit, integrate and cleanse crowdsourced data. Furthermore, performance and cost depend on a number of new factors including worker affinity, training, fatigue, motivation and location. We describe the design of CrowdDB, report on an initial set of experiments using Amazon Mechanical Turk, and outline important avenues for future work in the development of crowdsourced query processing systems. |
edc2e4e6308d7dfce586cb8a4441c704f8f8d41b | In this paper, we present two new communication-efficient methods for distributed minimization of an average of functions. The first algorithm is an inexact variant of the DANE algorithm [20] that allows any local algorithm to return an approximate solution to a local subproblem. We show that such a strategy does not affect the theoretical guarantees of DANE significantly. In fact, our approach can be viewed as a robustification strategy, since the method is substantially better behaved than DANE on the data partitions arising in practice. It is well known that the DANE algorithm does not match the communication complexity lower bounds. To bridge this gap, we propose an accelerated variant of the first method, called AIDE, that not only matches the communication lower bounds but can also be implemented using a purely first-order oracle. Our empirical results show that AIDE is superior to other communication-efficient algorithms in settings that naturally arise in machine learning applications. |
c677166592b505b80a487fb88ac5a6996fc47d71 | The paper reviews the past and present results in the area of decentralized control of large-scale complex systems. An emphasis is laid on decentralization, decomposition, and robustness. These methodologies serve as effective tools to overcome specific difficulties arising in large-scale complex systems such as high dimensionality, information structure constraints, uncertainty, and delays. Several prospective topics for future research are introduced in this context. The overview is focused on recent decomposition approaches in interconnected dynamic systems due to their potential in providing the extension of decentralized control into networked control systems. © 2008 Elsevier Ltd. All rights reserved. |
0debd1c0b73fc79dc7a64431b8b6a1fe21dcd9f7 | Feature selection can improve classification accuracy and decrease the computational complexity of classification. Data features in intrusion detection systems (IDS) always present the problem of imbalanced classification, in which some classes have only a few instances while others have many. This imbalance can obviously limit classification efficiency, but few efforts have been made to address it. In this paper, a scheme for the many-objective problem is proposed for feature selection in IDS, which uses two strategies, namely, a special domination method and a predefined multiple targeted search, for population evolution. It can differentiate traffic not only between normal and abnormal but also by abnormality type. Based on our scheme, NSGA-III is used to obtain an adequate feature subset with good performance. An improved many-objective optimization algorithm (I-NSGA-III) is further proposed using a novel niche preservation procedure. It consists of a bias-selection process that selects the individual with the fewest selected features and a fit-selection process that selects the individual with the maximum sum weight of its objectives. Experimental results show that I-NSGA-III can alleviate the imbalance problem with higher classification accuracy for classes having fewer instances. Moreover, it can achieve both higher classification accuracy and lower computational complexity. © 2016 Published by Elsevier B.V. |
5a4a53339068eebd1544b9f430098f2f132f641b | Deep latent-variable models learn representations of high-dimensional data in an unsupervised manner. A number of recent efforts have focused on learning representations that disentangle statistically independent axes of variation, often by introducing suitable modifications of the objective function. We synthesize this growing body of literature by formulating a generalization of the evidence lower bound that explicitly represents the trade-offs between sparsity of the latent code, bijectivity of representations, and coverage of the support of the empirical data distribution. Our objective is also suitable to learning hierarchical representations that disentangle blocks of variables whilst allowing for some degree of correlations within blocks. Experiments on a range of datasets demonstrate that learned representations contain interpretable features, are able to learn discrete attributes, and generalize to unseen combinations of factors. |
4b19be501b279b7d80d94b2d9d986bf4f8ab4ede | |
8ba965f138c1178aef09da3781765e300c325f3d | The electromyogram (EMG) signal is very small in amplitude, so it requires a system to enhance it for display or for further analysis. This paper presents the development of a low cost physiotherapy EMG signal acquisition system with two channel input. In the acquisition system, both input signals are amplified with a differential amplifier and undergo signal pre-processing to obtain the linear envelope of the EMG signal (a toy envelope-extraction sketch appears after the table). The obtained EMG signal is then digitalized and sent to the computer to be plotted. |
01413e1fc981a8c041dc236dcee64790e2239a36 | A wide variety of problems in machine learning, including exemplar clustering, document summarization, and sensor placement, can be cast as constrained submodular maximization problems. A lot of recent effort has been devoted to developing distributed algorithms for these problems. However, these results suffer from high number of rounds, suboptimal approximation ratios, or both. We develop a framework for bringing existing algorithms in the sequential setting to the distributed setting, achieving near optimal approximation ratios for many settings in only a constant number of MapReduce rounds. Our techniques also give a fast sequential algorithm for non-monotone maximization subject to a matroid constraint. |
0451c923703472b6c20ff11185001f24b76c48e3 | Emerging applications for networked and cooperative robots motivate the study of motion coordination for groups of agents. For example, it is envisioned that groups of agents will perform a variety of useful tasks including surveillance, exploration, and environmental monitoring. This paper deals with basic interactions among mobile agents such as “move away from the closest other agent” or “move toward the furthest vertex of your own Voronoi polygon.” These simple interactions amount to distributed dynamical systems because their implementation requires only minimal information about neighboring agents. We characterize the close relationship between these distributed dynamical systems and the disk-covering and sphere-packing cost functions from geometric optimization. Our main results are: (i) we characterize the smoothness properties of these geometric cost functions, (ii) we show that the interaction laws are variations of the nonsmooth gradient of the cost functions, and (iii) we establish various asymptotic convergence properties of the laws. The technical approach relies on concepts from computational geometry, nonsmooth analysis, and nonsmooth stability theory. |
0a37a647a2f8464379a1fe327f93561c90d91405 | Recent developments have clarified the process of generating partially ordered, partially specified sequences of actions whose execution will achieve an agent's goal. This paper summarizes a progression of least commitment planners, starting with one that handles the simple STRIPS representation, and ending with one that manages actions with disjunctive preconditions, conditional effects, and universal quantification over dynamic universes. Along the way we explain how Chapman's formulation of the Modal Truth Criterion is misleading and why his NP-completeness result for reasoning about plans with conditional effects does not apply to our planner. I thank Franz Amador, Tony Barrett, Darren Cronquist, Denise Draper, Ernie Davis, Oren Etzioni, Nort Fowler, Rao Kambhampati, Craig Knoblock, Nick Kushmerick, Neal Lesh, Karen Lochbaum, Drew McDermott, Ramesh Patil, Kari Pulli, Ying Sun, Austin Tate and Mike Williamson for helpful comments, but retain sole responsibility for errors. This research was funded in part by Office of Naval Research Grant 90-J-1904 and by National Science Foundation Grant IRI-8957302. |
63f97f3b4808baeb3b16b68fcfdb0c786868baba | The whole design cycle, including design, fabrication and characterization of a broadband double-ridged horn antenna equipped with a parabolic lens and a waveguide adapter is presented in this paper. A major goal of the presented work was to obtain high directivity with a flat phase characteristic within the main radiation lobe at an 18–40 GHz frequency range, so that the antenna can be applicable in a free-space material characterization setup. |
c5151f18c2499f1d95522536f167f2fcf75f647f | In the next generation heterogeneous wireless networks, a user with a multi-interface terminal may have network access from different service providers using various technologies. It is believed that the handover decision is based on multiple criteria as well as user preference. Various approaches have been proposed to solve the handover decision problem, but the choice of decision method appears to be arbitrary and some of the methods even give disputable results. In this paper, a new handover criterion is introduced along with a new handover decision strategy. In addition, handover decision is identified as a fuzzy multiple attribute decision making (MADM) problem, and fuzzy logic is applied to deal with the imprecise information of some criteria and user preference. After a systematic analysis of various fuzzy MADM methods, a feasible approach is presented. In the end, examples are provided illustrating the proposed methods, and the sensitivity of the methods is also analysed. |
e321ab5d7a98e18253ed7874946a229a10e40f26 | The performance and accuracy of a classifier are directly affected by the result of feature selection. Based on one-class F-score feature selection, improved F-score feature selection, and a genetic algorithm, combined with machine learning methods such as K-nearest neighbor, support vector machine, random forest, and naive Bayes, a hybrid feature selection algorithm is proposed to handle the two-class unbalanced data problem and the multi-class problem. Compared with traditional machine learning algorithms, it can search a wider feature space and, following heuristic rules, push the classifier to cope with the characteristics of unbalanced data sets, which allows it to handle the unbalanced classification problem better. The experimental results show that the area under the receiver operating characteristic curve for two-class problems and the accuracy rate for multi-class problems are both improved compared with other models. |
08fddf1865e48a1adc21d4875396a754711f0a28 | Machine learning for text classification is the cornerstone of document categorization, news filtering, document routing, and personalization. In text domains, effective feature selection is essential to make the learning task efficient and more accurate. This paper presents an empirical comparison of twelve feature selection methods (e.g., Information Gain) evaluated on a benchmark of 229 text classification problem instances that were gathered from Reuters, TREC, OHSUMED, etc. The results are analyzed from multiple goal perspectives—accuracy, F-measure, precision, and recall—since each is appropriate in different situations. The results reveal that a new feature selection metric we call 'Bi-Normal Separation' (BNS) outperformed the others by a substantial margin in most situations (a small sketch of the BNS computation appears after the table). This margin widened in tasks with high class skew, which is rampant in text classification problems and is particularly challenging for induction algorithms. A new evaluation methodology is offered that focuses on the needs of the data mining practitioner faced with a single dataset who seeks to choose one (or a pair of) metrics that are most likely to yield the best performance. From this perspective, BNS was the top single choice for all goals except precision, for which Information Gain yielded the best result most often. This analysis also revealed, for example, that Information Gain and Chi-Squared have correlated failures, and so they work poorly together. When choosing optimal pairs of metrics for each of the four performance goals, BNS is consistently a member of the pair—e.g., for greatest recall, the pair BNS + F1-measure yielded the best performance on the greatest number of tasks by a considerable margin. |
32352a889360e365fa242ad3040ccd6c54131d47 | |
7857cdf46d312af4bb8854bd127e5c0b4268f90c | The dynamic response of a boost power factor correction (PFC) converter operating in continuous-conduction mode (CCM) is heavily influenced by the low bandwidth of the voltage control loop. A novel tri-state boost PFC converter operating in pseudo-continuous-conduction mode (PCCM) is proposed in this paper. An additional degree of control freedom introduced by a freewheeling switching control interval helps to achieve PFC control. A simple and fast voltage control loop can be used to maintain a constant output voltage. Furthermore, compared with a boost PFC converter operating in conventional discontinuous-conduction mode (DCM), a boost PFC converter operating in PCCM demonstrates greatly improved current handling capability with reduced current and voltage ripples. Analytical and simulation results of the tri-state boost PFC converter have been presented and compared with those of boost PFC converters operating in conventional CCM and DCM. Simulation results show excellent dynamic performance of the tri-state boost PFC converter. |
8629cebb7c574adf40d71d41389f340804c8c81f | This article summarizes the major developments in the history of efforts to use fingerprint patterns to identify individuals, from the earliest fingerprint classification systems of Vucetich and Henry in the 1890s through the advent of automated fingerprint identification. By chronicling the history of "manual" systems for recording, storing, matching, and retrieving fingerprints, the article puts advances in automatic fingerprint recognition in historical context and highlights their historical and social significance. |
7d9089cbe958da21cbd943bdbcb996f4499e701b | Document level sentiment classification remains a challenge: encoding the intrinsic relations between sentences in the semantic meaning of a document. To address this, we introduce a neural network model to learn vector-based document representation in a unified, bottom-up fashion. The model first learns sentence representation with convolutional neural network or long short-term memory. Afterwards, semantics of sentences and their relations are adaptively encoded in document representation with gated recurrent neural network. We conduct document level sentiment classification on four large-scale review datasets from IMDB and Yelp Dataset Challenge. Experimental results show that: (1) our neural model shows superior performances over several state-of-the-art algorithms; (2) gated recurrent neural network dramatically outperforms standard recurrent neural network in document modeling for sentiment classification. |
b294b61f0b755383072ab332061f45305e0c12a1 | We present a fast method for re-purposing existing semantic word vectors to improve performance in a supervised task. Recently, with an increase in computing resources, it became possible to learn rich word embeddings from massive amounts of unlabeled data. However, some methods take days or weeks to learn good embeddings, and some are notoriously difficult to train. We propose a method that takes as input an existing embedding, some labeled data, and produces an embedding in the same space, but with a better predictive performance in the supervised task. We show improvement on the task of sentiment classification with respect to several baselines, and observe that the approach is most useful when the training set is sufficiently small. |
4a27709545cfa225d8983fb4df8061fb205b9116 | We propose a data mining (DM) approach to predict the success of telemarketing calls for selling bank long-term deposits. A Portuguese retail bank was addressed, with data collected from 2008 to 2013, thus including the effects of the recent financial crisis. We analyzed a large set of 150 features related with bank client, product and social-economic attributes. A semi-automatic feature selection was explored in the modeling phase, performed with the data prior to July 2012, which allowed the selection of a reduced set of 22 features. We also compared four DM models: logistic regression, decision trees (DT), neural network (NN) and support vector machine. Using two metrics, area of the receiver operating characteristic curve (AUC) and area of the LIFT cumulative curve (ALIFT), the four models were tested on an evaluation phase, using the most recent data (after July 2012) and a rolling windows scheme. The NN presented the best results (AUC=0.8 and ALIFT=0.7), making it possible to reach 79% of the subscribers by selecting only the better-classified half of the clients. Also, two knowledge extraction methods, a sensitivity analysis and a DT, were applied to the NN model and revealed several key attributes (e.g., Euribor rate, direction of the call and bank agent experience). Such knowledge extraction confirmed the obtained model as credible and valuable for telemarketing campaign managers. |
138c86b9283e4f26ff1583acdf4e51a5f88ccad1 | Interpretation of images and videos containing humans interacting with different objects is a daunting task. It involves understanding the scene or event, analyzing human movements, recognizing manipulable objects, and observing the effect of the human movement on those objects. While each of these perceptual tasks can be conducted independently, recognition rate improves when interactions between them are considered. Motivated by psychological studies of human perception, we present a Bayesian approach which integrates various perceptual tasks involved in understanding human-object interactions. Previous approaches to object and action recognition rely on static shape or appearance feature matching and motion analysis, respectively. Our approach goes beyond these traditional approaches and applies spatial and functional constraints on each of the perceptual elements for coherent semantic interpretation. Such constraints allow us to recognize objects and actions when the appearances are not discriminative enough. We also demonstrate the use of such constraints in recognition of actions from static images without using any motion information. |
321f14b35975b3800de5e66da64dee96071603d9 | Feature selection is an important component of many machine learning applications. Especially in many bioinformatics tasks, efficient and robust feature selection methods are desired to extract meaningful features and eliminate noisy ones. In this paper, we propose a new robust feature selection method that emphasizes joint ℓ2,1-norm minimization on both the loss function and the regularization. The ℓ2,1-norm based loss function is robust to outliers in data points, and the ℓ2,1-norm regularization selects features across all data points with joint sparsity. An efficient algorithm is introduced with proved convergence. Our regression based objective makes the feature selection process more efficient. Our method has been applied into both genomic and proteomic biomarkers discovery. Extensive empirical studies are performed on six data sets to demonstrate the performance of our feature selection method. |
9b505dd5459fb28f0136d3c63793b600042e6a94 | We present a new framework for multimedia content analysis and retrieval which consists of two independent algorithms. First, we propose a new semi-supervised algorithm called ranking with Local Regression and Global Alignment (LRGA) to learn a robust Laplacian matrix for data ranking. In LRGA, for each data point, a local linear regression model is used to predict the ranking scores of its neighboring points. A unified objective function is then proposed to globally align the local models from all the data points so that an optimal ranking score can be assigned to each data point. Second, we propose a semi-supervised long-term Relevance Feedback (RF) algorithm to refine the multimedia data representation. The proposed long-term RF algorithm utilizes both the multimedia data distribution in multimedia feature space and the history RF information provided by users. A trace ratio optimization problem is then formulated and solved by an efficient algorithm. The algorithms have been applied to several content-based multimedia retrieval applications, including cross-media retrieval, image retrieval, and 3D motion/pose data retrieval. Comprehensive experiments on four data sets have demonstrated its advantages in precision, robustness, scalability, and computational efficiency. |
0ef550dacb89fb655f252e5b17dbd5d643eb5ac1 | We recorded electrical activity from 532 neurons in the rostral part of inferior area 6 (area F5) of two macaque monkeys. Previous data had shown that neurons of this area discharge during goal-directed hand and mouth movements. We describe here the properties of a newly discovered set of F5 neurons ("mirror neurons", n = 92), all of which became active both when the monkey performed a given action and when it observed a similar action performed by the experimenter. Mirror neurons, in order to be visually triggered, required an interaction between the agent of the action and the object of it. The sight of the agent alone or of the object alone (three-dimensional objects, food) was ineffective. The hand and the mouth were by far the most effective agents. The actions most represented among those activating mirror neurons were grasping, manipulating and placing. In most mirror neurons (92%) there was a clear relation between the visual action they responded to and the motor response they coded. In approximately 30% of mirror neurons the congruence was very strict and the effective observed and executed actions corresponded both in terms of general action (e.g. grasping) and in terms of the way in which that action was executed (e.g. precision grip). We conclude by proposing that mirror neurons form a system for matching observation and execution of motor actions. We discuss the possible role of this system in action recognition and, given the proposed homology between F5 and human Broca's region, we posit that a matching system, similar to that of mirror neurons, exists in humans and could be involved in recognition of actions as well as phonetic gestures. |
15b2c44b3868a1055850846161aaca59083e0529 | We consider the general problem of learning from labeled and unlabeled data, which is often called semi-supervised learning or transductive inference. A principled approach to semi-supervised learning is to design a classifying function which is sufficiently smooth with respect to the intrinsic structure collectively revealed by known labeled and unlabeled points. We present a simple algorithm to obtain such a smooth solution. Our method yields encouraging experimental results on a number of classification problems and demonstrates effective use of unlabeled data. |
50886d25ddd5d0d1982ed94f90caa67639fcf1a1 | |
4c2bbcb3e897e927cd390517b2036b0b9123953c | Information overload is a severe problem for human operators of large-scale control systems as, for example, encountered in the domain of road traffic management. Operators of such systems are at risk of lacking situation awareness, because existing systems focus on the mere presentation of the available information on graphical user interfaces—thus endangering the timely and correct identification, resolution, and prevention of critical situations. In recent years, ontology-based approaches to situation awareness featuring a semantically richer knowledge model have emerged. However, current approaches are either highly domain-specific or have, in case they are domain-independent, shortcomings regarding their reusability. In this paper, we present our experience gained from the development of BeAware!, a framework for ontology-driven information systems aiming at increasing an operator's situation awareness. In contrast to existing domain-independent approaches, BeAware!'s ontology introduces the concept of spatio-temporal primitive relations between observed real-world objects, thereby improving the reusability of the framework. To show its applicability, a prototype of BeAware! has been implemented in the domain of road traffic management. An overview of this prototype and lessons learned for the development of ontology-driven information systems complete our contribution. © 2010 Elsevier B.V. All rights reserved. |
abf9ee52b29f109f5dbf6423fbc0d898df802971 | |
a31e3b340f448fe0a276b659a951e39160a350dd | User satisfaction with general information systems (IS) and certain types of information technology (IT) applications has been thoroughly studied in IS research. With the widespread and increasing use of portal technology, however, there is a need to conduct a user satisfaction study on portal use, in particular the business-to-employee (b2e) portal. In this paper, we propose a conceptual model for determining b2e portal user satisfaction, which has been derived from an extensive literature review of user satisfaction scales and the b2e portal. Nine dimensions of b2e portal user satisfaction are identified and modeled: information content, ease of use, convenience of access, timeliness, efficiency, security, confidentiality, communication, and layout. |
0848827ba30956e29d7d126d0a05e51660094ebe | The Internet of things (IoT) is revolutionizing the management and control of automated systems leading to a paradigm shift in areas such as smart homes, smart cities, health care, transportation, etc. The IoT technology is also envisioned to play an important role in improving the effectiveness of military operations in battlefields. The interconnection of combat equipment and other battlefield resources for coordinated automated decisions is referred to as the Internet of battlefield things (IoBT). IoBT networks are significantly different from traditional IoT networks due to the battlefield specific challenges such as the absence of communication infrastructure, and the susceptibility of devices to cyber and physical attacks. The combat efficiency and coordinated decision-making in war scenarios depends highly on real-time data collection, which in turn relies on the connectivity of the network and the information dissemination in the presence of adversaries. This work aims to build the theoretical foundations of designing secure and reconfigurable IoBT networks. Leveraging the theories of stochastic geometry and mathematical epidemiology, we develop an integrated framework to study the communication of mission-critical data among different types of network devices and consequently design the network in a cost effective manner. |
32e876a9420f7c58a3c55ec703416c7f57a54f4c | For most researchers in the ever growing fields of probabilistic graphical models, belief networks, causal influence and probabilistic inference, ACM Turing award winner Dr. Pearl and his seminal papers on causality are well-known and acknowledged. Representation and determination of causality, the relationship between an event (the cause) and a second event (the effect), where the second event is understood as a consequence of the first, is a challenging problem. Over the years, Dr. Pearl has written significantly on both the Art and Science of Cause and Effect. In this book on "Causality: Models, Reasoning and Inference", the inventor of Bayesian belief networks discusses and elaborates on his earlier work, including but not limited to Reasoning with Cause and Effect, Causal Inference in Statistics, Simpson's Paradox, Causal Diagrams for Empirical Research, Robustness of Causal Claims, Causes and Explanations, and Probabilities of Causation: Bounds and Identification. |
a04145f1ca06c61f5985ab22a2346b788f343392 | A large number of studies have been conducted during the last decade and a half attempting to identify those factors that contribute to information systems success. However, the dependent variable in these studies—I/S success—has been an elusive one to define. Different researchers have addressed different aspects of success, making comparisons difficult and the prospect of building a cumulative tradition for I/S research similarly elusive. To organize this diverse research, as well as to present a more integrated view of the concept of I/S success, a comprehensive taxonomy is introduced. This taxonomy posits six major dimensions or categories of I/S success—SYSTEM QUALITY, INFORMATION QUALITY, USE, USER SATISFACTION, INDIVIDUAL IMPACT, and ORGANIZATIONAL IMPACT. Using these dimensions, both conceptual and empirical studies are then reviewed (a total of 180 articles are cited) and organized according to the dimensions of the taxonomy. Finally, the many aspects of I/S success are drawn together into a descriptive model and its implications for future I/S research are discussed. |
a99f1f749481e44abab0ba9a8b7c1d3572a2e465 | |
5c8bb027eb65b6d250a22e9b6db22853a552ac81 | |
3913d2e0a51657a5fe11305b1bcc8bf3624471c0 | Representation learning is a fundamental problem in natural language processing. This paper studies how to learn a structured representation for text classification. Unlike most existing representation models that either use no structure or rely on pre-specified structures, we propose a reinforcement learning (RL) method to learn sentence representation by discovering optimized structures automatically. We demonstrate two attempts to build structured representation: Information Distilled LSTM (ID-LSTM) and Hierarchically Structured LSTM (HS-LSTM). ID-LSTM selects only important, task-relevant words, and HS-LSTM discovers phrase structures in a sentence. Structure discovery in the two representation models is formulated as a sequential decision problem: current decision of structure discovery affects following decisions, which can be addressed by policy gradient RL. Results show that our method can learn task-friendly representations by identifying important words or task-relevant structures without explicit structure annotations, and thus yields competitive performance. |
599ebeef9c9d92224bc5969f3e8e8c45bff3b072 | The explosive growth of the world-wide-web and the emergence of e-commerce has led to the development of recommender systems---a personalized information filtering technology used to identify a set of items that will be of interest to a certain user. User-based collaborative filtering is the most successful technology for building recommender systems to date and is extensively used in many commercial recommender systems. Unfortunately, the computational complexity of these methods grows linearly with the number of customers, which in typical commercial applications can be several millions. To address these scalability concerns model-based recommendation techniques have been developed. These techniques analyze the user--item matrix to discover relations between the different items and use these relations to compute the list of recommendations.In this article, we present one such class of model-based recommendation algorithms that first determines the similarities between the various items and then uses them to identify the set of items to be recommended. The key steps in this class of algorithms are (i) the method used to compute the similarity between the items, and (ii) the method used to combine these similarities in order to compute the similarity between a basket of items and a candidate recommender item. Our experimental evaluation on eight real datasets shows that these item-based algorithms are up to two orders of magnitude faster than the traditional user-neighborhood based recommender systems and provide recommendations with comparable or better quality. |
01a8909330cb5d4cc37ef50d03467b1974d6c9cf | This overview presents computational algorithms for generating 3D object grasps with autonomous multi-fingered robotic hands. Robotic grasping has been an active research subject for decades, and a great deal of effort has been spent on grasp synthesis algorithms. Existing papers focus on reviewing the mechanics of grasping and the finger-object contact interactions [7] or robot hand design and their control [1]. Robot grasp synthesis algorithms have been reviewed in [63], but since then an important progress has been made toward applying learning techniques to the grasping problem. This overview focuses on analytical as well as empirical grasp synthesis approaches. |
4e03cadf3095f8779eaf878f0594e56ad88788e2 | |
a63c3f53584fd50e27ac0f2dcbe28c7361b5adff | Silicon offers a new set of possibilities and challenges for RF, microwave, and millimeter-wave applications. While the high cutoff frequencies of the SiGe heterojunction bipolar transistors and the ever-shrinking feature sizes of MOSFETs hold a lot of promise, new design techniques need to be devised to deal with the realities of these technologies, such as low breakdown voltages, lossy substrates, low-Q passives, long interconnect parasitics, and high-frequency coupling issues. As an example of complete system integration in silicon, this paper presents the first fully integrated 24-GHz eight-element phased array receiver in 0.18-µm silicon-germanium and the first fully integrated 24-GHz four-element phased array transmitter with integrated power amplifiers in 0.18-µm CMOS. The transmitter and receiver are capable of beam forming and can be used for communication, ranging, positioning, and sensing applications. |
64da24aad2e99514ab26d093c19cebec07350099 | CubeSat platforms grow increasingly popular in commercial ventures as alternative solutions for global Internet networks, deep space exploration, and aerospace research endeavors. Many technology companies and system engineers plan to implement small satellite systems as part of global Low Earth Orbit (LEO) inter-satellite constellations. High performing low cost hardware is of key importance in driving these efforts. This paper presents the heterodyne architecture and performance of a Ka-Band Integrated Transmitter Assembly (ITA) Module, which could be implemented in nano/microsatellite or other satellite systems as a low-cost solution for high data rate space communication systems. The module converts a 0.9 to 1.1 GHz IF input signal to deliver linear transmission of +29 dBm at the 26.7 to 26.9 GHz frequency range with a built-in phase locked oscillator, integrated transmitter, polarizer, and lens corrected antenna. |
0b44fcbeea9415d400c5f5789d6b892b6f98daff | In this paper, we review our experience with constructing one such large annotated corpus—the Penn Treebank, a corpus consisting of over 4.5 million words of American English. During the first three-year phase of the Penn Treebank Project (1989-1992), this corpus has been annotated for part-of-speech (POS) information. In addition, over half of it has been annotated for skeletal syntactic structure. (University of Pennsylvania Department of Computer and Information Science Technical Report MS-CIS-93-87, available at ScholarlyCommons: http://repository.upenn.edu/cis_reports/237.) |
7c70c29644ff1d6dd75a8a4dd0556fb8cb13549b | We present a novel technique to fabricate conformal and pliable substrates for microwave applications including systems-on-package. The produced materials are fabricated by combining ceramic powders with polymers to generate a high-contrast substrate that is concurrently pliable (bendable). Several such polymer-ceramic substrates are fabricated and used to examine the performance of a patch antenna and a coupled line filter. This paper presents the substrate mixing method while measurements are given to evaluate the loss performance of the substrates. Overall, the fabricated composites lead to flexible substrates with a permittivity of up to εr = 20 and sufficiently low loss. |
85947d646623ef7ed96dfa8b0eb705d53ccb4efe | Network forensics is the science that deals with the capture, recording, and analysis of network traffic for detecting intrusions and investigating them. This paper makes an exhaustive survey of various network forensic frameworks proposed to date. A generic process model for network forensics is proposed, which is built on various existing models of digital forensics. Definition, categorization and motivation for network forensics are clearly stated. The functionality of various Network Forensic Analysis Tools (NFATs) and network security monitoring tools available for forensic examiners is discussed. The specific research gaps existing in implementation frameworks, process models and analysis tools are identified and major challenges are highlighted. The significance of this work is that it presents an overview on network forensics covering tools, process models and framework implementations, which will be very much useful for security practitioners and researchers in exploring this upcoming and young discipline. © 2010 Elsevier Ltd. All rights reserved. |
7d98dce77cce2d0963a3b6566f5c733ad4343ce4 | This study extends Davis' (1989) TAM model and Straub's (1994) SPIR addendum by adding gender to an IT diffusion model. The Technology Acceptance Model (TAM) has been widely studied in IS research as an explanation of the use of information systems across IS-types and nationalities. While this line of research has found significant cross-cultural differences, it has ignored the effects of gender, even though in socio-linguistic research, gender is a fundamental aspect of culture. Indeed, sociolinguistic research that has shown that men tend to focus discourse on hierarchy and independence while women focus on intimacy and solidarity. This literature provides a solid grounding for conceptual extensions to the IT diffusion research and the Technology Acceptance Model. Testing gender differences that might relate to beliefs and use of computer-based media, the present study sampled 392 female and male responses via a cross-sectional survey instrument. The sample drew from comparable groups of knowledge workers using E-mail systems in the airline industry in North America, Asia, and Europe. Study findings indicate that women and men differ in their perceptions but not use of E-mail. These findings suggest that researchers should include gender in IT diffusion models along with other cultural effects. Managers and co-workers, moreover, need to realize that the same mode of communication may be perceived differently by the sexes, suggesting that more favorable communications environments might be created, environments that take into account not only organizational contextual factors, but also the gender of users. The creation of these environments involves not only the actual deployment of communication media, but also organizational training on communications media. |
2599131a4bc2fa957338732a37c744cfe3e17b24 | A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of the classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns. These are the subset of training patterns that are closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC-dimension are given. Experimental results on optical character recognition problems demonstrate the good generalization obtained when compared with other learning algorithms. |
68c29b7bf1811f941040bba6c611753b8d756310 | The modern automobile is controlled by networked computers. The security of these networks was historically of little concern, but researchers have in recent years demonstrated their many vulnerabilities to attack. As part of a defence against these attacks, we evaluate an anomaly detector for the automotive controller area network (CAN) bus. The majority of attacks are based on inserting extra packets onto the network. But most normal packets arrive at a strict frequency. This motivates an anomaly detector that compares current and historical packet timing. We present an algorithm that measures inter-packet timing over a sliding window. The average times are compared to historical averages to yield an anomaly signal. We evaluate this approach over a range of insertion frequencies and demonstrate the limits of its effectiveness. We also show how a similar measure of the data contents of packets is not effective for identifying anomalies. Finally we show how a one-class support vector machine can use the same information to detect anomalies with high confidence. |
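The timing-based detector described above reduces to a small amount of code; a hedged sketch, where the window size and relative threshold are illustrative choices rather than the paper's tuned values:

```python
# Hedged sketch of the timing-based detector: compare the mean inter-arrival
# time of one CAN message ID over a sliding window against a historical
# baseline for that ID.
from collections import deque

def anomaly_flags(timestamps, baseline_mean, window=20, threshold=0.2):
    gaps = deque(maxlen=window)
    flags = []
    for prev, cur in zip(timestamps, timestamps[1:]):
        gaps.append(cur - prev)
        if len(gaps) == window:
            mean_gap = sum(gaps) / window
            flags.append(abs(mean_gap - baseline_mean) / baseline_mean > threshold)
    return flags

# Inserted packets roughly halve the apparent inter-arrival time of the
# targeted ID, pushing the window mean away from the baseline.
```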
c43d8a3d36973e3b830684e80a035bbb6856bcf7 | Convolutional neural network (CNN) depth is of crucial importance for image super-resolution (SR). However, we observe that deeper networks for image SR are more difficult to train. The low-resolution inputs and features contain abundant low-frequency information, which is treated equally across channels, hence hindering the representational ability of CNNs. To solve these problems, we propose the very deep residual channel attention networks (RCAN). Specifically, we propose a residual in residual (RIR) structure to form a very deep network, which consists of several residual groups with long skip connections. Each residual group contains some residual blocks with short skip connections. Meanwhile, RIR allows abundant low-frequency information to be bypassed through multiple skip connections, making the main network focus on learning high-frequency information. Furthermore, we propose a channel attention mechanism to adaptively rescale channel-wise features by considering interdependencies among channels. Extensive experiments show that our RCAN achieves better accuracy and visual improvements against state-of-the-art methods. |
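A hedged PyTorch sketch of a channel attention block in the spirit of RCAN: global average pooling followed by a bottleneck that emits per-channel rescaling weights. Layer sizes and the reduction ratio are illustrative:

```python
# Hedged sketch of channel attention: squeeze spatial information with
# global average pooling, then produce per-channel weights in (0, 1) that
# rescale the feature maps.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # B x C x 1 x 1
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                              # per-channel weights
        )

    def forward(self, x):
        return x * self.body(self.pool(x))             # rescale channel-wise

feats = torch.randn(1, 64, 48, 48)
print(ChannelAttention(64)(feats).shape)               # torch.Size([1, 64, 48, 48])
```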
77768638f4f400272b6e5970596b127663471538 | BACKGROUND
The scoping review has become an increasingly popular approach for synthesizing research evidence. It is a relatively new approach for which a universal study definition or definitive procedure has not been established. The purpose of this scoping review was to provide an overview of scoping reviews in the literature.
METHODS
A scoping review was conducted using the Arksey and O'Malley framework. A search was conducted in four bibliographic databases and the gray literature to identify scoping review studies. Review selection and characterization were performed by two independent reviewers using pretested forms.
RESULTS
The search identified 344 scoping reviews published from 1999 to October 2012. The reviews varied in terms of purpose, methodology, and detail of reporting. Nearly three-quarters of the reviews (74.1%) addressed a health topic. Study completion times varied from 2 weeks to 20 months, and 51% utilized a published methodological framework. Quality assessment of included studies was infrequently performed (22.38%).
CONCLUSIONS
Scoping reviews are a relatively new but increasingly common approach for mapping broad topics. Because of variability in their conduct, there is a need for their methodological standardization to ensure the utility and strength of evidence. |
4c5815796c29d44c940830118339e276f741d34a | Robot assistants and professional coworkers are becoming a commodity in domestic and industrial settings. In order to enable robots to share their workspace with humans and physically interact with them, fast and reliable handling of possible collisions on the entire robot structure is needed, along with control strategies for safe robot reaction. The primary motivation is the prevention or limitation of possible human injury due to physical contacts. In this survey paper, based on our early work on the subject, we review, extend, compare, and evaluate experimentally model-based algorithms for real-time collision detection, isolation, and identification that use only proprioceptive sensors. This covers the context-independent phases of the collision event pipeline for robots interacting with the environment, as in physical human–robot interaction or manipulation tasks. The problem is addressed for rigid robots first and then extended to the presence of joint/transmission flexibility. The basic physically motivated solution has already been applied to numerous robotic systems worldwide, ranging from manipulators and humanoids to flying robots, and even to commercial products. |
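One widely used member of this model-based family is the generalized-momentum observer; a deliberately simplified, single-joint sketch under stated assumptions (Coriolis and friction terms dropped; mass, gravity torque, gain, and sample time are illustrative):

```python
# Hedged, simplified sketch of a momentum-observer residual for one joint.
# The residual tracks the external (collision) torque with first-order
# dynamics; real implementations use the full multi-joint dynamic model.
def collision_residual(tau_motor, qdot, M=1.0, g=0.0, K=50.0, dt=0.001):
    """Yield a residual signal given motor torques and joint velocities."""
    p_hat, r = 0.0, 0.0
    for tau, v in zip(tau_motor, qdot):
        p = M * v                      # generalized momentum
        p_hat += (tau - g + r) * dt    # integrate the expected dynamics
        r = K * (p - p_hat)            # estimate of external torque
        yield r

# Thresholding |r| gives detection; its sign and time course support the
# isolation and identification stages of the pipeline described above.
```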
4e3a22ed94c260b9143eee9fdf6d5d6e892ecd8f | |
e18fa8c8f402c483b2c3eaaa89192fe99e80abd5 | There are numerous studies suggesting that published news stories have an important effect on the direction of the stock market, its volatility, the volume of trades, and the value of individual stocks mentioned in the news. There is even some published research suggesting that automated sentiment analysis of news documents, quarterly reports, blogs and/or twitter data can be productively used as part of a trading strategy. This paper presents just such a family of trading strategies, and then uses this application to re-examine some of the tacit assumptions behind how sentiment analyzers are generally evaluated, independently of the contexts in which they are applied. This mismatch between evaluation and application comes at a cost. |
050c6fa2ee4b3e0a076ef456b82b2a8121506060 | Despite the great progress achieved in recognizing objects as 2D bounding boxes in images, it is still very challenging to detect occluded objects and estimate the 3D properties of multiple objects from a single image. In this paper, we propose a novel object representation, 3D Voxel Pattern (3DVP), that jointly encodes the key properties of objects including appearance, 3D shape, viewpoint, occlusion and truncation. We discover 3DVPs in a data-driven way, and train a bank of specialized detectors for a dictionary of 3DVPs. The 3DVP detectors are capable of detecting objects with specific visibility patterns and transferring the meta-data from the 3DVPs to the detected objects, such as 2D segmentation mask, 3D pose as well as occlusion or truncation boundaries. The transferred meta-data allows us to infer the occlusion relationship among objects, which in turn provides improved object recognition results. Experiments are conducted on the KITTI detection benchmark [17] and the outdoor-scene dataset [41]. We improve state-of-the-art results on car detection and pose estimation with notable margins (6% in difficult data of KITTI). We also verify the ability of our method in accurately segmenting objects from the background and localizing them in 3D. |
1a124ed5d7c739727ca60cf11008edafa9e3ecf2 | As the data-driven economy evolves, enterprises have come to realize a competitive advantage in being able to act on high volume, high velocity streams of data. Technologies such as distributed message queues and stream processing platforms that can scale to thousands of data stream partitions on commodity hardware are a response. However, the programming API provided by these systems is often low-level, requiring substantial custom code that adds to the programmer learning curve and maintenance overhead. Additionally, these systems often lack SQL querying capabilities that have proven popular on Big Data systems like Hive, Impala or Presto. We define a minimal set of extensions to standard SQL for data stream querying and manipulation. These extensions are prototyped in SamzaSQL, a new tool for streaming SQL that compiles streaming SQL into physical plans that are executed on Samza, an open-source distributed stream processing framework. We compare the performance of streaming SQL queries against native Samza applications and discuss usability improvements. SamzaSQL is a part of the open source Apache Samza project and will be available for general use. |
b8ec319b1f5223508267b1d5b677c0796d25ac13 | In many real-world scenarios, labeled data for a specific machine learning task is costly to obtain. Semi-supervised training methods make use of abundantly available unlabeled data and a smaller number of labeled examples. We propose a new framework for semi-supervised training of deep neural networks inspired by learning in humans. Associations are made from embeddings of labeled samples to those of unlabeled ones and back. The optimization schedule encourages correct association cycles that end up at the same class from which the association was started and penalizes wrong associations ending at a different class. The implementation is easy to use and can be added to any existing end-to-end training setup. We demonstrate the capabilities of learning by association on several data sets and show that it can improve performance on classification tasks tremendously by making use of additionally available unlabeled data. In particular, for cases with few labeled data, our training scheme outperforms the current state of the art on SVHN. |
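A hedged numpy sketch of the association cycle: similarities between labeled and unlabeled embeddings define round-trip probabilities, and the loss rewards cycles that return to the starting class (a uniform same-class target):

```python
# Hedged sketch of the "walker" idea: probabilities of stepping
# labeled -> unlabeled -> labeled via embedding similarities, scored with
# cross-entropy against a uniform distribution over same-class samples.
import numpy as np

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def walker_loss(emb_labeled, emb_unlabeled, labels):
    M = emb_labeled @ emb_unlabeled.T       # similarity, labeled x unlabeled
    p_ab = softmax(M, axis=1)               # labeled -> unlabeled step
    p_ba = softmax(M.T, axis=1)             # unlabeled -> labeled step
    p_aba = p_ab @ p_ba                     # round-trip probabilities
    same = (labels[:, None] == labels[None, :]).astype(float)
    target = same / same.sum(axis=1, keepdims=True)
    return float(-(target * np.log(p_aba + 1e-8)).sum(axis=1).mean())
```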
852c633882927affd1a951e81e6e30251bb40867 | Concurrently with the continuously developing radio frequency identification (RFID) technology, new types of tag antenna materials and structures are emerging to fulfill the requirements encountered within the new application areas. In this work, a radiation efficiency measurement method is developed and verified for passive ultra-high frequency (UHF) RFID dipole tag antennas. In addition, the measurement method is applied to measure the radiation efficiency of sewed dipole tag antennas for wearable body-centric wireless communication applications. The acquired information from measurements can be used to characterize the losses of tag antenna material structures and to further improve and optimize tag antenna performance and reliability. |
833de6c09b38a679ed870ad3a7ccfafc8de010e1 | The estimation of the ego-vehicle's motion is a key capability for advanced driving assistant systems and mobile robot localization. The following paper presents a robust algorithm using radar sensors to instantly determine the complete 2D motion state of the ego-vehicle (longitudinal, lateral velocity and yaw rate). It evaluates the relative motion between at least two Doppler radar sensors and their received stationary reflections (targets). Based on the distribution of their radial velocities across the azimuth angle, non-stationary targets and clutter are excluded. The ego-motion and its corresponding covariance matrix are estimated. The algorithm does not require any preprocessing steps such as clustering or clutter suppression and does not contain any model assumptions. The sensors can be mounted at any position on the vehicle. A common field of view is not required, avoiding target association in space. As an additional benefit, all targets are instantly labeled as stationary or non-stationary. |
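The core relation is that, for stationary targets, the measured radial velocity varies sinusoidally with azimuth, so the sensor's 2D velocity follows from a least-squares fit over targets; a hedged single-sensor sketch (with two sensors mounted apart, the yaw rate follows as well):

```python
# Hedged sketch: a stationary target at azimuth theta returns
#   v_r = -(v_x * cos(theta) + v_y * sin(theta)),
# so (v_x, v_y) is a linear least-squares fit. Moving targets and clutter
# appear as outliers with large residuals.
import numpy as np

def sensor_velocity(azimuth, v_radial):
    A = -np.column_stack([np.cos(azimuth), np.sin(azimuth)])
    v, *_ = np.linalg.lstsq(A, v_radial, rcond=None)
    return v                                    # (v_x, v_y) of the sensor

theta = np.deg2rad([-40.0, -10.0, 5.0, 30.0, 55.0])
vx, vy = 10.0, 0.5
vr = -(vx * np.cos(theta) + vy * np.sin(theta))
print(sensor_velocity(theta, vr))               # ~ [10.0, 0.5]
```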
31918003360c352fb0750040d163f287894ab547 | Automotive embedded systems have developed rapidly in recent years with the advent of the smart car, the electric car, and so on. Such vehicles carry various value-added systems, for example IPA (Intelligent Parking Assistance), BSW (Blind Spot Warning), LDWS (Lane Departure Warning System), and LKS (Lane Keeping System), collectively known as ADAS (Advanced Driver Assistance Systems). AUTOSAR (AUTomotive Open System Architecture) is the most notable industrial standard for developing automotive embedded software. AUTOSAR is a partnership of automotive manufacturers and suppliers working together to develop and establish an open industry standard for automotive E/E architectures. In this paper, we briefly introduce AUTOSAR and demonstrate the result of developing LDWS (Lane Detection & Warning System) automotive software. |
36bb4352891209ba0a7df150c74cd4db6d603ca5 | Example learning-based single image super-resolution (SR) is a promising method for reconstructing a high-resolution (HR) image from a single-input low-resolution (LR) image. Many popular SR approaches are either time- or space-intensive, which limits their practical applications. Hence, some research has focused on a subspace view and delivered state-of-the-art results. In this paper, we utilize an effective way with mixture prior models to transform the large nonlinear feature space of LR images into a group of linear subspaces in the training phase. In particular, we first partition image patches into several groups by a novel selective patch processing method based on the difference curvature of LR patches, and then learn the mixture prior models in each group. Moreover, different prior distributions have various effectiveness in SR, and in this case, we find that the student-t prior shows stronger performance than the well-known Gaussian prior. In the testing phase, we adopt the learned multiple mixture prior models to map the input LR features into the appropriate subspace, and finally reconstruct the corresponding HR image in a novel mixed matching way. Experimental results indicate that the proposed approach is both quantitatively and qualitatively superior to some state-of-the-art SR methods. |
189a391b217387514bfe599a0b6c1bbc1ccc94bb | We present a simple, new paradigm for the design of collision-free hash functions. Any function emanating from this paradigm is incremental. (This means that if a message x which I have previously hashed is modified to x' then rather than having to re-compute the hash of x' from scratch, I can quickly "update" the old hash value to the new one, in time proportional to the amount of modification made in x to get x'.) Also any function emanating from this paradigm is parallelizable, useful for hardware implementation. We derive several specific functions from our paradigm. All use a standard hash function, assumed ideal, and some algebraic operations. The first function, MuHASH, uses one modular multiplication per block of the message, making it reasonably efficient, and significantly faster than previous incremental hash functions. Its security is proven, based on the hardness of the discrete logarithm problem. A second function, AdHASH, is even faster, using additions instead of multiplications, with security proven given either that approximation of the length of shortest lattice vectors is hard or that the weighted subset sum problem is hard. A third function, LtHASH, is a practical variant of recent lattice based functions, with security proven based, again, on the hardness of shortest lattice vector approximation. |
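A hedged sketch of the AdHASH idea, combining per-block hashes by modular addition so that a one-block edit updates the digest in constant time; the modulus here is illustrative, and real parameter choices follow from the paper's security analysis:

```python
# Hedged sketch of additive incremental hashing: hash (index, block) pairs
# with a standard hash and sum modulo M. Editing block i only needs the old
# and new block, not the whole message.
import hashlib

M = 2 ** 1024  # illustrative modulus

def h(i, block):
    d = hashlib.sha256(i.to_bytes(8, "big") + block).digest()
    return int.from_bytes(d, "big")

def adhash(blocks):
    return sum(h(i, b) for i, b in enumerate(blocks)) % M

def update(digest, i, old_block, new_block):
    return (digest - h(i, old_block) + h(i, new_block)) % M

blocks = [b"alpha", b"bravo", b"charlie"]
d = adhash(blocks)
assert update(d, 1, b"bravo", b"BRAVO") == adhash([b"alpha", b"BRAVO", b"charlie"])
```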
9b9c9cc72ebc16596a618d5b78972437c9c569f6 | |
3ffce42ed3d7ac5963e03d4b6e32460ef5b29ff7 | We study the problem of creating a complete model of a physical object. Although this may be possible using intensity images, we use here range images which directly provide access to three dimensional information. The first problem that we need to solve is to find the transformation between the different views. Previous approaches have either assumed this transformation to be known (which is extremely difficult for a complete model), or computed it with feature matching (which is not accurate enough for integration). In this paper, we propose a new approach which works on range data directly, and registers successive views with enough overlapping area to get an accurate transformation between views. This is performed by minimizing a functional which does not require point to point matches. We give the details of the registration method and modeling procedure, and illustrate them on real range images of complex objects. 1 Introduction Creating models of physical objects is a necessary component of machine and biological vision modules. Such models can then be used in object recognition, pose estimation or inspection tasks. If the object of interest has been precisely designed, then such a model exists in the form of a CAD model. In many applications, however, it is either not possible or not practical to have access to such CAD models, and we need to build models from the physical object. Some researchers bypass the problem by using a model which consists of multiple views ([4], [a]), but this is not always enough. If one needs a complete model of an object, the following steps are necessary: 1. data acquisition, 2. registration between views, 3. integration of views. By view we mean the 3D surface information of the object from a specific point of view. While the integration process is very dependent on the representation scheme used, the precondition for performing integration consists of knowing the transformation between the data from different views. The goal of registration is to find such a transformation, which is also known as the correspondence problem. This problem has been at the core of many previous research efforts: Bhanu [a] developed an object modeling system for object recognition by rotating the object through known angles to acquire multiple views. Chien et al. [3] and Ahuja and Veenstra [1] used orthogonal views to construct octree object models. With these methods, … |
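A hedged sketch of the point-to-plane flavor of such registration: one linearized least-squares step that minimizes distances from transformed source points to tangent planes of the destination surface, with correspondences and normals assumed given:

```python
# Hedged sketch of a point-to-plane registration step (no point-to-point
# matches): rotation is linearized for small angles, giving a 6-parameter
# linear least-squares problem per iteration.
import numpy as np

def point_to_plane_step(src, dst, normals):
    """One step; src, dst, normals are N x 3. Returns (rotvec, translation)."""
    A = np.hstack([np.cross(src, normals), normals])   # N x 6 Jacobian
    b = np.einsum("ij,ij->i", dst - src, normals)      # signed plane distances
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]                                # small rotation, translation
```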
883b2b981dc04139800f30b23a91b8d27be85b65 | In this paper, we present an efficient 3D object recognition and pose estimation approach for grasping procedures in cluttered and occluded environments. In contrast to common appearance-based approaches, we rely solely on 3D geometry information. Our method is based on a robust geometric descriptor, a hashing technique and an efficient, localized RANSAC-like sampling strategy. We assume that each object is represented by a model consisting of a set of points with corresponding surface normals. Our method simultaneously recognizes multiple model instances and estimates their pose in the scene. A variety of tests shows that the proposed method performs well on noisy, cluttered and unsegmented range scans in which only small parts of the objects are visible. The main procedure of the algorithm has a linear time complexity resulting in a high recognition speed which allows a direct integration of the method into a continuous manipulation task. The experimental validation with a 7-degrees-of-freedom Cartesian impedance controlled robot shows how the method can be used for grasping objects from a complex random stack. This application demonstrates how the integration of computer vision and softrobotics leads to a robotic system capable of acting in unstructured and occluded environments. |
9bc8aaaf23e2578c47d5d297d1e1cbb5b067ca3a | This paper describes an approach for recognizing instances of a 3D object in a single camera image and for determining their 3D poses. A hierarchical model is generated solely based on the geometry information of a 3D CAD model of the object. The approach does not rely on texture or reflectance information of the object's surface, making it useful for a wide range of industrial and robotic applications, e.g., bin-picking. A hierarchical view-based approach that addresses typical problems of previous methods is applied: It handles true perspective, is robust to noise, occlusions, and clutter to an extent that is sufficient for many practical applications, and is invariant to contrast changes. For the generation of this hierarchical model, a new model image generation technique by which scale-space effects can be taken into account is presented. The necessary object views are derived using a similarity-based aspect graph. The high robustness of an exhaustive search is combined with an efficient hierarchical search. The 3D pose is refined by using a least-squares adjustment that minimizes geometric distances in the image, yielding a position accuracy of up to 0.12 percent with respect to the object distance, and an orientation accuracy of up to 0.35 degree in our tests. The recognition time is largely independent of the complexity of the object, but depends mainly on the range of poses within which the object may appear in front of the camera. For efficiency reasons, the approach allows the restriction of the pose range depending on the application. Typical runtimes are in the range of a few hundred ms. |
dbd66f601b325404ff3cdd7b9a1a282b2da26445 | We introduce T-LESS, a new public dataset for estimating the 6D pose, i.e. translation and rotation, of texture-less rigid objects. The dataset features thirty industry-relevant objects with no significant texture and no discriminative color or reflectance properties. The objects exhibit symmetries and mutual similarities in shape and/or size. Compared to other datasets, a unique property is that some of the objects are parts of others. The dataset includes training and test images that were captured with three synchronized sensors, specifically a structured-light and a time-of-flight RGB-D sensor and a high-resolution RGB camera. There are approximately 39K training and 10K test images from each sensor. Additionally, two types of 3D models are provided for each object, i.e. a manually created CAD model and a semi-automatically reconstructed one. Training images depict individual objects against a black background. Test images originate from twenty test scenes having varying complexity, which increases from simple scenes with several isolated objects to very challenging ones with multiple instances of several objects and with a high amount of clutter and occlusion. The images were captured from a systematically sampled view sphere around the object/scene, and are annotated with accurate ground truth 6D poses of all modeled objects. Initial evaluation results indicate that the state of the art in 6D object pose estimation has ample room for improvement, especially in difficult cases with significant occlusion. The T-LESS dataset is available online at cmp.felk.cvut.cz/t-less. |
74257c2a5c9633565c3becdb9139789bcf14b478 | Despite widespread adoption of IT control frameworks, little academic empirical research has been undertaken to investigate their use. This paper reports upon research to benchmark the maturity levels of 15 key IT control processes from the Control Objectives for Information and Related Technology (COBIT) in public sector organisations across Australia. It also makes a comparison against a similar benchmark for a mixed sector group from a range of nations, a mixed sector group from Asian-Oceanic nations, and for public sector organisations for all geographic areas. The Australian data were collected in a mail survey of the 387 non-financial public sector organisations identified as having more than 50 employees, which returned a 27% response rate. Patterns seen in the original international survey undertaken by the IS Audit and Control Association in 2002 were also seen in the Australian data. However, the Australian public sector performed better than sectors in all the international benchmarks for the 15 most important IT processes. |
0e9bac6a2b51e93e73f7f5045d4252972db10b5a | We provide a novel algorithm to approximately factor large matrices with millions of rows, millions of columns, and billions of nonzero elements. Our approach rests on stochastic gradient descent (SGD), an iterative stochastic optimization algorithm. We first develop a novel "stratified" SGD variant (SSGD) that applies to general loss-minimization problems in which the loss function can be expressed as a weighted sum of "stratum losses." We establish sufficient conditions for convergence of SSGD using results from stochastic approximation theory and regenerative process theory. We then specialize SSGD to obtain a new matrix-factorization algorithm, called DSGD, that can be fully distributed and run on web-scale datasets using, e.g., MapReduce. DSGD can handle a wide variety of matrix factorizations. We describe the practical techniques used to optimize performance in our DSGD implementation. Experiments suggest that DSGD converges significantly faster and has better scalability properties than alternative algorithms. |
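A hedged sketch of the sequential SGD kernel underlying such factorizations; DSGD's contribution is scheduling these updates over disjoint strata so they can run in parallel, and rank, step size, and regularization below are illustrative:

```python
# Hedged sketch: factorize V ~ W H from observed cells (i, j, v) by
# stochastic gradient descent on the regularized squared error.
import numpy as np

def sgd_factorize(cells, n_rows, n_cols, rank=8, lr=0.01, reg=0.05, epochs=20):
    rng = np.random.default_rng(0)
    W = rng.normal(0, 0.1, (n_rows, rank))
    H = rng.normal(0, 0.1, (rank, n_cols))
    for _ in range(epochs):
        for i, j, v in cells:
            err = v - W[i] @ H[:, j]
            wi = W[i].copy()                        # pre-update value for H's step
            W[i] += lr * (err * H[:, j] - reg * wi)
            H[:, j] += lr * (err * wi - reg * H[:, j])
    return W, H
```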
1109b663453e78a59e4f66446d71720ac58cec25 | We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classification tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat. |
062c1c1b3e280353242dd2fb3c46178b87cb5e46 | In this paper we address reinforcement learning problems with continuous state-action spaces. We propose a new algorithm, fitted natural actor-critic (FNAC), that extends the work in [1] to allow for general function approximation and data reuse. We combine the natural actor-critic architecture [1] with a variant of fitted value iteration using importance sampling. The method thus obtained combines the appealing features of both approaches while overcoming their main weaknesses: the use of a gradient-based actor readily overcomes the difficulties found in regression methods with policy optimization in continuous action-spaces; in turn, the use of a regression-based critic allows for efficient use of data and avoids convergence problems that TD-based critics often exhibit. We establish the convergence of our algorithm and illustrate its application in a simple continuous space, continuous action problem. |
f97f0902698abff8a2bc3488e8cca223e5c357a1 | Feature selection is an important aspect of solving data-mining and machine-learning problems. This paper proposes a feature-selection method for the Support Vector Machine (SVM) learning. Like most feature-selection methods, the proposed method ranks all features in decreasing order of importance so that more relevant features can be identified. It uses a novel criterion based on the probabilistic outputs of SVM. This criterion, termed Feature-based Sensitivity of Posterior Probabilities (FSPP), evaluates the importance of a specific feature by computing the aggregate value, over the feature space, of the absolute difference of the probabilistic outputs of SVM with and without the feature. The exact form of this criterion is not easily computable and approximation is needed. Four approximations, FSPP1-FSPP4, are proposed for this purpose. The first two approximations evaluate the criterion by randomly permuting the values of the feature among samples of the training data. They differ in their choices of the mapping function from standard SVM output to its probabilistic output: FSPP1 uses a simple threshold function while FSPP2 uses a sigmoid function. The second two directly approximate the criterion but differ in the smoothness assumptions of criterion with respect to the features. The performance of these approximations, used in an overall feature-selection scheme, is then evaluated on various artificial problems and real-world problems, including datasets from the recent Neural Information Processing Systems (NIPS) feature selection competition. FSPP1-3 show good performance consistently with FSPP2 being the best overall by a slight margin. The performance of FSPP2 is competitive with some of the best performing feature-selection methods in the literature on the datasets that we have tested. Its associated computations are modest and hence it is suitable as a feature-selection method for SVM applications. |
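A hedged sketch of the permutation-based approximations (in the spirit of FSPP1/FSPP2), using scikit-learn's probabilistic SVM outputs; the aggregation shown is a simplified stand-in for the criterion's integral over the feature space:

```python
# Hedged sketch: permute one feature at a time and aggregate the absolute
# change in the SVM's probabilistic outputs; a larger change means the
# feature carries more information for the posterior.
import numpy as np
from sklearn.svm import SVC

def fspp_scores(X, y, seed=0):
    clf = SVC(probability=True).fit(X, y)      # sigmoid-mapped probabilistic output
    base = clf.predict_proba(X)
    rng = np.random.default_rng(seed)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])   # destroy feature j's information
        scores.append(np.abs(clf.predict_proba(Xp) - base).mean())
    return np.array(scores)                    # rank features by descending score
```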
a1c5a6438d3591819e730d8aecb776a52130c33d | A compact microstrip lowpass filter (LPF) with ultra-wide stopband using a transformed stepped impedance hairpin resonator is proposed. The transformed resonator consists of a stepped impedance hairpin resonator and an embedded hexagon stub loaded coupled-line structure. Without enlarging the size, the embedded structure is introduced to obtain a broad stopband. A prototype LPF has been simulated, fabricated and measured, and the measurements are in good agreement with simulations. The implemented lowpass filter exhibits an ultra-wide stopband up to 12.01 fc with a rejection level of 14 dB. In addition, the proposed filter features a size of 0.071 λg × 0.103 λg, where λg is the guided wavelength at the cutoff frequency of 1.45 GHz. |
70d2d4b07b5c65ef4866c7fd61f9620bffa01e29 | Climate change and rainfall have been erratic over the past decade. Because of this, in recent years many Indian farmers have adopted climate-smart methods collectively called smart agriculture. Smart agriculture is an automated and directed information technology implemented with the IOT (Internet of Things). IOT is developing rapidly and is widely applied in all wireless environments. In this paper, the integration of sensor technology and wireless networks with IOT technology has been studied and reviewed based on the actual situation of agricultural systems. A combined approach with internet and wireless communications, a Remote Monitoring System (RMS), is proposed. The major objective is to collect real-time data on the agricultural production environment and to provide easy access to agricultural facilities such as alerts through Short Messaging Service (SMS) and advice on weather patterns, crops, etc. |
ea88b58158395aefbb27f4706a18dfa2fd7daa89 | Despite the considerable amount of self-disclosure in Online Social Networks (OSN), the motivation behind this phenomenon is still little understood. Building on the Privacy Calculus theory, this study fills this gap by taking a closer look at the factors behind individual self-disclosure decisions. In a Structural Equation Model with 237 subjects we find Perceived Enjoyment and Privacy Concerns to be significant determinants of information revelation. We confirm that the privacy concerns of OSN users are primarily determined by the perceived likelihood of a privacy violation and much less by the expected damage. These insights provide a solid basis for OSN providers and policy-makers in their effort to ensure healthy disclosure levels that are based on objective rationale rather than subjective misconceptions. |
9dbfcf610da740396b2b9fd75c7032f0b94896d7 | Applications that interact with database management systems (DBMSs) are ubiquitous. Such database applications are usually hosted on an application server and perform many small accesses over the network to a DBMS hosted on the database server to retrieve data for processing. For decades, the database and programming systems research communities have worked on optimizing such applications from different perspectives: database researchers have built highly efficient DBMSs, and programming systems researchers have developed specialized compilers and runtime systems for hosting applications. However, there has been relatively little work that optimizes database applications by considering these specialized systems in combination and looking for optimization opportunities that span across them. In this article, we highlight three projects that optimize database applications by looking at both the programming system and the DBMS in a holistic manner. By carefully revisiting the interface between the DBMS and the application, and by applying a mix of declarative database optimization and modern program analysis techniques, we show that a speedup of multiple orders of magnitude is possible in real-world applications. |
fdc3948f5fec24eb7cd4178aee9732ab284f1f1c | A hybrid multi-mode narrow-frame antenna for WWAN/LTE metal-rimmed smartphone applications is proposed in this paper. The ground clearance is only 5 mm × 45 mm, which is promising for narrow-frame smartphones. The metal rim with a small gap is connected to the system ground by three grounded patches. This proposed antenna can excite three coupled-loop modes and one slot mode. By incorporating these four modes, the proposed antenna can provide coverage for GSM850/900, DCS/PCS/UMTS2100, and LTE2300/2500 operations. Detailed design considerations of the proposed antenna are described, and both experimental and simulated results are also presented. |
021f37e9da69ea46fba9d2bf4e7ca3e8ba7b3448 | An ultrawideband solar Vivaldi antenna is proposed. Cut from amorphous silicon cells, it maintains a peak power at 4.25 V, which overcomes the need for lossy power management components. The wireless communications device can yield solar energy or function as a rectenna for dual-source energy harvesting. The solar Vivaldi performs with 0.5-2.8 dBi gain from 0.95-2.45 GHz, and in rectenna mode, it covers three bands for wireless energy scavenging. |
592a6d781309423ceb95502e92e577ef5656de0d | Neural encoder-decoder models of machine translation have achieved impressive results, rivalling traditional translation models. However their modelling formulation is overly simplistic, and omits several key inductive biases built into traditional models. In this paper we extend the attentional neural translation model to include structural biases from word-based alignment models, including positional bias, Markov conditioning, fertility and agreement over translation directions. We show improvements over a baseline attentional model and a standard phrase-based model over several language pairs, evaluating on difficult languages in a low-resource setting. |
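A hedged sketch of one such structural bias: a positional term added to the attention logits that favors near-diagonal alignments, as word-based alignment models do. The Gaussian form and width are illustrative assumptions, not the paper's exact parameterization:

```python
# Hedged sketch: combine content-based attention logits with a positional
# bias centered on the diagonal alignment expected between source and
# target positions.
import numpy as np

def biased_attention(content_scores, tgt_pos, tgt_len, width=2.0):
    src_len = len(content_scores)
    positions = np.arange(src_len)
    expected = tgt_pos * src_len / tgt_len          # diagonal alignment prior
    bias = -((positions - expected) ** 2) / (2 * width ** 2)
    logits = content_scores + bias
    e = np.exp(logits - logits.max())
    return e / e.sum()                              # attention weights
```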
9ebe089caca6d78ff525856c7a828884724b9039 | Bayesian approaches provide a principled solution to the exploration-exploitation trade-off in Reinforcement Learning. Typical approaches, however, either assume a fully observable environment or scale poorly. This work introduces the Factored Bayes-Adaptive POMDP model, a framework that is able to exploit the underlying structure while learning the dynamics in partially observable systems. We also present a belief tracking method to approximate the joint posterior over state and model variables, and an adaptation of the Monte-Carlo Tree Search solution method, which together are capable of solving the underlying problem near-optimally. Our method is able to learn efficiently given a known factorization or also learn the factorization and the model parameters at the same time. We demonstrate that this approach is able to outperform current methods and tackle problems that were previously infeasible. |
b3a18280f63844e2178d8f82bc369fcf3ae6d161 | Word embedding is a popular framework that represents text data as vectors of real numbers. These vectors capture semantics in language, and are used in a variety of natural language processing and machine learning applications. Despite these useful properties, word embeddings derived from ordinary language corpora necessarily exhibit human biases [6]. We measure direct and indirect gender bias for occupation word vectors produced by the GloVe word embedding algorithm [9], then modify this algorithm to produce an embedding with less bias, mitigating the amplification of bias in downstream applications that utilize this embedding. |
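A hedged sketch of a direct-bias measurement: project occupation vectors onto a gender direction and average the cosine magnitudes. Using the normalized he-she difference as the direction is one common choice, assumed here:

```python
# Hedged sketch: direct bias as the mean |cosine| between occupation word
# vectors and a gender direction g.
import numpy as np

def direct_bias(vec, occupations):
    g = vec["he"] - vec["she"]
    g = g / np.linalg.norm(g)
    cosines = [abs(vec[w] @ g) / np.linalg.norm(vec[w]) for w in occupations]
    return float(np.mean(cosines))   # 0 would mean no component along g

# vec is any word -> vector mapping (e.g., loaded GloVe vectors); debiasing
# approaches then remove each neutral word's projection onto g.
```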
08a6e999532544e83618c16a96f6d4c7356bc140 | |
0c35a65a99af8202fe966c5e7bee00dea7cfcbf8 | This article describes the software architecture of an autonomous, interactive tour-guide robot. It presents a modular and distributed software architecture, which integrates localization, mapping, collision avoidance, planning, and various modules concerned with user interaction and Web-based telepresence. At its heart, the software approach relies on probabilistic computation, on-line learning, and any-time algorithms. It enables robots to operate safely, reliably, and at high speeds in highly dynamic environments, and does not require any modifications of the environment to aid the robot's operation. Special emphasis is placed on the design of interactive capabilities that appeal to people's intuition. The interface provides new means for human-robot interaction with crowds of people in public places, and it also provides people all around the world with the ability to establish a "virtual telepresence" using the Web. To illustrate our approach, results are reported that were obtained in mid-1997, when our robot "RHINO" was deployed for a period of six days in a densely populated museum. The empirical results demonstrate reliable operation in public environments. The robot successfully raised the museum's attendance by more than 50%. In addition, thousands of people all over the world controlled the robot through the Web. We conjecture that these innovations transcend to a much larger range of application domains for service robots. |
66479c2251088dae51c228341c26164f21250593 | |
2c521847f2c6801d8219a1a2e9f4e196798dd07d | |
c0e97ca70fe29db4ceb834464576b699ef8874b1 | This letter presents a novel semantic mapping approach, Recurrent-OctoMap, learned from long-term three-dimensional (3-D) Lidar data. Most existing semantic mapping approaches focus on improving semantic understanding of single frames, rather than 3-D refinement of semantic maps (i.e. fusing semantic observations). The most widely used approach for the 3-D semantic map refinement is “Bayes update,” which fuses the consecutive predictive probabilities following a Markov-chain model. Instead, we propose a learning approach to fuse the semantic features, rather than simply fusing predictions from a classifier. In our approach, we represent and maintain our 3-D map as an OctoMap, and model each cell as a recurrent neural network, to obtain a Recurrent-OctoMap. In this case, the semantic mapping process can be formulated as a sequence-to-sequence encoding–decoding problem. Moreover, in order to extend the duration of observations in our Recurrent-OctoMap, we developed a robust 3-D localization and mapping system for successively mapping a dynamic environment using more than two weeks of data, and the system can be trained and deployed with arbitrary memory length. We validate our approach on the ETH long-term 3-D Lidar dataset. The experimental results show that our proposed approach outperforms the conventional “Bayes update” approach. |
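A hedged sketch of the "Bayes update" baseline the letter argues against, fusing per-frame class probabilities for a single map cell; Recurrent-OctoMap instead feeds per-frame semantic features to a learned recurrent cell:

```python
# Hedged sketch of the fixed fusion rule: accumulate log-probabilities of
# the per-frame class predictions for one cell (Markov-chain style) and
# take the argmax as the fused label.
import numpy as np

def bayes_fuse(prob_sequence, eps=1e-6):
    """prob_sequence: T x C array of per-frame class probabilities."""
    log_post = np.zeros(prob_sequence.shape[1])
    for p in prob_sequence:
        log_post += np.log(np.clip(p, eps, 1.0))   # multiply frame predictions
    return int(np.argmax(log_post))                # fused class for the cell
```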
1d3ddcefe4d5fefca04fe730ca73312e2c588b3b | Student retention is an essential part of many enrollment management systems. It affects university rankings, school reputation, and financial wellbeing. Student retention has become one of the most important priorities for decision makers in higher education institutions. Improving student retention starts with a thorough understanding of the reasons behind the attrition. Such an understanding is the basis for accurately predicting at-risk students and appropriately intervening to retain them. In this study, using five years of institutional data along with several data mining techniques (both individual models and ensembles), we developed analytical models to predict and to explain the reasons behind freshmen student attrition. The comparative analysis results showed that the ensembles performed better than individual models, while the balanced dataset produced better prediction results than the unbalanced dataset. |
1b3b22b95ab55853aff3ea980a5b4a76b7537980 | A reported weakness of C4.5 in domains with continuous attributes is addressed by modifying the formation and evaluation of tests on continuous attributes. An MDL-inspired penalty is applied to such tests, eliminating some of them from consideration and altering the relative desirability of all tests. Empirical trials show that the modifications lead to smaller decision trees with higher predictive accuracies. Results also confirm that a new version of C4.5 incorporating these changes is superior to recent approaches that use global discretization and that construct small trees with multi-interval splits. |
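A hedged sketch of the penalized evaluation: the information gain of a threshold test on a continuous attribute is reduced by a charge of log2(N - 1)/|D| for choosing among the N - 1 candidate thresholds over |D| training cases, which matches the MDL argument described above (details treated as illustrative here):

```python
# Hedged sketch: information gain of a threshold split, minus the
# MDL-inspired charge for selecting the threshold.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    if n == 0:
        return 0.0
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def penalized_gain(values, labels, threshold):
    left = [l for v, l in zip(values, labels) if v <= threshold]
    right = [l for v, l in zip(values, labels) if v > threshold]
    n = len(labels)
    gain = (entropy(labels)
            - len(left) / n * entropy(left)
            - len(right) / n * entropy(right))
    n_thresholds = len(set(values)) - 1            # candidate split points
    penalty = math.log2(n_thresholds) / n if n_thresholds > 1 else 0.0
    return gain - penalty
```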
1060ff9852dc12e05ec44bee7268efdc76f7535d | Computing optical flow between any pair of Internet face photos is challenging for most current state of the art flow estimation methods due to differences in illumination, pose, and geometry. We show that flow estimation can be dramatically improved by leveraging a large photo collection of the same (or similar) object. In particular, consider the case of photos of a celebrity from Google Image Search. Any two such photos may have different facial expression, lighting and face orientation. The key idea is that instead of computing flow directly between the input pair (I, J), we compute versions of the images (I', J') in which facial expressions and pose are normalized while lighting is preserved. This is achieved by iteratively projecting each photo onto an appearance subspace formed from the full photo collection. The desired flow is obtained through concatenation of flows (I → I') o (J' → J). Our approach can be used with any two-frame optical flow algorithm, and significantly boosts the performance of the algorithm by providing invariance to lighting and shape changes. |
823964b144009f7c395cd09de9a70fe06542cc84 | Electrical power generation is changing dramatically across the world because of the need to reduce greenhouse gas emissions and to introduce mixed energy sources. The power network faces great challenges in transmission and distribution to meet demand with unpredictable daily and seasonal variations. Electrical Energy Storage (EES) is recognized as an underpinning technology with great potential for meeting these challenges, whereby energy is stored in a certain state, according to the technology used, and is converted to electrical energy when needed. However, the wide variety of options and complex characteristic matrices make it difficult to appraise a specific EES technology for a particular application. This paper intends to mitigate this problem by providing a comprehensive and clear picture of the state-of-the-art technologies available, and where they would be suited for integration into a power generation and distribution system. The paper starts with an overview of the operation principles, technical and economic performance features and the current research and development of important EES technologies, sorted into six main categories based on the types of energy stored. Following this, a comprehensive comparison and an application potential analysis of the reviewed technologies are presented. |