Unnamed: 0 (int64, 0-41k) | title (string, length 4-274) | category (string, length 5-18) | summary (string, length 22-3.66k) | theme (string, 8 classes)
---|---|---|---|---
40,911 | An Architecture of Active Learning SVMs with Relevance Feedback for
Classifying E-mail | cs.IR | In this paper, we have proposed an architecture of active learning SVMs with relevance feedback (RF) for classifying e-mail. This architecture combines active learning, where instead of using a randomly selected training set the learner has access to a pool of unlabeled instances and can request the labels of some of them, with relevance feedback, where if any mail is misclassified the next set of support vectors will differ from the present set, and otherwise it will not change. The proposed architecture ensures that a legitimate e-mail will not be dropped in the event of an overflowing mailbox. It also exhibits dynamic updating characteristics, making life as difficult for the spammer as possible. | computer science |
40,912 | Indexability, concentration, and VC theory | cs.DS | Degrading performance of indexing schemes for exact similarity search in high
dimensions has long since been linked to histograms of distributions of
distances and other 1-Lipschitz functions getting concentrated. We discuss this
observation in the framework of the phenomenon of concentration of measure on
the structures of high dimension and the Vapnik-Chervonenkis theory of
statistical learning. | computer science |
40,913 | A Smoothing Stochastic Gradient Method for Composite Optimization | math.OC | We consider the unconstrained optimization problem whose objective function
is composed of a smooth and a non-smooth component, where the smooth component
is the expectation of a random function. This type of problem arises in some
interesting applications in machine learning. We propose a stochastic gradient
descent algorithm for this class of optimization problem. When the non-smooth
component has a particular structure, we propose another stochastic gradient
descent algorithm by incorporating a smoothing method into our first algorithm.
The proofs of the convergence rates of these two algorithms are given and we
show the numerical performance of our algorithms by applying them to regularized
linear regression problems with different sets of synthetic data. | computer science |
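
The abstract above describes composite (smooth plus non-smooth) stochastic optimization only at a high level. As a hedged illustration of the general setting, and not the authors' algorithm, the following sketch applies a plain proximal stochastic gradient step to $\ell_1$-regularized linear regression on synthetic data; the step size, regularization weight, and data are assumptions made for the example.

```python
import numpy as np

def soft_threshold(x, t):
    # proximal operator of t * ||x||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_sgd(X, y, lam=0.1, step=0.01, epochs=20, seed=0):
    """Proximal stochastic gradient for min_w E[(x^T w - y)^2] + lam*||w||_1."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            grad = 2.0 * (X[i] @ w - y[i]) * X[i]   # stochastic gradient of the smooth part
            w = soft_threshold(w - step * grad, step * lam)
    return w

# synthetic data, in the spirit of the experiments described above
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
w_true = np.zeros(10)
w_true[:3] = [1.0, -2.0, 0.5]
y = X @ w_true + 0.1 * rng.normal(size=200)
print(prox_sgd(X, y))
```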
40,914 | Clustering under Perturbation Resilience | cs.LG | Motivated by the fact that distances between data points in many real-world
clustering instances are often based on heuristic measures, Bilu and
Linial~\cite{BL} proposed analyzing objective based clustering problems under
the assumption that the optimum clustering to the objective is preserved under
small multiplicative perturbations to distances between points. The hope is
that by exploiting the structure in such instances, one can overcome worst case
hardness results.
In this paper, we provide several results within this framework. For
center-based objectives, we present an algorithm that can optimally cluster
instances resilient to perturbations of factor $(1 + \sqrt{2})$, solving an
open problem of Awasthi et al.~\cite{ABS10}. For $k$-median, a center-based
objective of special interest, we additionally give algorithms for a more
relaxed assumption in which we allow the optimal solution to change in a small
$\epsilon$ fraction of the points after perturbation. We give the first bounds
known for $k$-median under this more realistic and more general assumption. We
also provide positive results for min-sum clustering, which is typically a
harder objective than center-based objectives from an approximability standpoint.
Our algorithms are based on new linkage criteria that may be of independent
interest.
Additionally, we give sublinear-time algorithms that can return an implicit clustering from access to only a small random sample. | computer science |
40,915 | Multi-timescale Nexting in a Reinforcement Learning Robot | cs.LG | The term "nexting" has been used by psychologists to refer to the propensity
of people and many other animals to continually predict what will happen next
in an immediate, local, and personal sense. The ability to "next" constitutes a
basic kind of awareness and knowledge of one's environment. In this paper we
present results with a robot that learns to next in real time, predicting
thousands of features of the world's state, including all sensory inputs, at
timescales from 0.1 to 8 seconds. This was achieved by treating each state
feature as a reward-like target and applying temporal-difference methods to
learn a corresponding value function with a discount rate corresponding to the
timescale. We show that two thousand predictions, each dependent on six
thousand state features, can be learned and updated online at better than 10Hz
on a laptop computer, using the standard TD(lambda) algorithm with linear
function approximation. We show that this approach is efficient enough to be
practical, with most of the learning complete within 30 minutes. We also show
that a single tile-coded feature representation suffices to accurately predict
many different signals at a significant range of timescales. Finally, we show
that the accuracy of our learned predictions compares favorably with the
optimal off-line solution. | computer science |
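
For readers unfamiliar with the learning rule named above, here is a minimal sketch of TD(lambda) with linear function approximation predicting a single signal at one timescale. It illustrates only the general technique, not the robot system from the abstract; the binary features, constants, and signal are synthetic stand-ins.

```python
import numpy as np

def td_lambda_nexting(features, signal, gamma=0.9875, lam=0.9, alpha=0.1):
    """Learn a value function predicting the discounted sum of `signal`.

    features: (T, d) array of binary state features (e.g. from tile coding)
    signal:   (T,) array treated as a reward-like target
    gamma:    discount; 0.9875 corresponds to roughly an 8 s timescale at 10 Hz
    """
    T, d = features.shape
    w = np.zeros(d)
    z = np.zeros(d)                                              # eligibility trace
    for t in range(T - 1):
        phi, phi_next = features[t], features[t + 1]
        delta = signal[t] + gamma * (phi_next @ w) - (phi @ w)   # TD error
        z = gamma * lam * z + phi
        w += (alpha / max(phi.sum(), 1)) * delta * z             # step size per active feature
    return w

# toy example with random binary features and a noisy sensory signal
rng = np.random.default_rng(1)
feats = (rng.random((1000, 50)) < 0.2).astype(float)
sig = rng.normal(size=1000)
print(td_lambda_nexting(feats, sig)[:5])
```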
40,916 | SLA Establishment with Guaranteed QoS in the Interdomain Network: A
Stock Model | cs.NI | The new model that we present in this paper is introduced in the context of
guaranteed QoS and resources management in the inter-domain routing framework.
This model, called the stock model, is based on a reverse cascade approach and
is applied in a distributed context. Transit providers thus have to learn the
right capacities to buy and to stock, and therefore learning theory is applied
through an iterative process. We show that transit providers manage to learn
how to strategically choose their capacities on each route in order to maximize
their benefits, despite the very incomplete information. Finally, we provide
and analyse some simulation results given by the application of the model in a
simple case where the model quickly converges to a stable state. | computer science |
40,917 | Using Taxonomies to Facilitate the Analysis of the Association Rules | cs.DB | The Data Mining process enables the end users to analyze, understand and use
the extracted knowledge in an intelligent system or to support in the
decision-making processes. However, many algorithms used in the process
generate large quantities of patterns, complicating the analysis of those
patterns. This occurs with association rules, a Data Mining technique that
tries to identify intrinsic patterns in large data sets. A method that can help
the analysis of the association rules is the use of taxonomies in the knowledge
post-processing step. In this paper, the GART algorithm is proposed, which
uses taxonomies to generalize association rules, and the RulEE-GAR
computational module, that enables the analysis of the generalized rules. | computer science |
40,918 | Chinese Restaurant Game - Part II: Applications to Wireless Networking,
Cloud Computing, and Online Social Networking | cs.SI | In Part I of this two-part paper [1], we proposed a new game, called Chinese
restaurant game, to analyze the social learning problem with negative network
externality. The best responses of agents in the Chinese restaurant game with
imperfect signals are constructed through a recursive method, and the influence
of both learning and network externality on the utilities of agents is studied.
In Part II of this two-part paper, we illustrate three applications of Chinese
restaurant game in wireless networking, cloud computing, and online social
networking. For each application, we formulate the corresponding problem as a
Chinese restaurant game and analyze how agents learn and make strategic
decisions in the problem. The proposed method is compared with four
common-sense methods in terms of agents' utilities and the overall system
performance through simulations. We find that the proposed Chinese restaurant
game theoretic approach indeed helps agents make better decisions and improves
the overall system performance. Furthermore, agents with different decision
orders have different advantages in terms of their utilities, which also
verifies the conclusions drawn in Part I of this two-part paper. | computer science |
40,919 | Chinese Restaurant Game - Part I: Theory of Learning with Negative
Network Externality | cs.SI | In a social network, agents are intelligent and have the capability to make
decisions to maximize their utilities. They can either make wise decisions by
taking advantage of other agents' experiences through learning, or make
decisions earlier to avoid competitions from huge crowds. Both these two
effects, social learning and negative network externality, play important roles
in the decision process of an agent. While there are existing works on either
social learning or negative network externality, a general study considering
both of these contradictory effects is still limited. We find that the Chinese
restaurant process, a popular random process, provides a well-defined structure
to model the decision process of an agent under these two effects. By
introducing the strategic behavior into the non-strategic Chinese restaurant
process, in Part I of this two-part paper, we propose a new game, called
Chinese Restaurant Game, to formulate the social learning problem with negative
network externality. Through analyzing the proposed Chinese restaurant game, we
derive the optimal strategy of each agent and provide a recursive method to
achieve the optimal strategy. How social learning and negative network
externality influence each other under various settings is also studied through
simulations. | computer science |
40,920 | Low-rank optimization with trace norm penalty | math.OC | The paper addresses the problem of low-rank trace norm minimization. We
propose an algorithm that alternates between fixed-rank optimization and
rank-one updates. The fixed-rank optimization is characterized by an efficient
factorization that makes the trace norm differentiable in the search space and
the computation of duality gap numerically tractable. The search space is
nonlinear but is equipped with a particular Riemannian structure that leads to
efficient computations. We present a second-order trust-region algorithm with a
guaranteed quadratic rate of convergence. Overall, the proposed optimization
scheme converges super-linearly to the global solution while maintaining
complexity that is linear in the number of rows and columns of the matrix. To
compute a set of solutions efficiently for a grid of regularization parameters
we propose a predictor-corrector approach that outperforms the naive
warm-restart approach on the fixed-rank quotient manifold. The performance of
the proposed algorithm is illustrated on problems of low-rank matrix completion
and multivariate linear regression. | computer science |
40,921 | A new order theory of set systems and better quasi-orderings | math.CO | By reformulating a learning process of a set system L as a game between
Teacher (presenter of data) and Learner (updater of the abstract independent
set), we define the order type dim L of L to be the order type of the game
tree. The theory of this new order type and continuous, monotone function
between set systems corresponds to the theory of well quasi-orderings (WQOs).
As Nash-Williams developed the theory of WQOs to the theory of better
quasi-orderings (BQOs), we introduce a set system that has order type and
corresponds to a BQO. We prove that the class of set systems corresponding to
BQOs is closed under any monotone function. In (Shinohara and Arimura, "Inductive
inference of unbounded unions of pattern languages from positive data."
Theoretical Computer Science, pp. 191-209, 2000), for any set system L, they
considered the class of arbitrary (finite) unions of members of L. From the
viewpoint of WQOs and BQOs, we characterize the set systems L such that the
class of arbitrary (finite) unions of members of L has order type. The
characterization shows that the order structure of the set system L with
respect to the set-inclusion is not important for the resulting set system
having order type. We point out that a continuous, monotone function of set systems is similar to a positive reduction to Jockusch-Owings' weakly semirecursive sets. | computer science |
40,922 | Online Learning for Classification of Low-rank Representation Features
and Its Applications in Audio Segment Classification | cs.LG | In this paper, a novel framework based on trace norm minimization for audio
segment classification is proposed. In this framework, both the feature extraction and
classification are obtained by solving corresponding convex optimization
problem with trace norm regularization. For feature extraction, robust
principal component analysis (robust PCA) via minimization of a combination of the
nuclear norm and the $\ell_1$-norm is used to extract low-rank features which
are robust to white noise and gross corruption for audio segments. These
low-rank features are fed to a linear classifier where the weight and bias are
learned by solving similar trace norm constrained problems. For this
classifier, most methods find the weight and bias in batch-mode learning, which
makes them inefficient for large-scale problems. In this paper, we propose an
online framework using accelerated proximal gradient method. This framework has
a main advantage in memory cost. In addition, as a result of the regularization
formulation of matrix classification, the Lipschitz constant was given
explicitly, and hence the step size estimation of the general proximal gradient
method was omitted in our approach. Experiments on real data sets for
laugh/non-laugh and applause/non-applause classification indicate that this
novel framework is effective and noise robust. | computer science |
40,923 | A Scalable Multiclass Algorithm for Node Classification | cs.LG | We introduce a scalable algorithm, MUCCA, for multiclass node classification
in weighted graphs. Unlike previously proposed methods for the same task, MUCCA
works in time linear in the number of nodes. Our approach is based on a
game-theoretic formulation of the problem in which the test labels are
expressed as a Nash Equilibrium of a certain game. However, in order to achieve
scalability, we find the equilibrium on a spanning tree of the original graph.
Experiments on real-world data reveal that MUCCA is much faster than its
competitors while achieving a similar predictive performance. | computer science |
40,924 | Ordinal Rating of Network Performance and Inference by Matrix Completion | cs.NI | This paper addresses the large-scale acquisition of end-to-end network
performance. We made two distinct contributions: ordinal rating of network
performance and inference by matrix completion. The former reduces measurement
costs and unifies various metrics which eases their processing in applications.
The latter enables scalable and accurate inference with no requirement for
structural information about the network or for geometric constraints. By combining
both, the acquisition problem bears strong similarities to recommender systems.
This paper investigates the applicability of various matrix factorization
models used in recommender systems. We found that the simple regularized matrix
factorization is not only practical but also produces accurate results that are
beneficial for peer selection. | computer science |
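
The abstract draws an analogy to recommender systems. The sketch below shows a generic regularized matrix factorization fitted by stochastic gradient descent on the observed entries of a partially observed rating matrix; it is an assumed, simplified stand-in for the models investigated in the paper, and the toy ordinal ratings are synthetic.

```python
import numpy as np

def matrix_factorization(R, mask, rank=5, lam=0.1, step=0.01, epochs=50, seed=0):
    """Regularized MF: approximate R ~ U V^T using only observed entries (mask==True)."""
    rng = np.random.default_rng(seed)
    n, m = R.shape
    U = 0.1 * rng.normal(size=(n, rank))
    V = 0.1 * rng.normal(size=(m, rank))
    rows, cols = np.nonzero(mask)
    for _ in range(epochs):
        for i, j in zip(rows, cols):
            err = R[i, j] - U[i] @ V[j]
            ui = U[i].copy()                       # keep pre-update row for the V step
            U[i] += step * (err * V[j] - lam * U[i])
            V[j] += step * (err * ui - lam * V[j])
    return U, V

# toy ordinal ratings (1-5) with roughly 60% of entries observed
rng = np.random.default_rng(0)
R = rng.integers(1, 6, size=(30, 40)).astype(float)
mask = rng.random((30, 40)) < 0.6
U, V = matrix_factorization(R, mask)
print(np.round((U @ V.T)[:2, :5], 2))
```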
40,925 | The complexity of learning halfspaces using generalized linear methods | cs.LG | Many popular learning algorithms (e.g., regression, Fourier-transform based
algorithms, kernel SVM and kernel ridge regression) operate by reducing the
problem to a convex optimization problem over a vector space of functions.
These methods offer the currently best approach to several central problems
such as learning halfspaces and learning DNFs. In addition they are widely
used in numerous application domains. Despite their importance, there are still
very few proof techniques to show limits on the power of these algorithms.
We study the performance of this approach in the problem of (agnostically and
improperly) learning halfspaces with margin $\gamma$. Let $\mathcal{D}$ be a
distribution over labeled examples. The $\gamma$-margin error of a hyperplane
$h$ is the probability that an example falls on the wrong side of $h$ or at a
distance $\le\gamma$ from it. The $\gamma$-margin error of the best $h$ is
denoted $\mathrm{Err}_\gamma(\mathcal{D})$. An $\alpha(\gamma)$-approximation
algorithm receives $\gamma,\epsilon$ as input and, using i.i.d. samples of
$\mathcal{D}$, outputs a classifier with error rate $\le
\alpha(\gamma)\mathrm{Err}_\gamma(\mathcal{D}) + \epsilon$. Such an algorithm
is efficient if it uses $\mathrm{poly}(\frac{1}{\gamma},\frac{1}{\epsilon})$
samples and runs in time polynomial in the sample size.
The best approximation ratio achievable by an efficient algorithm is
$O\left(\frac{1/\gamma}{\sqrt{\log(1/\gamma)}}\right)$ and is achieved using an
algorithm from the above class. Our main result shows that the approximation
ratio of every efficient algorithm from this family must be $\ge
\Omega\left(\frac{1/\gamma}{\mathrm{poly}\left(\log\left(1/\gamma\right)\right)}\right)$,
essentially matching the best known upper bound. | computer science |
40,926 | Explosion prediction of oil gas using SVM and Logistic Regression | cs.CE | The prevention of dangerous chemical accidents is a primary problem of
industrial manufacturing. In the accidents of dangerous chemicals, the oil gas
explosion plays an important role. The essential task of explosion prevention
is to better estimate the explosion limit of a given oil gas. In
this paper, Support Vector Machines (SVM) and Logistic Regression (LR) are used
to predict the explosion of oil gas. LR can provide the explicit probability
formula of explosion, and the explosive range of the concentrations of oil gas
according to the concentration of oxygen. Meanwhile, SVM gives higher accuracy
of prediction. Furthermore, considering the practical requirements, the effects
of penalty parameter on the distribution of two types of errors are discussed. | computer science |
40,927 | On Calibrated Predictions for Auction Selection Mechanisms | cs.GT | Calibration is a basic property for prediction systems, and algorithms for
achieving it are well-studied in both statistics and machine learning. In many
applications, however, the predictions are used to make decisions that select
which observations are made. This makes calibration difficult, as adjusting
predictions to achieve calibration changes future data. We focus on
click-through-rate (CTR) prediction for search ad auctions. Here, CTR
predictions are used by an auction that determines which ads are shown, and we
want to maximize the value generated by the auction.
We show that certain natural notions of calibration can be impossible to
achieve, depending on the details of the auction. We also show that it can be
impossible to maximize auction efficiency while using calibrated predictions.
Finally, we give conditions under which calibration is achievable and
simultaneously maximizes auction efficiency: roughly speaking, bids and queries
must not contain information about CTRs that is not already captured by the
predictions. | computer science |
40,929 | Online Stochastic Optimization with Multiple Objectives | cs.LG | In this paper we propose a general framework to characterize and solve the
stochastic optimization problems with multiple objectives underlying many real
world learning applications. We first propose a projection based algorithm
which attains an $O(T^{-1/3})$ convergence rate. Then, by leveraging the theory
of the Lagrangian in constrained optimization, we devise a novel primal-dual
stochastic approximation algorithm which attains the optimal convergence rate
of $O(T^{-1/2})$ for general Lipschitz continuous objectives. | computer science |
40,930 | Quantum support vector machine for big data classification | cs.LG | Supervised machine learning is the classification of new data based on
already classified training examples. In this work, we show that the support
vector machine, an optimized binary classifier, can be implemented on a quantum
computer, with complexity logarithmic in the size of the vectors and the number
of training examples. In cases when classical sampling algorithms require
polynomial time, an exponential speed-up is obtained. At the core of this
quantum big data algorithm is a non-sparse matrix exponentiation technique for
efficiently performing a matrix inversion of the training data inner-product
(kernel) matrix. | computer science |
40,931 | Investigating the Detection of Adverse Drug Events in a UK General
Practice Electronic Health-Care Database | cs.CE | Data-mining techniques have frequently been developed for spontaneous
reporting databases. These techniques aim to find adverse drug events
accurately and efficiently. Spontaneous reporting databases are prone to
missing information, under-reporting and incorrect entries. This often results
in a detection lag or prevents the detection of some adverse drug events. These
limitations do not occur in electronic health-care databases. In this paper,
existing methods developed for spontaneous reporting databases are implemented
on both a spontaneous reporting database and a general practice electronic
health-care database and compared. The results suggest that the application of
existing methods to the general practice database may help find signals that
have gone undetected when using the spontaneous reporting system database. In
addition, the general practice database provides far more supplementary
information that, if incorporated in the analysis, could provide a wealth of
information for identifying adverse events more accurately. | computer science |
40,932 | Application of a clustering framework to UK domestic electricity data | cs.CE | This paper takes an approach to clustering domestic electricity load profiles
that has been successfully used with data from Portugal and applies it to UK
data. Clustering techniques are applied and it is found that the preferred
technique in the Portuguese work (a two-stage process combining Self-Organising
Maps and k-means) is not appropriate for the UK data. The work shows that up to
nine clusters of households can be identified with the differences in usage
profiles being visually striking. This demonstrates the appropriateness of
breaking the electricity usage patterns down to more detail than the two load
profiles currently published by the electricity industry. The paper details
initial results using data collected in Milton Keynes around 1990. Further work
is described and will concentrate on building accurate and meaningful clusters
of similar electricity users in order to better direct demand side management
initiatives to the most relevant target customers. | computer science |
40,933 | Creating Personalised Energy Plans. From Groups to Individuals using
Fuzzy C Means Clustering | cs.CE | Changes in the UK electricity market mean that domestic users will be
required to modify their usage behaviour in order that supplies can be
maintained. Clustering allows usage profiles collected at the household level
to be clustered into groups and assigned a stereotypical profile which can be
used to target marketing campaigns. Fuzzy C Means clustering extends this by
allowing each household to be a member of many groups and hence provides the
opportunity to make personalised offers to the household dependent on their
degree of membership of each group. In addition, feedback can be provided on
how users' changing behaviour is moving them towards more "green" or cost
effective stereotypical usage. | computer science |
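
As a hedged sketch of the clustering step described above, the following implements standard fuzzy c-means on synthetic half-hourly load profiles; the membership matrix U gives each household's degree of membership of every group, which is the quantity that would drive personalised offers. The profiles, number of clusters, and fuzzifier value are illustrative assumptions.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    """Standard fuzzy c-means: returns cluster centres and the membership matrix U.

    U[i, k] is household i's degree of membership of cluster k (rows sum to 1).
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=1, keepdims=True)
    return centres, U

# toy half-hourly load profiles (48 readings) for 100 households
rng = np.random.default_rng(1)
profiles = rng.random((100, 48))
centres, memberships = fuzzy_c_means(profiles, c=3)
print(np.round(memberships[:3], 2))   # soft assignments that would drive personalised offers
```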
40,934 | Examining the Classification Accuracy of TSVMs with Feature Selection
in Comparison with the GLAD Algorithm | cs.LG | Gene expression data sets are used to classify and predict patient diagnostic
categories. As we know, it is extremely difficult and expensive to obtain gene
expression labelled examples. Moreover, conventional supervised approaches such
as Support Vector Machine (SVM) algorithms cannot function properly when
labelled data (training examples) are insufficient. Therefore, in this
paper, we suggest Transductive Support Vector Machines (TSVMs) as
semi-supervised learning algorithms, learning with both labelled samples data
and unlabelled samples to perform the classification of microarray data. To
prune the superfluous genes and samples we used a feature selection method
called Recursive Feature Elimination (RFE), which is supposed to enhance the
output of classification and avoid the local optimization problem. We examined
the classification prediction accuracy of the TSVM-RFE algorithm in comparison
with the Genetic Learning Across Datasets (GLAD) algorithm, as both are
semi-supervised learning methods. Comparing these two methods, we found that
the TSVM-RFE surpassed both an SVM using RFE and GLAD. | computer science |
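
The transductive SVM itself is not readily available in common libraries, but the gene-pruning step can be illustrated with scikit-learn's Recursive Feature Elimination wrapped around a linear SVM. This is a hedged sketch of the RFE stage only, not the paper's TSVM-RFE method, and the synthetic data merely stand in for microarray measurements.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# synthetic stand-in for microarray data: many features, few samples
X, y = make_classification(n_samples=60, n_features=500, n_informative=20,
                           random_state=0)

# Recursive Feature Elimination with a linear SVM, keeping 20 "genes"
selector = RFE(estimator=SVC(kernel="linear"), n_features_to_select=20, step=50)
selector.fit(X, y)
print("selected feature indices:", np.flatnonzero(selector.support_))
```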
40,935 | Quiet in Class: Classification, Noise and the Dendritic Cell Algorithm | cs.LG | Theoretical analyses of the Dendritic Cell Algorithm (DCA) have yielded
several criticisms about its underlying structure and operation. As a result,
several alterations and fixes have been suggested in the literature to correct
for these findings. A contribution of this work is to investigate the effects
of replacing the classification stage of the DCA (which is known to be flawed)
with a traditional machine learning technique. This work goes on to question
the merits of those unique properties of the DCA that are yet to be thoroughly
analysed. If none of these properties can be found to have a benefit over
traditional approaches, then "fixing" the DCA is arguably less efficient than
simply creating a new algorithm. This work examines the dynamic filtering
property of the DCA and questions the utility of this unique feature for the
anomaly detection problem. It is found that this feature, while advantageous
for noisy, time-ordered classification, is not as useful as a traditional
static filter for processing a synthetic dataset. It is concluded that there
are still unique features of the DCA left to investigate. Areas that may be of
benefit to the Artificial Immune Systems community are suggested. | computer science |
40,936 | Detect adverse drug reactions for drug Alendronate | cs.CE | Adverse drug reactions (ADRs) are a public health issue of wide concern. In
this study we propose an original approach to detect the ADRs using feature
matrix and feature selection. The experiments are carried out on the drug
Simvastatin. Major side effects for the drug are detected and better
performance is achieved compared to other computerized methods. The detected
ADRs are based on the computerized method; further investigation is needed. | computer science |
40,937 | Biomarker Clustering of Colorectal Cancer Data to Complement Clinical
Classification | cs.LG | In this paper, we describe a dataset relating to cellular and physical
conditions of patients who are operated upon to remove colorectal tumours. This
data provides a unique insight into immunological status at the point of tumour
removal, tumour classification and post-operative survival. Attempts are made
to cluster this dataset and important subsets of it in an effort to
characterize the data and validate existing standards for tumour
classification. It is apparent from optimal clustering that existing tumour
classification is largely unrelated to immunological factors within a patient
and that there may be scope for re-evaluating treatment options and survival
estimates based on a combination of tumour physiology and patient
histochemistry. | computer science |
40,938 | Approximate dynamic programming using fluid and diffusion approximations
with applications to power management | cs.LG | Neuro-dynamic programming is a class of powerful techniques for approximating
the solution to dynamic programming equations. In their most computationally
attractive formulations, these techniques provide the approximate solution only
within a prescribed finite-dimensional function class. Thus, the question that
always arises is how should the function class be chosen? The goal of this
paper is to propose an approach using the solutions to associated fluid and
diffusion approximations. In order to illustrate this approach, the paper
focuses on an application to dynamic speed scaling for power management in
computer processors. | computer science |
40,939 | Using Clustering to extract Personality Information from socio economic
data | cs.LG | It has become apparent that models that have been applied widely in
economics, including Machine Learning techniques and Data Mining methods,
should take into consideration principles that derive from the theories of
Personality Psychology in order to discover more comprehensive knowledge
regarding complicated economic behaviours. In this work, we present a method to
extract Behavioural Groups by using simple clustering techniques that can
potentially reveal aspects of the Personalities for their members. We believe
that this is very important because the psychological information regarding the
Personalities of individuals is limited in real world applications and because
it can become a useful tool in improving the traditional models of Knowledge
Economy. | computer science |
40,940 | Finding the creatures of habit; Clustering households based on their
flexibility in using electricity | cs.LG | Changes in the UK electricity market, particularly with the roll out of smart
meters, will provide greatly increased opportunities for initiatives intended
to change households' electricity usage patterns for the benefit of the overall
system. Users show differences in their regular behaviours and clustering
households into similar groupings based on this variability provides for
efficient targeting of initiatives. Those people who are stuck in a regular
pattern of activity may be the least receptive to an initiative to change
behaviour. A sample of 180 households from the UK are clustered into four
groups as an initial test of the concept and useful, actionable groupings are
found. | computer science |
40,941 | Unsupervised Gene Expression Data using Enhanced Clustering Method | cs.CE | Microarrays have made it possible to simultaneously monitor the expression
profiles of thousands of genes under various experimental conditions.
Identification of co-expressed genes and coherent patterns is the central goal
in microarray or gene expression data analysis and is an important task in
bioinformatics research. Feature selection is a process to select features
which are more informative. It is one of the important steps in knowledge
discovery. The problem is that not all features are important. Some of the
features may be redundant, and others may be irrelevant and noisy. In this work
the unsupervised Gene selection method and Enhanced Center Initialization
Algorithm (ECIA) with K-Means algorithms have been applied for clustering of
Gene Expression Data. This proposed clustering algorithm overcomes the
drawbacks in terms of specifying the optimal number of clusters and
initialization of good cluster centroids. Experiments on gene expression data
show that the method identifies compact clusters and performs well in terms of
the Silhouette Coefficient cluster measure. | computer science |
40,942 | Performance Analysis of Clustering Algorithms for Gene Expression Data | cs.CE | Microarray technology is a process that allows thousands of genes
to be monitored simultaneously under various experimental conditions. It is used to
identify the co-expressed genes in specific cells or tissues that are actively
used to make proteins. This method is used to analyse gene expression, an
important task in bioinformatics research. Cluster analysis of gene expression
data has proved to be a useful tool for identifying co-expressed genes,
biologically relevant groupings of genes and samples. In this paper we analysed
K-Means with Automatic Generations of Merge Factor for ISODATA- AGMFI, to group
the microarray data sets on the basic of ISODATA. AGMFI is to generate initial
values for merge and Spilt factor, maximum merge times instead of selecting
efficient values as in ISODATA. The initial seeds for each cluster were
normally chosen either sequentially or randomly. The quality of the final
clusters was found to be influenced by these initial seeds. For the real life
problems, the suitable number of clusters cannot be predicted. To overcome the
above drawback the current research focused on developing the clustering
algorithms without giving the initial number of clusters. | computer science |
40,943 | A Data Management Approach for Dataset Selection Using Human Computation | cs.LG | As the number of applications that use machine learning algorithms increases,
the need for labeled data useful for training such algorithms intensifies.
Getting labels typically involves employing humans to do the annotation,
which directly translates to training and working costs. Crowdsourcing
platforms have made labeling cheaper and faster, but they still involve
significant costs, especially for the cases where the potential set of
candidate data to be labeled is large. In this paper we describe a methodology
and a prototype system aiming at addressing this challenge for Web-scale
problems in an industrial setting. We discuss ideas on how to efficiently
select the data to use for training of machine learning algorithms in an
attempt to reduce cost. We show results achieving good performance with reduced
cost by carefully selecting which instances to label. Our proposed algorithm is
presented as part of a framework for managing and generating training datasets,
which includes, among other components, a human computation element. | computer science |
40,944 | On Analyzing Estimation Errors due to Constrained Connections in Online
Review Systems | cs.SI | Constrained connection is the phenomenon that a reviewer can only review a
subset of products/services due to narrow range of interests or limited
attention capacity. In this work, we study how constrained connections can
affect estimation performance in online review systems (ORS). We find that
reviewers' constrained connections will cause poor estimation performance, as
measured both by estimation accuracy and by the Bayesian Cramér-Rao lower
bound. | computer science |
40,945 | A Comprehensive Evaluation of Machine Learning Techniques for Cancer
Class Prediction Based on Microarray Data | cs.LG | Prostate cancer is among the most common cancer in males and its
heterogeneity is well known. Its early detection helps making therapeutic
decision. There is no standard technique or procedure yet which is full-proof
in predicting cancer class. The genomic level changes can be detected in gene
expression data and those changes may serve as standard model for any random
cancer data for class prediction. Various techniques were implied on prostate
cancer data set in order to accurately predict cancer class including machine
learning techniques. Huge number of attributes and few number of sample in
microarray data leads to poor machine learning, therefore the most challenging
part is attribute reduction or non significant gene reduction. In this work we
have compared several machine learning techniques for their accuracy in
predicting the cancer class. Machine learning is effective when the number of
attributes (genes) is larger than the number of samples, which is rarely
possible with gene expression data. Attribute reduction or gene filtering is
absolutely required in order to make the data more meaningful as most of the
genes do not participate in tumor development and are irrelevant for cancer
prediction. Here we have applied combination of statistical techniques such as
inter-quartile range and t-test, which has been effective in filtering
significant genes and minimizing noise from data. Further we have done a
comprehensive evaluation of ten state-of-the-art machine learning techniques
for their accuracy in class prediction of prostate cancer. Out of these
techniques, Bayes Network performed best with an accuracy of 94.11%, followed by
Naive Bayes with an accuracy of 91.17%. To cross-validate our results, we
modified our training dataset in six different ways and found that the average
sensitivity, specificity, precision and accuracy of Bayes Network is highest
among all other techniques used. | computer science |
40,946 | MixedGrad: An O(1/T) Convergence Rate Algorithm for Stochastic Smooth
Optimization | cs.LG | It is well known that the optimal convergence rate for stochastic
optimization of smooth functions is $O(1/\sqrt{T})$, which is the same as
stochastic optimization of Lipschitz continuous convex functions. This is in
contrast to optimizing smooth functions using full gradients, which yields a
convergence rate of $O(1/T^2)$. In this work, we consider a new setup for
optimizing smooth functions, termed {\bf Mixed Optimization}, which allows
access to both a stochastic oracle and a full gradient oracle. Our goal is to
significantly improve the convergence rate of stochastic optimization of smooth
functions by having an additional small number of accesses to the full gradient
oracle. We show that, with $O(\ln T)$ calls to the full gradient oracle and
$O(T)$ calls to the stochastic oracle, the proposed mixed optimization
algorithm is able to achieve an optimization error of $O(1/T)$. | computer science |
40,947 | A Review of Machine Learning based Anomaly Detection Techniques | cs.LG | Intrusion detection has been popular for the last two decades; an intrusion is an attempt to break into or misuse a system. Detection is mainly of two types: misuse or signature based detection, and anomaly detection. In this paper, machine learning based methods, which constitute one type of anomaly detection technique, are discussed. | computer science |
40,948 | Participation anticipating in elections using data mining methods | cs.CY | Anticipating the political behavior of people will be of considerable help for election candidates in assessing their chances of success and in understanding the public's motivations for selecting them. In this paper, we provide a general schematic of the architecture of a participation-anticipating system for presidential elections using KNN, Classification Tree and Na\"ive Bayes with the Orange tool, based on CRISP, which produced promising output. To test and assess the proposed model, we carry out a case study by selecting 100 qualified persons who took part in the 11th presidential election of the Islamic Republic of Iran and anticipate their participation in Kohkiloye & Boyerahmad. We show that KNN can perform the anticipation and classification processes with high accuracy compared with the two other algorithms for anticipating participation. | computer science |
40,949 | Data mining application for cyber space users tendency in blog writing:
a case study | cs.CY | Blogs are a recently emerging medium which relies on information technology and technological advances. Since the mass media in some less-developed and developing countries are in government service and their policies are developed based on governmental interests, blogs provide a space for ideas and exchanging opinions. In this paper, we highlight simulations performed on information obtained from 100 users and bloggers in Kohkiloye and Boyer Ahmad Province, using the Weka 3.6 tool and the C4.5 decision tree algorithm, achieving more than 82% precision in anticipating users' future tendency toward blogging, for use in strategic areas. | computer science |
40,950 | A Study on Classification in Imbalanced and Partially-Labelled Data
Streams | cs.LG | The domain of radio astronomy is currently facing significant computational
challenges, foremost amongst which are those posed by the development of the
world's largest radio telescope, the Square Kilometre Array (SKA). Preliminary
specifications for this instrument suggest that the final design will
incorporate between 2000 and 3000 individual 15 metre receiving dishes, which
together can be expected to produce a data rate of many TB/s. Given such a high
data rate, it becomes crucial to consider how this information will be
processed and stored to maximise its scientific utility. In this paper, we
consider one possible data processing scenario for the SKA, for the purposes of
an all-sky pulsar survey. In particular we treat the selection of promising
signals from the SKA processing pipeline as a data stream classification
problem. We consider the feasibility of classifying signals that arrive via an
unlabelled and heavily class imbalanced data stream, using currently available
algorithms and frameworks. Our results indicate that existing stream learners
exhibit unacceptably low recall on real astronomical data when used in standard
configuration; however, good false positive performance and accuracy comparable
to that of static learners suggest that they have definite potential as an on-line
solution to this particular big data challenge. | computer science |
40,951 | Multiple Kernel Learning in the Primal for Multi-modal Alzheimer's
Disease Classification | cs.LG | To achieve effective and efficient detection of Alzheimer's disease (AD),
many machine learning methods have been introduced into this realm. However,
the general case of limited training samples, as well as different feature
representations typically makes this problem challenging. In this work, we
propose a novel multiple kernel learning framework to combine multi-modal
features for AD classification, which is scalable and easy to implement.
Contrary to the usual way of solving the problem in the dual space, we look at
the optimization from a new perspective. By conducting Fourier transform on the
Gaussian kernel, we explicitly compute the mapping function, which leads to a
more straightforward solution of the problem in the primal space. Furthermore,
we impose the mixed $L_{21}$ norm constraint on the kernel weights, known as
the group lasso regularization, to enforce group sparsity among different
feature modalities. This actually acts as a role of feature modality selection,
while at the same time exploiting complementary information among different
kernels. Therefore it is able to extract the most discriminative features for
classification. Experiments on the ADNI data set demonstrate the effectiveness
of the proposed method. | computer science |
40,952 | MINT: Mutual Information based Transductive Feature Selection for
Genetic Trait Prediction | cs.LG | Whole genome prediction of complex phenotypic traits using high-density
genotyping arrays has attracted a great deal of attention, as it is relevant to
the fields of plant and animal breeding and genetic epidemiology. As the number
of genotypes is generally much bigger than the number of samples, predictive
models suffer from the curse-of-dimensionality. The curse-of-dimensionality
problem not only affects the computational efficiency of a particular genomic
selection method, but can also lead to poor performance, mainly due to
correlation among markers. In this work we proposed the first transductive
feature selection method based on the MRMR (Max-Relevance and Min-Redundancy)
criterion which we call MINT. We applied MINT on genetic trait prediction
problems and showed that in general MINT is a better feature selection method
than the state-of-the-art inductive method mRMR. | computer science |
40,953 | Predicting Students' Performance Using ID3 And C4.5 Classification
Algorithms | cs.CY | An educational institution needs to have an approximate prior knowledge of
enrolled students to predict their performance in future academics. This helps
them to identify promising students and also provides them an opportunity to
pay attention to and improve those who would probably get lower grades. As a
solution, we have developed a system which can predict the performance of
students from their previous performances using concepts of data mining
techniques under Classification. We have analyzed the data set containing
information about students, such as gender, marks scored in the board
examinations of classes X and XII, marks and rank in entrance examinations and
results in first year of the previous batch of students. By applying the ID3
(Iterative Dichotomiser 3) and C4.5 classification algorithms on this data, we
have predicted the general and individual performance of freshly admitted
students in future examinations. | computer science |
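
A hedged sketch of the classification step: scikit-learn's DecisionTreeClassifier with the entropy criterion serves as a stand-in for ID3/C4.5 (scikit-learn implements an optimised CART, not C4.5 itself), and the student records below are entirely hypothetical.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# hypothetical student records; columns stand in for the attributes described above
data = pd.DataFrame({
    "gender":        [0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0],
    "board_x_pct":   [78, 55, 91, 62, 85, 47, 73, 88, 59, 95, 66, 51],
    "board_xii_pct": [74, 58, 89, 65, 82, 50, 70, 90, 61, 93, 64, 49],
    "entrance_rank": [120, 900, 40, 700, 150, 1200, 300, 60, 800, 25, 650, 1100],
    "first_year_result": ["pass", "fail", "pass", "fail", "pass", "fail",
                          "pass", "pass", "fail", "pass", "fail", "fail"],
})
X = data.drop(columns="first_year_result")
y = data["first_year_result"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# entropy-based splits, in the spirit of ID3/C4.5 (scikit-learn uses CART internally)
tree = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X_train, y_train)
print(tree.predict(X_test), tree.score(X_test, y_test))
```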
40,954 | Bandits with Switching Costs: T^{2/3} Regret | cs.LG | We study the adversarial multi-armed bandit problem in a setting where the
player incurs a unit cost each time he switches actions. We prove that the
player's $T$-round minimax regret in this setting is
$\widetilde{\Theta}(T^{2/3})$, thereby closing a fundamental gap in our
understanding of learning with bandit feedback. In the corresponding
full-information version of the problem, the minimax regret is known to grow at
a much slower rate of $\Theta(\sqrt{T})$. The difference between these two
rates provides the \emph{first} indication that learning with bandit feedback
can be significantly harder than learning with full-information feedback
(previous results only showed a different dependence on the number of actions,
but not on $T$.)
In addition to characterizing the inherent difficulty of the multi-armed
bandit problem with switching costs, our results also resolve several other
open problems in online learning. One direct implication is that learning with
bandit feedback against bounded-memory adaptive adversaries has a minimax
regret of $\widetilde{\Theta}(T^{2/3})$. Another implication is that the
minimax regret of online learning in adversarial Markov decision processes
(MDPs) is $\widetilde{\Theta}(T^{2/3})$. The key to all of our results is a new
randomized construction of a multi-scale random walk, which is of independent
interest and likely to prove useful in additional settings. | computer science |
40,955 | Joint Indoor Localization and Radio Map Construction with Limited
Deployment Load | cs.NI | One major bottleneck in the practical implementation of received signal
strength (RSS) based indoor localization systems is the extensive deployment
efforts required to construct the radio maps through fingerprinting. In this
paper, we aim to design an indoor localization scheme that can be directly
employed without building a full fingerprinted radio map of the indoor
environment. By accumulating the information of localized RSSs, this scheme can
also simultaneously construct the radio map with limited calibration. To design
this scheme, we employ a source data set that possesses the same spatial
correlation of the RSSs in the indoor environment under study. The knowledge of
this data set is then transferred to a limited number of calibration
fingerprints and one or several RSS observations with unknown locations, in
order to perform direct localization of these observations using manifold
alignment. We test two different source data sets, namely a simulated radio
propagation map and the environment's plan coordinates. For moving users, we
exploit the correlation of their observations to improve the localization
accuracy. The online testing in two indoor environments shows that the plan
coordinates achieve better results than the simulated radio maps, and a
negligible degradation with 70-85% reduction in calibration load. | computer science |
40,956 | An Extreme Learning Machine Approach to Predicting Near Chaotic HCCI
Combustion Phasing in Real-Time | cs.LG | Fuel efficient Homogeneous Charge Compression Ignition (HCCI) engine
combustion timing predictions must contend with non-linear chemistry,
non-linear physics, period doubling bifurcation(s), turbulent mixing, model
parameters that can drift day-to-day, and air-fuel mixture state information
that cannot typically be resolved on a cycle-to-cycle basis, especially during
transients. In previous work, an abstract cycle-to-cycle mapping function
coupled with $\epsilon$-Support Vector Regression was shown to predict
experimentally observed cycle-to-cycle combustion timing over a wide range of
engine conditions, despite some of the aforementioned difficulties. The main
limitation of the previous approach was that a partially acausal randomly
sampled training dataset was used to train proof of concept offline
predictions. The objective of this paper is to address this limitation by
proposing a new online adaptive Extreme Learning Machine (ELM) extension named
Weighted Ring-ELM. This extension enables fully causal combustion timing
predictions at randomly chosen engine set points, and is shown to achieve
results that are as good as or better than the previous offline method. The
broader objective of this approach is to enable a new class of real-time model
predictive control strategies for high variability HCCI and, ultimately, to
bring HCCI's low engine-out NOx and reduced CO2 emissions to production
engines. | computer science |
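
As a hedged illustration of the base learner named above, the following is a basic offline Extreme Learning Machine: a random hidden layer followed by a least-squares readout. The online Weighted Ring-ELM extension from the abstract is not shown, and the regression data are synthetic stand-ins for combustion-timing measurements.

```python
import numpy as np

def elm_fit(X, y, hidden=100, seed=0):
    """Basic Extreme Learning Machine: random hidden layer + least-squares readout."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], hidden))     # fixed random input weights
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                        # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights by least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# synthetic stand-in for cycle-to-cycle combustion-timing data
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.normal(size=500)
W, b, beta = elm_fit(X[:400], y[:400])
pred = elm_predict(X[400:], W, b, beta)
print("test RMSE:", round(float(np.sqrt(np.mean((pred - y[400:]) ** 2))), 3))
```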
40,957 | Predicting college basketball match outcomes using machine learning
techniques: some results and lessons learned | cs.LG | Most existing work on predicting NCAAB matches has been developed in a
statistical context. Trusting the capabilities of ML techniques, particularly
classification learners, to uncover the importance of features and learn their
relationships, we evaluated a number of different paradigms on this task. In
this paper, we summarize our work, pointing out that attributes seem to be more
important than models, and that there seems to be an upper limit to predictive
quality. | computer science |
40,958 | Exact Learning of RNA Energy Parameters From Structure | cs.LG | We consider the problem of exact learning of parameters of a linear RNA
energy model from secondary structure data. A necessary and sufficient
condition for learnability of parameters is derived, which is based on
computing the convex hull of union of translated Newton polytopes of input
sequences. The set of learned energy parameters is characterized as the convex
cone generated by the normal vectors to those facets of the resulting polytope
that are incident to the origin. In practice, the sufficient condition may not
be satisfied by the entire training data set; hence, computing a maximal subset
of training data for which the sufficient condition is satisfied is often
desired. We show that this problem is NP-hard in general for an arbitrary
dimensional feature space. Using a randomized greedy algorithm, we select a
subset of the RNA STRAND v2.0 database that satisfies the sufficient condition
for the separate A-U, C-G, G-U base pair counting model. The set of learned energy
parameters includes experimentally measured energies of A-U, C-G, and G-U
pairs; hence, our parameter set is in agreement with the Turner parameters. | computer science |
40,959 | On Measure Concentration of Random Maximum A-Posteriori Perturbations | cs.LG | The maximum a-posteriori (MAP) perturbation framework has emerged as a useful
approach for inference and learning in high dimensional complex models. By
maximizing a randomly perturbed potential function, MAP perturbations generate
unbiased samples from the Gibbs distribution. Unfortunately, the computational
cost of generating so many high-dimensional random variables can be
prohibitive. More efficient algorithms use sequential sampling strategies based
on the expected value of low dimensional MAP perturbations. This paper develops
new measure concentration inequalities that bound the number of samples needed
to estimate such expected values. Applying the general result to MAP
perturbations can yield a more efficient algorithm to approximate sampling from
the Gibbs distribution. The measure concentration result is of general interest
and may be applicable to other areas involving expected estimations. | computer science |
40,960 | The BeiHang Keystroke Dynamics Authentication System | cs.CR | Keystroke Dynamics is an important biometric solution for person
authentication. Based upon keystroke dynamics, this paper designs an embedded
password protection device, develops an online system, collects two public
databases for promoting the research on keystroke authentication, exploits the
Gabor filter bank to characterize keystroke dynamics, and provides benchmark
results of three popular classification algorithms, one-class support vector
machine, Gaussian classifier, and nearest neighbour classifier. | computer science |
40,961 | Multiple Attractor Cellular Automata (MACA) for Addressing Major
Problems in Bioinformatics | cs.CE | Cellular Automata (CA) have grown into a potential classifier for addressing major problems in
bioinformatics. Many bioinformatics problems, such as predicting the protein
coding region, finding the promoter region, predicting protein structure
and many others, can be addressed through Cellular
Automata. Even though there are some prediction techniques addressing these
problems, the accuracy level achieved is very low. An automated procedure
was proposed with MACA (Multiple Attractor Cellular Automata) which can address
all these problems. The genetic algorithm is also used to find rules with good
fitness values. Extensive experiments are conducted for reporting the accuracy
of the proposed tool. The average accuracy of MACA when tested with ENCODE,
BG570, HMR195, Fickett and Tongue, ASP67 datasets is 78%. | computer science |
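
For context on the underlying machinery, the sketch below evolves a one-dimensional binary cellular automaton under a Wolfram rule number. It illustrates only the generic CA update that MACA builds on, not the multiple-attractor classifier or its genetic-algorithm rule search; the rule and grid size are arbitrary choices for the example.

```python
import numpy as np

def evolve_ca(state, rule=90, steps=20):
    """Evolve a one-dimensional binary cellular automaton under a Wolfram rule number."""
    rule_bits = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    history = [state.copy()]
    for _ in range(steps):
        left = np.roll(state, 1)
        right = np.roll(state, -1)
        idx = (left << 2) | (state << 1) | right   # neighbourhood pattern 0..7
        state = rule_bits[idx]
        history.append(state.copy())
    return np.array(history)

# single seed cell; rule 90 produces the familiar Sierpinski pattern
init = np.zeros(31, dtype=np.uint8)
init[15] = 1
grid = evolve_ca(init, rule=90, steps=15)
print("\n".join("".join("#" if c else "." for c in row) for row in grid))
```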
40,962 | Reinforcement Learning Framework for Opportunistic Routing in WSNs | cs.NI | Routing packets opportunistically is an essential part of multihop ad hoc
wireless sensor networks. The existing routing techniques are not adaptively
opportunistic. In this paper we have proposed an adaptive opportunistic routing
scheme that routes packets opportunistically in order to ensure that packet
loss is avoided. Learning and routing are combined in the framework that
explores the optimal routing possibilities. In this paper we implemented this
reinforcement learning framework using a custom simulator. The experimental
results revealed that the scheme is able to exploit opportunism to
optimize the routing of packets even though the network structure is unknown. | computer science |
40,963 | A Parallel SGD method with Strong Convergence | cs.LG | This paper proposes a novel parallel stochastic gradient descent (SGD) method
that is obtained by applying parallel sets of SGD iterations (each set
operating on one node using the data residing in it) for finding the direction
in each iteration of a batch descent method. The method has strong convergence
properties. Experiments on datasets with high dimensional feature spaces show
the value of this method. | computer science |
40,964 | Optimization, Learning, and Games with Predictable Sequences | cs.LG | We provide several applications of Optimistic Mirror Descent, an online
learning algorithm based on the idea of predictable sequences. First, we
recover the Mirror Prox algorithm for offline optimization, prove an extension
to Hölder-smooth functions, and apply the results to saddle-point type
problems. Next, we prove that a version of Optimistic Mirror Descent (which has
a close relation to the Exponential Weights algorithm) can be used by two
strongly-uncoupled players in a finite zero-sum matrix game to converge to the
minimax equilibrium at the rate of O((log T)/T). This addresses a question of
Daskalakis et al 2011. Further, we consider a partial information version of
the problem. We then apply the results to convex programming and exhibit a
simple algorithm for the approximate Max Flow problem. | computer science |
40,965 | From average case complexity to improper learning complexity | cs.LG | The basic problem in the PAC model of computational learning theory is to
determine which hypothesis classes are efficiently learnable. There is
presently a dearth of results showing hardness of learning problems. Moreover,
the existing lower bounds fall short of the best known algorithms.
The biggest challenge in proving complexity results is to establish hardness
of {\em improper learning} (a.k.a. representation independent learning). The
difficulty in proving lower bounds for improper learning is that the standard
reductions from $\mathbf{NP}$-hard problems do not seem to apply in this
context. There is essentially only one known approach to proving lower bounds
on improper learning. It was initiated in (Kearns and Valiant 89) and relies on
cryptographic assumptions.
We introduce a new technique for proving hardness of improper learning, based
on reductions from problems that are hard on average. We put forward a (fairly
strong) generalization of Feige's assumption (Feige 02) about the complexity of
refuting random constraint satisfaction problems. Combining this assumption
with our new technique yields far reaching implications. In particular,
1. Learning $\mathrm{DNF}$'s is hard.
2. Agnostically learning halfspaces with a constant approximation ratio is
hard.
3. Learning an intersection of $\omega(1)$ halfspaces is hard. | computer science |
40,966 | The Noisy Power Method: A Meta Algorithm with Applications | cs.DS | We provide a new robust convergence analysis of the well-known power method
for computing the dominant singular vectors of a matrix that we call the noisy
power method. Our result characterizes the convergence behavior of the
algorithm when a significant amount of noise is introduced after each
matrix-vector multiplication. The noisy power method can be seen as a
meta-algorithm that has recently found a number of important applications in a
broad range of machine learning problems including alternating minimization for
matrix completion, streaming principal component analysis (PCA), and
privacy-preserving spectral analysis. Our general analysis subsumes several
existing ad-hoc convergence bounds and resolves a number of open problems in
multiple applications including streaming PCA and privacy-preserving singular
vector computation. | computer science |
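A minimal sketch of the noisy power method as described above: each iteration applies the matrix, adds a perturbation, and renormalises. The noise level and the spectral gap below are assumptions chosen only to make the example run.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 200
U, _ = np.linalg.qr(rng.normal(size=(d, d)))
eigvals = np.sort(rng.uniform(0, 1, size=d))[::-1]
eigvals[0] = 2.0                                 # create a clear spectral gap
A = (U * eigvals) @ U.T                          # symmetric PSD test matrix

x = rng.normal(size=d)
x /= np.linalg.norm(x)
for t in range(50):
    g = rng.normal(scale=1e-3, size=d)           # noise injected after each product
    x = A @ x + g
    x /= np.linalg.norm(x)

print("alignment with the true top eigenvector:", abs(x @ U[:, 0]))
```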
40,967 | Spectral Clustering via the Power Method -- Provably | cs.LG | Spectral clustering is one of the most important algorithms in data mining
and machine intelligence; however, its computational complexity limits its
application to truly large scale data analysis. The computational bottleneck in
spectral clustering is computing a few of the top eigenvectors of the
(normalized) Laplacian matrix corresponding to the graph representing the data
to be clustered. One way to speed up the computation of these eigenvectors is
to use the "power method" from the numerical linear algebra literature.
Although the power method has been empirically used to speed up spectral
clustering, the theory behind this approach, to the best of our knowledge,
remains unexplored. This paper provides the \emph{first} such rigorous
theoretical justification, arguing that a small number of power iterations
suffices to obtain near-optimal partitionings using the approximate
eigenvectors. Specifically, we prove that solving the $k$-means clustering
problem on the approximate eigenvectors obtained via the power method gives an
additive-error approximation to solving the $k$-means problem on the optimal
eigenvectors. | computer science |
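The sketch below illustrates the pipeline discussed above under assumed graph-construction and iteration-count choices: a few block power iterations approximate the top-k eigenvectors of the normalised similarity matrix, and k-means is run on the approximate eigenvectors.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
k, per = 3, 60
centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
X = np.vstack([c + rng.normal(size=(per, 2)) for c in centers])

# Gaussian similarity graph and its symmetric normalisation D^{-1/2} W D^{-1/2}.
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-sq / 2.0)
d = W.sum(axis=1)
S = W / np.sqrt(np.outer(d, d))

# Block power iterations with re-orthonormalisation (orthogonal iteration)
# approximate the top-k eigenvectors of S.
V = rng.normal(size=(len(X), k))
for _ in range(10):
    V, _ = np.linalg.qr(S @ V)

labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(V)
print("cluster sizes:", np.bincount(labels))
```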
40,968 | Scalable Influence Estimation in Continuous-Time Diffusion Networks | cs.SI | If a piece of information is released from a media site, can it spread, in 1
month, to a million web pages? This influence estimation problem is very
challenging since both the time-sensitive nature of the problem and the issue
of scalability need to be addressed simultaneously. In this paper, we propose a
randomized algorithm for influence estimation in continuous-time diffusion
networks. Our algorithm can estimate the influence of every node in a network
with |V| nodes and |E| edges to an accuracy of $\varepsilon$ using
$n=O(1/\varepsilon^2)$ randomizations and, up to logarithmic factors,
O(n|E|+n|V|) computations. When used as a subroutine in a greedy influence
maximization algorithm, our proposed method is guaranteed to find a set of
nodes with an influence of at least (1-1/e)OPT-2$\varepsilon$, where OPT is the
optimal value. Experiments on both synthetic and real-world data show that the
proposed method can easily scale up to networks of millions of nodes while
significantly improving over previous state-of-the-art methods in terms of the accuracy
of the estimated influence and the quality of the selected nodes in maximizing
the influence. | computer science |
40,969 | Extended Formulations for Online Linear Bandit Optimization | cs.LG | On-line linear optimization on combinatorial action sets (d-dimensional
actions) with bandit feedback, is known to have complexity in the order of the
dimension of the problem. The exponential weighted strategy achieves the best
known regret bound that is of the order of $d^{2}\sqrt{n}$ (where $d$ is the
dimension of the problem, $n$ is the time horizon). However, such strategies
are provably suboptimal or computationally inefficient. The complexity is
attributed to the combinatorial structure of the action set and the dearth of
efficient exploration strategies of the set. Mirror descent with entropic
regularization function comes close to solving this problem by enforcing a
meticulous projection of weights with an inherent boundary condition. Entropic
regularization in mirror descent is the only known way of achieving a
logarithmic dependence on the dimension. Here, we argue otherwise and recover
the original intuition of exponential weighting by borrowing a technique from
discrete optimization and approximation algorithms called `extended
formulation'. Such formulations appeal to the underlying geometry of the set
with a guaranteed logarithmic dependence on the dimension underpinned by an
information theoretic entropic analysis. | computer science |
40,970 | Recommending with an Agenda: Active Learning of Private Attributes using
Matrix Factorization | cs.LG | Recommender systems leverage user demographic information, such as age,
gender, etc., to personalize recommendations and better place their targeted
ads. Oftentimes, users do not volunteer this information due to privacy
concerns, or due to a lack of initiative in filling out their online profiles.
We illustrate a new threat in which a recommender learns private attributes of
users who do not voluntarily disclose them. We design both passive and active
attacks that solicit ratings for strategically selected items, and could thus
be used by a recommender system to pursue this hidden agenda. Our methods are
based on a novel usage of Bayesian matrix factorization in an active learning
setting. Evaluations on multiple datasets illustrate that such attacks are
indeed feasible and use significantly fewer rated items than static inference
methods. Importantly, they succeed without sacrificing the quality of
recommendations to users. | computer science |
40,971 | Learning Prices for Repeated Auctions with Strategic Buyers | cs.LG | Inspired by real-time ad exchanges for online display advertising, we
consider the problem of inferring a buyer's value distribution for a good when
the buyer is repeatedly interacting with a seller through a posted-price
mechanism. We model the buyer as a strategic agent, whose goal is to maximize
her long-term surplus, and we are interested in mechanisms that maximize the
seller's long-term revenue. We define the natural notion of strategic regret
--- the lost revenue as measured against a truthful (non-strategic) buyer. We
present seller algorithms that are no-(strategic)-regret when the buyer
discounts her future surplus --- i.e. the buyer prefers showing advertisements
to users sooner rather than later. We also give a lower bound on strategic
regret that increases as the buyer's discounting weakens and shows, in
particular, that any seller algorithm will suffer linear strategic regret if
there is no discounting. | computer science |
40,972 | Analysis of Distributed Stochastic Dual Coordinate Ascent | cs.DC | In \citep{Yangnips13}, the author presented distributed stochastic dual
coordinate ascent (DisDCA) algorithms for solving large-scale regularized loss
minimization. Extraordinary performance has been observed and reported for
the well-motivated updates, referred to as the practical updates, compared to
the naive updates. However, no serious analysis has been provided to understand
the updates and therefore the convergence rates. In the paper, we bridge the
gap by providing a theoretical analysis of the convergence rates of the
practical DisDCA algorithm. Our analysis, aided by empirical studies, shows
that it could yield an exponential speed-up in the convergence by increasing
the number of dual updates at each iteration. This result justifies the
superior performances of the practical DisDCA as compared to the naive variant.
As a byproduct, our analysis also reveals the convergence behavior of the
one-communication DisDCA. | computer science |
40,973 | Bandits and Experts in Metric Spaces | cs.DS | In a multi-armed bandit problem, an online algorithm chooses from a set of
strategies in a sequence of trials so as to maximize the total payoff of the
chosen strategies. While the performance of bandit algorithms with a small
finite strategy set is quite well understood, bandit problems with large
strategy sets are still a topic of very active investigation, motivated by
practical applications such as online auctions and web advertisement. The goal
of such research is to identify broad and natural classes of strategy sets and
payoff functions which enable the design of efficient solutions.
In this work we study a very general setting for the multi-armed bandit
problem in which the strategies form a metric space, and the payoff function
satisfies a Lipschitz condition with respect to the metric. We refer to this
problem as the "Lipschitz MAB problem". We present a solution for the
multi-armed bandit problem in this setting. That is, for every metric space we
define an isometry invariant which bounds from below the performance of
Lipschitz MAB algorithms for this metric space, and we present an algorithm
which comes arbitrarily close to meeting this bound. Furthermore, our technique
gives even better results for benign payoff functions. We also address the
full-feedback ("best expert") version of the problem, where after every round
the payoffs from all arms are revealed. | computer science |
40,974 | A MapReduce based distributed SVM algorithm for binary classification | cs.LG | Although the Support Vector Machine (SVM) algorithm has a high generalization
ability for unseen examples after the training phase and a small loss value,
the algorithm is not suitable for real-life classification and regression
problems: SVMs cannot handle training datasets with hundreds of thousands of
examples. In previous studies on distributed machine learning
algorithms, SVM is trained over a costly and preconfigured computer
environment. In this research, we present a MapReduce based distributed
parallel SVM training algorithm for binary classification problems. This work
shows how to distribute optimization problem over cloud computing systems with
MapReduce technique. In the second step of this work, we used statistical
learning theory to find the predictive hypothesis that minimize our empirical
risks from hypothesis spaces that created with reduce function of MapReduce.
The results of this research are important for training of big datasets for SVM
algorithm based classification problems. We prove that, with iterative training
of the split dataset using the MapReduce technique, the accuracy of the
classifier function converges to the accuracy of the globally optimal classifier
function in a finite number of iterations. The algorithm performance was measured on samples from letter
recognition and pen-based recognition of handwritten digits dataset. | computer science |
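The exact MapReduce formulation is in the paper; as a hedged stand-in, the sketch below mimics the iterative split-train-merge pattern: a "map" step trains a linear SVM on each split (augmented with the current pool of support vectors), a "reduce" step merges the resulting support vectors, and the loop repeats. Split and iteration counts are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=6000, n_features=20, random_state=0)
splits = np.array_split(np.random.default_rng(0).permutation(len(X)), 8)

pool_X = np.empty((0, X.shape[1]))
pool_y = np.empty(0, dtype=int)
for it in range(3):
    sv_X, sv_y = [], []
    for idx in splits:                                   # "map" phase on each split
        Xi = np.vstack([X[idx], pool_X])
        yi = np.concatenate([y[idx], pool_y])
        clf = SVC(kernel="linear", C=1.0).fit(Xi, yi)
        sv_X.append(Xi[clf.support_])                    # keep only the support vectors
        sv_y.append(yi[clf.support_])
    pool_X, pool_y = np.vstack(sv_X), np.concatenate(sv_y)   # "reduce" phase

final = SVC(kernel="linear", C=1.0).fit(pool_X, pool_y)
print("accuracy on the full dataset:", final.score(X, y))
```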
40,975 | Distributed k-means algorithm | cs.LG | In this paper we provide a fully distributed implementation of the k-means
clustering algorithm, intended for wireless sensor networks where each agent is
endowed with a possibly high-dimensional observation (e.g., position, humidity,
temperature, etc.) The proposed algorithm, by means of one-hop communication,
partitions the agents into measure-dependent groups that have small in-group
and large out-group "distances". Since the partitions may not have a relation
with the topology of the network--members of the same clusters may not be
spatially close--the algorithm is provided with a mechanism to compute the
clusters' centroids even when the clusters are disconnected in several
sub-clusters. The results of the proposed distributed algorithm coincide, in
terms of minimization of the objective function, with the centralized k-means
algorithm. Some numerical examples illustrate the capabilities of the proposed
solution. | computer science |
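The paper's algorithm is considerably more involved; the sketch below only illustrates its one-hop averaging building block, in which agents repeatedly exchange values with their neighbours (here via Metropolis consensus weights, an assumption) so that all of them converge to the average observation, i.e. a centroid, even when the contributing agents are not adjacent in the network.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20
obs = rng.normal(size=(n, 3))                      # each agent's observation

# Ring-plus-random-chords communication graph (the ring keeps it connected).
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
for i, j in rng.integers(0, n, size=(15, 2)):
    if i != j:
        A[i, j] = A[j, i] = 1

# Metropolis weights give a doubly stochastic mixing matrix for average consensus.
deg = A.sum(axis=1)
W = np.zeros_like(A)
for i in range(n):
    for j in range(n):
        if A[i, j]:
            W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
    W[i, i] = 1.0 - W[i].sum()

x = obs.copy()
for _ in range(200):                               # repeated one-hop exchanges
    x = W @ x

print("max deviation from the true centroid:", np.abs(x - obs.mean(axis=0)).max())
```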
40,976 | Classification of Human Ventricular Arrhythmia in High Dimensional
Representation Spaces | cs.CE | We studied classification of human ECGs labelled as normal sinus rhythm,
ventricular fibrillation and ventricular tachycardia by means of support vector
machines in different representation spaces, using different observation
lengths. ECG waveform segments of duration 0.5-4 s, their Fourier magnitude
spectra, and lower dimensional projections of Fourier magnitude spectra were
used for classification. All considered representations were of much higher
dimension than in published studies. Classification accuracy improved with
segment duration up to 2 s, with 4 s providing little improvement. We found
that it is possible to discriminate between ventricular tachycardia and
ventricular fibrillation by the present approach with much shorter runs of ECG
(2 s, minimum 86% sensitivity per class) than previously imagined. Ensembles of
classifiers acting on 1 s segments taken over 5 s observation windows gave best
results, with sensitivities of detection for all classes exceeding 93%. | computer science |
40,977 | Manifold regularized kernel logistic regression for web image annotation | cs.LG | With the rapid advance of Internet technology and smart devices, users often
need to manage large amounts of multimedia information using smart devices,
such as personal image and video accessing and browsing. These requirements
heavily rely on the success of image (video) annotation, and thus large scale
image annotation through innovative machine learning methods has attracted
intensive attention in recent years. One representative work is support vector
machine (SVM). Although it works well in binary classification, SVM has a
non-smooth loss function and cannot naturally cover the multi-class case. In this
paper, we propose manifold regularized kernel logistic regression (KLR) for web
image annotation. Compared to SVM, KLR has the following advantages: (1) the
KLR has a smooth loss function; (2) the KLR produces an explicit estimate of
the probability instead of class label; and (3) the KLR can naturally be
generalized to the multi-class case. We carefully conduct experiments on MIR
FLICKR dataset and demonstrate the effectiveness of manifold regularized kernel
logistic regression for image annotation. | computer science |
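A rough sketch of the kind of model described above, not the paper's exact formulation: kernel logistic regression with an RKHS penalty plus a graph-Laplacian smoothness penalty, minimised by plain gradient descent on the kernel expansion coefficients. The kernel bandwidth, regularisation weights, and toy data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n, n_lab = 200, 30
X = np.vstack([rng.normal(loc=-1.5, size=(n // 2, 2)),
               rng.normal(loc=+1.5, size=(n // 2, 2))])
y_all = np.r_[np.zeros(n // 2), np.ones(n // 2)]
lab = rng.choice(n, size=n_lab, replace=False)     # indices of labelled points

sq = ((X[:, None] - X[None]) ** 2).sum(-1)
K = np.exp(-sq / 2.0)                              # RBF kernel matrix (also used as the graph)
L = np.diag(K.sum(1)) - K                          # graph Laplacian of the similarity graph

lam_A, lam_I, lr = 1e-3, 1e-4, 0.05                # RKHS weight, manifold weight, step size
alpha = np.zeros(n)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

for _ in range(2000):
    f = K @ alpha
    p = sigmoid(f[lab])
    grad = K[:, lab] @ (p - y_all[lab]) / n_lab \
        + 2 * lam_A * (K @ alpha) \
        + 2 * lam_I * (K @ (L @ f))
    alpha -= lr * grad

pred = (sigmoid(K @ alpha) > 0.5).astype(float)
print("accuracy on all 200 points:", (pred == y_all).mean())
```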
40,978 | Co-Multistage of Multiple Classifiers for Imbalanced Multiclass Learning | cs.LG | In this work, we propose two stochastic architectural models (CMC and CMC-M)
with two layers of classifiers applicable to datasets with one and multiple
skewed classes. This distinction becomes important when the datasets have a
large number of classes. Therefore, we present a novel solution to imbalanced
multiclass learning with several skewed majority classes, which improves
minority classes identification. This fact is particularly important for text
classification tasks, such as event detection. Our models combined with
pre-processing sampling techniques improved the classification results on six
well-known datasets. Finally, we have also introduced a new metric SG-Mean to
overcome the multiplication by zero limitation of G-Mean. | computer science |
40,979 | Local algorithms for interactive clustering | cs.DS | We study the design of interactive clustering algorithms for data sets
satisfying natural stability assumptions. Our algorithms start with any initial
clustering and only make local changes in each step; both are desirable
features in many applications. We show that in this constrained setting one can
still design provably efficient algorithms that produce accurate clusterings.
We also show that our algorithms perform well on real-world data. | computer science |
40,980 | Greedy Column Subset Selection for Large-scale Data Sets | cs.DS | In today's information systems, the availability of massive amounts of data
necessitates the development of fast and accurate algorithms to summarize these
data and represent them in a succinct format. One crucial problem in big data
analytics is the selection of representative instances from large and
massively-distributed data, which is formally known as the Column Subset
Selection (CSS) problem. The solution to this problem enables data analysts to
understand the insights of the data and explore its hidden structure. The
selected instances can also be used for data preprocessing tasks such as
learning a low-dimensional embedding of the data points or computing a low-rank
approximation of the corresponding matrix. This paper presents a fast and
accurate greedy algorithm for large-scale column subset selection. The
algorithm minimizes an objective function which measures the reconstruction
error of the data matrix based on the subset of selected columns. The paper
first presents a centralized greedy algorithm for column subset selection which
depends on a novel recursive formula for calculating the reconstruction error
of the data matrix. The paper then presents a MapReduce algorithm which selects
a few representative columns from a matrix whose columns are massively
distributed across several commodity machines. The algorithm first learns a
concise representation of all columns using random projection, and it then
solves a generalized column subset selection problem at each machine in which a
subset of columns are selected from the sub-matrix on that machine such that
the reconstruction error of the concise representation is minimized. The paper
demonstrates the effectiveness and efficiency of the proposed algorithm through
an empirical evaluation on benchmark data sets. | computer science |
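The sketch below illustrates only the greedy objective on a small synthetic matrix: repeatedly select the column whose residual direction best explains the current residual, then project that direction out. The paper's recursive formula (and its MapReduce version) avoids the repeated full computations done here.

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.normal(size=(100, 8)) @ rng.normal(size=(8, 300))   # approximately rank-8 matrix
A += 0.01 * rng.normal(size=A.shape)
k = 8

E = A.copy()                       # residual of A after removing the chosen columns
selected = []
for _ in range(k):
    norms = (E ** 2).sum(axis=0)
    norms[norms < 1e-12] = np.inf                     # never re-pick an exhausted column
    gains = ((E.T @ E) ** 2).sum(axis=0) / norms      # decrease in squared reconstruction error
    j = int(np.argmax(gains))
    selected.append(j)
    v = E[:, j] / np.linalg.norm(E[:, j])
    E -= np.outer(v, v @ E)                           # deflate the chosen direction

# Reconstruction error with the selected columns: || A - C C^+ A ||_F / ||A||_F.
C = A[:, selected]
P = C @ np.linalg.lstsq(C, A, rcond=None)[0]
print("selected columns:", selected)
print("relative reconstruction error:", np.linalg.norm(A - P) / np.linalg.norm(A))
```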
40,981 | Matrix recovery using Split Bregman | cs.NA | In this paper we address the problem of recovering a matrix, with inherent
low rank structure, from its lower dimensional projections. This problem is
frequently encountered in wide range of areas including pattern recognition,
wireless sensor networks, control systems, recommender systems, image/video
reconstruction etc. Both in theory and practice, the most effective way to solve
the low rank matrix recovery problem is via nuclear norm minimization. In this
paper, we propose a Split Bregman algorithm for nuclear norm minimization. The
use of Bregman technique improves the convergence speed of our algorithm and
gives a higher success rate. Also, the accuracy of reconstruction is much
better even in cases where only a small number of linear measurements is available.
Our claim is supported by empirical results obtained using our algorithm and
its comparison to other existing methods for matrix recovery. The algorithms
are compared on the basis of NMSE, execution time and success rate for varying
ranks and sampling ratios. | computer science |
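Most nuclear-norm solvers, Bregman-type methods included, are built around singular value soft-thresholding, the proximal operator of the nuclear norm. The sketch below is a plain iterative-thresholding scheme for matrix completion from randomly observed entries, not the paper's exact Split Bregman iteration; rank, sampling ratio, and step sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
m, n, r = 80, 60, 3
M = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))      # ground-truth low-rank matrix
mask = rng.random((m, n)) < 0.4                             # observed entries

def svt(Z, tau):
    """Singular value soft-thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

X = np.zeros((m, n))
tau, step = 1.0, 1.0
for it in range(300):
    grad = mask * (X - M)                 # gradient of the data-fit term on observed entries
    X = svt(X - step * grad, tau * step)  # proximal (shrinkage) step

print("relative recovery error:", np.linalg.norm(X - M) / np.linalg.norm(M))
```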
40,982 | Two Timescale Convergent Q-learning for Sleep--Scheduling in Wireless
Sensor Networks | cs.SY | In this paper, we consider an intrusion detection application for Wireless
Sensor Networks (WSNs). We study the problem of scheduling the sleep times of
the individual sensors to maximize the network lifetime while keeping the
tracking error to a minimum. We formulate this problem as a
partially-observable Markov decision process (POMDP) with continuous
state-action spaces, in a manner similar to (Fuemmeler and Veeravalli [2008]).
However, unlike their formulation, we consider infinite horizon discounted and
average cost objectives as performance criteria. For each criterion, we propose
a convergent on-policy Q-learning algorithm that operates on two timescales,
while employing function approximation to handle the curse of dimensionality
associated with the underlying POMDP. Our proposed algorithm incorporates a
policy gradient update using a one-simulation simultaneous perturbation
stochastic approximation (SPSA) estimate on the faster timescale, while the
Q-value parameter (arising from a linear function approximation for the
Q-values) is updated in an on-policy temporal difference (TD) algorithm-like
fashion on the slower timescale. The feature selection scheme employed in each
of our algorithms manages the energy and tracking components in a manner that
assists the search for the optimal sleep-scheduling policy. For the sake of
comparison, in both discounted and average settings, we also develop a function
approximation analogue of the Q-learning algorithm. This algorithm, unlike the
two-timescale variant, does not possess theoretical convergence guarantees.
Finally, we also adapt our algorithms to include a stochastic iterative
estimation scheme for the intruder's mobility model. Our simulation results on
a 2-dimensional network setting suggest that our algorithms result in better
tracking accuracy at the cost of only a few additional sensors, in comparison
to a recent prior work. | computer science |
40,983 | Nonparametric Inference For Density Modes | stat.ME | We derive nonparametric confidence intervals for the eigenvalues of the
Hessian at modes of a density estimate. This provides information about the
strength and shape of modes and can also be used as a significance test. We use
a data-splitting approach in which potential modes are identified using the
first half of the data and inference is done with the second half of the data.
To get valid confidence sets for the eigenvalues, we use a bootstrap based on
an elementary-symmetric-polynomial (ESP) transformation. This leads to valid
bootstrap confidence sets regardless of any multiplicities in the eigenvalues.
We also suggest a new method for bandwidth selection, namely, choosing the
bandwidth to maximize the number of significant modes. We show by example that
this method works well. Even when the true distribution is singular, and hence
does not have a density, (in which case cross validation chooses a zero
bandwidth), our method chooses a reasonable bandwidth. | computer science |
40,984 | Response-Based Approachability and its Application to Generalized
No-Regret Algorithms | cs.LG | Approachability theory, introduced by Blackwell (1956), provides fundamental
results on repeated games with vector-valued payoffs, and has been usefully
applied since in the theory of learning in games and to learning algorithms in
the online adversarial setup. Given a repeated game with vector payoffs, a
target set $S$ is approachable by a certain player (the agent) if he can ensure
that the average payoff vector converges to that set no matter what his
adversary opponent does. Blackwell provided two equivalent sets of conditions
for a convex set to be approachable. The first (primary) condition is a
geometric separation condition, while the second (dual) condition requires that
the set be {\em non-excludable}, namely that for every mixed action of the
opponent there exists a mixed action of the agent (a {\em response}) such that
the resulting payoff vector belongs to $S$. Existing approachability algorithms
rely on the primal condition and essentially require to compute at each stage a
projection direction from a given point to $S$. In this paper, we introduce an
approachability algorithm that relies on Blackwell's {\em dual} condition.
Thus, rather than projection, the algorithm relies on computation of the
response to a certain action of the opponent at each stage. The utility of the
proposed algorithm is demonstrated by applying it to certain generalizations of
the classical regret minimization problem, which include regret minimization
with side constraints and regret minimization for global cost functions. In
these problems, computation of the required projections is generally complex
but a response is readily obtainable. | computer science |
40,985 | Household Electricity Demand Forecasting -- Benchmarking
State-of-the-Art Methods | cs.LG | The increasing use of renewable energy sources with variable output, such as
solar photovoltaic and wind power generation, calls for Smart Grids that
effectively manage flexible loads and energy storage. The ability to forecast
consumption at different locations in distribution systems will be a key
capability of Smart Grids. The goal of this paper is to benchmark
state-of-the-art methods for forecasting electricity demand on the household
level across different granularities and time scales in an explorative way,
thereby revealing potential shortcomings and finding promising directions for
future research in this area. We apply a number of forecasting methods
including ARIMA, neural networks, and exponential smoothing, using several
strategies for training data selection, in particular day type and sliding
window based strategies. We consider forecasting horizons ranging between 15
minutes and 24 hours. Our evaluation is based on two data sets containing the
power usage of individual appliances at second time granularity collected over
the course of several months. The results indicate that forecasting accuracy
varies significantly depending on the choice of forecasting methods/strategy
and the parameter configuration. Measured by the Mean Absolute Percentage Error
(MAPE), the considered state-of-the-art forecasting methods rarely beat
corresponding persistence forecasts. Overall, we observed MAPEs in the range
between 5 and >100%. The average MAPE for the first data set was ~30%, while it
was ~85% for the other data set. These results leave substantial room for improvement.
Based on the identified trends and experiences from our experiments, we
contribute a detailed discussion of promising future research. | computer science |
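As a small illustration of the evaluation setup described above, the sketch below scores a persistence forecast (repeat the value observed one horizon earlier) with the Mean Absolute Percentage Error at several horizons. The synthetic load profile is purely illustrative and is not drawn from the paper's data sets.

```python
import numpy as np

rng = np.random.default_rng(8)
steps_per_day = 96                                   # 15-minute resolution
t = np.arange(14 * steps_per_day)                    # two weeks of data
load = 300 + 200 * np.sin(2 * np.pi * t / steps_per_day) ** 2 \
       + 50 * rng.random(len(t))                     # household-style daily pattern (W)

def mape(actual, forecast):
    return 100.0 * np.mean(np.abs(actual - forecast) / np.abs(actual))

for horizon_steps, label in [(1, "15 min"), (4, "1 h"), (96, "24 h")]:
    forecast = load[:-horizon_steps]                 # persistence: repeat the earlier value
    actual = load[horizon_steps:]
    print(f"persistence MAPE at {label}: {mape(actual, forecast):.1f}%")
```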
40,986 | Learning Two-input Linear and Nonlinear Analog Functions with a Simple
Chemical System | cs.LG | The current biochemical information processing systems behave in a
predetermined manner because all features are defined during the design phase.
To make such unconventional computing systems reusable and programmable for
biomedical applications, adaptation, learning, and self-modification based on
external stimuli would be highly desirable. However, so far, it has been too
challenging to implement these in wet chemistries. In this paper we extend the
chemical perceptron, a model previously proposed by the authors, to function as
an analog instead of a binary system. The new analog asymmetric signal
perceptron learns through feedback and supports Michaelis-Menten kinetics. The
results show that our perceptron is able to learn linear and nonlinear
(quadratic) functions of two inputs. To the best of our knowledge, it is the
first simulated chemical system capable of doing so. The small number of
species and reactions and their simplicity allows for a mapping to an actual
wet implementation using DNA-strand displacement or deoxyribozymes. Our results
are an important step toward actual biochemical systems that can learn and
adapt. | computer science |
40,987 | Cellular Automata and Its Applications in Bioinformatics: A Review | cs.CE | This paper aims at providing a survey on the problems that can be easily
addressed by cellular automata in bioinformatics. Some of the authors have
proposed algorithms for addressing some problems in bioinformatics but the
application of cellular automata in bioinformatics is a virgin field in
research. None of the researchers has tried to relate the major problems in
bioinformatics and find a common solution. Extensive literature surveys were
conducted. We have considered some papers in various journals and conferences
for conducting our research. This paper provides intuition towards relating
various problems in bioinformatics logically and tries to attain a common
framework for addressing the same. | computer science |
40,988 | piCholesky: Polynomial Interpolation of Multiple Cholesky Factors for
Efficient Approximate Cross-Validation | cs.LG | The dominant cost in solving least-square problems using Newton's method is
often that of factorizing the Hessian matrix over multiple values of the
regularization parameter ($\lambda$). We propose an efficient way to
interpolate the Cholesky factors of the Hessian matrix computed over a small
set of $\lambda$ values. This approximation enables us to optimally minimize
the hold-out error while incurring only a fraction of the cost compared to
exact cross-validation. We provide a formal error bound for our approximation
scheme and present solutions to a set of key implementation challenges that
allow our approach to maximally exploit the compute power of modern
architectures. We present a thorough empirical analysis over multiple datasets
to show the effectiveness of our approach. | computer science |
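The following toy example conveys the interpolation idea on a ridge-regression Hessian H(lambda) = X^T X + lambda*I: Cholesky factors are computed at a few anchor values of lambda, a low-degree polynomial in lambda is fitted to each entry, and the fitted factor is compared with the exact one at an unseen lambda. The anchor grid and polynomial degree are assumptions; the paper additionally provides error bounds and implementation optimisations.

```python
import numpy as np

rng = np.random.default_rng(9)
n, d = 60, 30
X = rng.normal(size=(n, d))
H0 = X.T @ X                                   # unregularised part of the Hessian

anchors = np.array([0.1, 1.0, 3.0, 10.0])      # lambdas with exactly computed factors
factors = np.array([np.linalg.cholesky(H0 + lam * np.eye(d)) for lam in anchors])

degree = 3
# Fit one degree-3 polynomial in lambda to every entry of the Cholesky factor.
coeffs = np.polyfit(anchors, factors.reshape(len(anchors), -1), degree)

def interpolated_cholesky(lam):
    powers = lam ** np.arange(degree, -1, -1)  # [lam^3, lam^2, lam, 1]
    return (powers @ coeffs).reshape(d, d)

lam_test = 5.0
L_hat = interpolated_cholesky(lam_test)
L_true = np.linalg.cholesky(H0 + lam_test * np.eye(d))
rel = np.linalg.norm(L_hat - L_true) / np.linalg.norm(L_true)
print("relative error of the interpolated Cholesky factor:", rel)
```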
40,989 | AIS-MACA- Z: MACA based Clonal Classifier for Splicing Site, Protein
Coding and Promoter Region Identification in Eukaryotes | cs.CE | Bioinformatics incorporates information regarding biological data storage,
accessing mechanisms and presentation of characteristics within this data. Most
of the problems in bioinformatics can be addressed efficiently by computer
techniques. This paper aims at building a classifier based on Multiple
Attractor Cellular Automata (MACA) which uses fuzzy logic with version Z to
predict splicing site, protein coding and promoter region identification in
eukaryotes. It is strengthened with an artificial immune system technique
(AIS), Clonal algorithm for choosing rules of best fitness. The proposed
classifier can handle DNA sequences of lengths 54,108,162,252,354. This
classifier gives the exact boundaries of both protein and promoter regions with
an average accuracy of 90.6%. This classifier can predict the splicing site
with 97% accuracy. This classifier was tested with 1,97,000 data components
which were taken from Fickett & Toung, EPDnew, and other sequences from a
renowned medical university. | computer science |
40,990 | Optimistic Risk Perception in the Temporal Difference error Explains the
Relation between Risk-taking, Gambling, Sensation-seeking and Low Fear | cs.LG | Understanding the affective, cognitive and behavioural processes involved in
risk taking is essential for treatment and for setting environmental conditions
to limit damage. Using Temporal Difference Reinforcement Learning (TDRL) we
computationally investigated the effect of optimism in risk perception in a
variety of goal-oriented tasks. Optimism in risk perception was studied by
varying the calculation of the Temporal Difference error, i.e., delta, in three
ways: realistic (stochastically correct), optimistic (assuming action control),
and overly optimistic (assuming outcome control). We show that for the gambling
task individuals with 'healthy' perception of control, i.e., action optimism,
do not develop gambling behaviour while individuals with 'unhealthy' perception
of control, i.e., outcome optimism, do. We show that high intensity of
sensations and low levels of fear co-occur due to optimistic risk perception.
We found that overly optimistic risk perception (outcome optimism) results in
risk taking and in persistent gambling behaviour in addition to high intensity
of sensations. We discuss how our results replicate risk-taking related
phenomena. | computer science |
40,991 | Towards the Safety of Human-in-the-Loop Robotics: Challenges and
Opportunities for Safety Assurance of Robotic Co-Workers | cs.RO | The success of the human-robot co-worker team in a flexible manufacturing
environment where robots learn from demonstration heavily relies on the correct
and safe operation of the robot. How this can be achieved is a challenge that
requires addressing both technical as well as human-centric research questions.
In this paper we discuss the state of the art in safety assurance, existing as
well as emerging standards in this area, and the need for new approaches to
safety assurance in the context of learning machines. We then focus on robotic
learning from demonstration, the challenges these techniques pose to safety
assurance and indicate opportunities to integrate safety considerations into
algorithms "by design". Finally, from a human-centric perspective, we stipulate
that, to achieve high levels of safety and ultimately trust, the robotic
co-worker must meet the innate expectations of the humans it works with. It is
our aim to stimulate a discussion focused on the safety aspects of
human-in-the-loop robotics, and to foster multidisciplinary collaboration to
address the research challenges identified. | computer science |
40,992 | Near-optimal sample compression for nearest neighbors | cs.LG | We present the first sample compression algorithm for nearest neighbors with
non-trivial performance guarantees. We complement these guarantees by
demonstrating almost matching hardness lower bounds, which show that our bound
is nearly optimal. Our result yields new insight into margin-based nearest
neighbor classification in metric spaces and allows us to significantly sharpen
and simplify existing bounds. Some encouraging empirical results are also
presented. | computer science |
40,993 | Complexity theoretic limitations on learning DNF's | cs.LG | Using the recently developed framework of [Daniely et al, 2014], we show that
under a natural assumption on the complexity of refuting random K-SAT formulas,
learning DNF formulas is hard. Furthermore, the same assumption implies the
hardness of learning intersections of $\omega(\log(n))$ halfspaces,
agnostically learning conjunctions, as well as virtually all (distribution
free) learning problems that were previously shown hard (under complexity
assumptions). | computer science |
40,994 | Methods for Ordinal Peer Grading | cs.LG | MOOCs have the potential to revolutionize higher education with their wide
outreach and accessibility, but they require instructors to come up with
scalable alternates to traditional student evaluation. Peer grading -- having
students assess each other -- is a promising approach to tackling the problem
of evaluation at scale, since the number of "graders" naturally scales with the
number of students. However, students are not trained in grading, which means
that one cannot expect the same level of grading skills as in traditional
settings. Drawing on broad evidence that ordinal feedback is easier to provide
and more reliable than cardinal feedback, it is therefore desirable to allow
peer graders to make ordinal statements (e.g. "project X is better than project
Y") and not require them to make cardinal statements (e.g. "project X is a
B-"). Thus, in this paper we study the problem of automatically inferring
student grades from ordinal peer feedback, as opposed to existing methods that
require cardinal peer feedback. We formulate the ordinal peer grading problem
as a type of rank aggregation problem, and explore several probabilistic models
under which to estimate student grades and grader reliability. We study the
applicability of these methods using peer grading data collected from a real
class -- with instructor and TA grades as a baseline -- and demonstrate the
efficacy of ordinal feedback techniques in comparison to existing cardinal peer
grading methods. Finally, we compare these peer-grading techniques to
traditional evaluation techniques. | computer science |
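The paper explores several probabilistic models; as one representative rank-aggregation approach, the sketch below fits a Bradley-Terry model to simulated pairwise "X beat Y" judgments with the classic minorise-maximise updates and checks the recovered ordering. The simulated grader behaviour is an assumption made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(10)
n_items, n_pairs = 20, 600
true_quality = rng.normal(size=n_items)

wins = np.zeros((n_items, n_items))                 # wins[i, j] = times i beat j
for _ in range(n_pairs):
    i, j = rng.choice(n_items, size=2, replace=False)
    p_i_beats_j = 1.0 / (1.0 + np.exp(-(true_quality[i] - true_quality[j])))
    if rng.random() < p_i_beats_j:
        wins[i, j] += 1
    else:
        wins[j, i] += 1

# Bradley-Terry minorise-maximise updates: w_i <- wins_i / sum_j n_ij / (w_i + w_j).
w = np.ones(n_items)
n_ij = wins + wins.T                                # total comparisons between i and j
for _ in range(200):
    denom = (n_ij / (w[:, None] + w[None, :])).sum(axis=1)
    w = wins.sum(axis=1) / denom
    w /= w.sum()                                    # fix the arbitrary scale

# Spearman-style rank agreement between estimated scores and the true qualities.
rank_est = np.argsort(np.argsort(-w))
rank_true = np.argsort(np.argsort(-true_quality))
print("rank correlation:", np.corrcoef(rank_est, rank_true)[0, 1])
```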
40,995 | Nearly Tight Bounds on $\ell_1$ Approximation of Self-Bounding Functions | cs.LG | We study the complexity of learning and approximation of self-bounding
functions over the uniform distribution on the Boolean hypercube $\{0,1\}^n$.
Informally, a function $f:\{0,1\}^n \rightarrow \mathbb{R}$ is self-bounding if
for every $x \in \{0,1\}^n$, $f(x)$ upper bounds the sum of all the $n$ marginal
decreases in the value of the function at $x$. Self-bounding functions include
such well-known classes of functions as submodular and fractionally-subadditive
(XOS) functions. They were introduced by Boucheron et al. in the context of
concentration of measure inequalities. Our main result is a nearly tight
$\ell_1$-approximation of self-bounding functions by low-degree juntas.
Specifically, all self-bounding functions can be $\epsilon$-approximated in
$\ell_1$ by a polynomial of degree $\tilde{O}(1/\epsilon)$ over
$2^{\tilde{O}(1/\epsilon)}$ variables. We show that both the degree and
junta-size are optimal up to logarithmic terms. Previous techniques considered
stronger $\ell_2$ approximation and proved nearly tight bounds of
$\Theta(1/\epsilon^{2})$ on the degree and $2^{\Theta(1/\epsilon^2)}$ on the
number of variables. Our bounds rely on the analysis of noise stability of
self-bounding functions together with a stronger connection between noise
stability and $\ell_1$ approximation by low-degree polynomials. This technique
can also be used to get tighter bounds on $\ell_1$ approximation by low-degree
polynomials and faster learning algorithm for halfspaces.
These results lead to improved and in several cases almost tight bounds for
PAC and agnostic learning of self-bounding functions relative to the uniform
distribution. In particular, assuming hardness of learning juntas, we show that
PAC and agnostic learning of self-bounding functions have complexity of
$n^{\tilde{\Theta}(1/\epsilon)}$. | computer science |
40,996 | Concurrent bandits and cognitive radio networks | cs.LG | We consider the problem of multiple users targeting the arms of a single
multi-armed stochastic bandit. The motivation for this problem comes from
cognitive radio networks, where selfish users need to coexist without any side
communication between them, implicit cooperation or common control. Even the
number of users may be unknown and can vary as users join or leave the network.
We propose an algorithm that combines an $\epsilon$-greedy learning rule with a
collision avoidance mechanism. We analyze its regret with respect to the
system-wide optimum and show that sub-linear regret can be obtained in this
setting. Experiments show dramatic improvement compared to other algorithms for
this setting. | computer science |
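The sketch below is a toy simulation in the spirit of the setting above, not the paper's exact mechanism: several users independently run an epsilon-greedy rule on the same arms, collisions yield zero reward, and each user randomises among its top estimated arms to reduce collisions. Arm means, epsilon, and the back-off rule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)
n_arms, n_users, T, eps = 8, 3, 20000, 0.05
mu = np.sort(rng.uniform(0.2, 0.9, size=n_arms))[::-1]    # Bernoulli arm means

counts = np.zeros((n_users, n_arms))
means = np.zeros((n_users, n_arms))
total_reward = 0.0

for t in range(T):
    choices = np.empty(n_users, dtype=int)
    for u in range(n_users):
        if rng.random() < eps:
            choices[u] = rng.integers(n_arms)              # exploration
        else:
            top = np.argsort(-means[u])[:n_users]          # randomise among top arms
            choices[u] = rng.choice(top)
    for u in range(n_users):
        collided = (choices == choices[u]).sum() > 1       # another user on the same arm
        r = 0.0 if collided else float(rng.random() < mu[choices[u]])
        counts[u, choices[u]] += 1
        means[u, choices[u]] += (r - means[u, choices[u]]) / counts[u, choices[u]]
        total_reward += r

best = mu[:n_users].sum() * T          # system optimum: one user on each top arm
print("empirical reward / optimum:", total_reward / best)
```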
40,997 | A Comparison of Clustering and Missing Data Methods for Health Sciences | math.NA | In this paper, we compare and analyze clustering methods with missing data in
health behavior research. In particular, we propose and analyze the use of
compressive sensing's matrix completion along with spectral clustering to
cluster health related data. The empirical tests and real data results show
that these methods can outperform standard methods like LPA and FIML, in terms
of lower misclassification rates in clustering and better matrix completion
performance in missing data problems. According to our examination, a possible
explanation of these improvements is that spectral clustering takes advantage
of high data dimension and compressive sensing methods utilize the
near-to-low-rank property of health data. | computer science |
40,999 | A Multi Level Data Fusion Approach for Speaker Identification on
Telephone Speech | cs.SD | Several speaker identification systems are giving good performance with clean
speech but are affected by the degradations introduced by noisy audio
conditions. To deal with this problem, we investigate the use of complementary
information at different levels for computing a combined match score for the
unknown speaker. In this work, we observe the effect of two supervised machine
learning approaches including support vector machines (SVM) and naive Bayes
(NB). We define two feature vector sets based on mel frequency cepstral
coefficients (MFCC) and relative spectral perceptual linear predictive
coefficients (RASTA-PLP). Each feature is modeled using the Gaussian Mixture
Model (GMM). Several ways of combining these information sources give
significant improvements in a text-independent speaker identification task
using a very large telephone degraded NTIMIT database. | computer science |