Unnamed: 0 (int64, 0-41k) | title (string, 4-274 chars) | category (string, 5-18 chars) | summary (string, 22-3.66k chars) | theme (string, 8 classes) |
---|---|---|---|---|
40,710 | An Initial Seed Selection Algorithm for K-means Clustering of
Georeferenced Data to Improve Replicability of Cluster Assignments for
Mapping Application | cs.LG | K-means is one of the most widely used clustering algorithms in various
disciplines, especially for large datasets. However, the method is known to be
highly sensitive to initial seed selection of cluster centers. K-means++ has
been proposed to overcome this problem and has been shown to have better
accuracy and computational efficiency than k-means. In many clustering problems,
though, such as when classifying georeferenced data for mapping applications,
standardization of clustering methodology, specifically the ability to arrive
at the same cluster assignment for every run of the method, i.e. replicability
of the methodology, may be of greater significance than any perceived measure
of accuracy, especially when the solution is known to be non-unique, as in the
case of k-means clustering. Here we propose a simple initial seed selection
algorithm for k-means clustering along one attribute that draws initial cluster
boundaries along the 'deepest valleys' or greatest gaps in the dataset. Thus, it
incorporates a measure to maximize distance between consecutive cluster centers
which augments the conventional k-means optimization for minimum distance
between cluster center and cluster members. Unlike existing initialization
methods, no additional parameters or degrees of freedom are introduced to the
clustering algorithm. This improves the replicability of cluster assignments by
as much as 100% over k-means and k-means++, virtually reducing the variance
over different runs to zero, without introducing any additional parameters to
the clustering process. Further, the proposed method is more computationally
efficient than k-means++ and in some cases, more accurate. | computer science |
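The seeding rule described above (splitting one attribute at its largest gaps) is simple enough to sketch. Below is a minimal, hedged illustration in Python/NumPy; using each segment's mean as the seed is an assumption of this sketch, not necessarily the paper's exact rule.

```python
import numpy as np

def gap_seeds(values, k):
    """Deterministic 1-D seeds: cut the sorted values at the k-1 largest gaps
    ('deepest valleys') and seed each resulting segment with its mean.
    Seeding with the segment mean is an assumption of this sketch."""
    x = np.sort(np.asarray(values, dtype=float))
    gaps = np.diff(x)                              # gaps between consecutive sorted values
    cuts = np.sort(np.argsort(gaps)[-(k - 1):])    # positions of the k-1 largest gaps
    segments = np.split(x, cuts + 1)               # k segments separated by those gaps
    return np.array([seg.mean() for seg in segments])

# Deterministic seeds give identical cluster assignments on every run.
print(gap_seeds([1.0, 1.2, 1.1, 5.0, 5.3, 9.8, 10.1], k=3))
```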
40,711 | CT-Mapper: Mapping Sparse Multimodal Cellular Trajectories using a
Multilayer Transportation Network | cs.SI | Mobile phone data have recently become an attractive source of information
about mobility behavior. Since cell phone data can be captured in a passive way
for a large user population, they can be harnessed to collect well-sampled
mobility information. In this paper, we propose CT-Mapper, an unsupervised
algorithm that enables the mapping of mobile phone traces over a multimodal
transport network. One of the main strengths of CT-Mapper is its capability to
map noisy sparse cellular multimodal trajectories over a multilayer
transportation network whose layers have different physical properties, rather
than only mapping trajectories associated with a single layer. Such a network is
modeled by a large multilayer graph in which the nodes correspond to
metro/train stations or road intersections and edges correspond to connections
between them. The mapping problem is modeled by an unsupervised HMM where the
observations correspond to sparse user mobile trajectories and the hidden
states to the multilayer graph nodes. The HMM is unsupervised as the transition
and emission probabilities are inferred using respectively the physical
transportation properties and the information on the spatial coverage of
antenna base stations. To evaluate CT-Mapper we collected cellular traces with
their corresponding GPS trajectories for a group of volunteer users in Paris
and vicinity (France). We show that CT-Mapper is able to accurately retrieve
the real cell phone user paths despite the sparsity of the observed trace
trajectories. Furthermore, our transition probability model is up to 20% more
accurate than other naive models. | computer science |
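The mapping step described above is an HMM decoding problem over graph nodes. As a hedged illustration, a generic log-domain Viterbi decoder is sketched below; it is not CT-Mapper's specific model, whose transition and emission probabilities come from the transport network and antenna coverage.

```python
import numpy as np

def viterbi(log_init, log_trans, log_emit):
    """Most likely hidden node sequence for an HMM.
    log_init: (S,) initial log-probabilities over graph nodes.
    log_trans: (S, S) transition log-probabilities between nodes.
    log_emit: (T, S) log-probability of each observed cell record given each node."""
    T, S = log_emit.shape
    score = log_init + log_emit[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans      # score of reaching node j from each predecessor i
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_emit[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]                          # node indices along the inferred trajectory
```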
40,712 | Observing and Recommending from a Social Web with Biases | cs.DB | The research question this report addresses is: how, and to what extent, can
those directly involved with the design, development and employment of a
specific black box algorithm be certain that it is not unlawfully
discriminating (directly and/or indirectly) against particular persons with
protected characteristics (e.g. gender, race and ethnicity)? | computer science |
40,713 | Unbiased Comparative Evaluation of Ranking Functions | cs.IR | Eliciting relevance judgments for ranking evaluation is labor-intensive and
costly, motivating careful selection of which documents to judge. Unlike
traditional approaches that make this selection deterministically,
probabilistic sampling has shown intriguing promise since it enables the design
of estimators that are provably unbiased even when reusing data with missing
judgments. In this paper, we first unify and extend these sampling approaches
by viewing the evaluation problem as a Monte Carlo estimation task that applies
to a large number of common IR metrics. Drawing on the theoretical clarity that
this view offers, we tackle three practical evaluation scenarios: comparing two
systems, comparing $k$ systems against a baseline, and ranking $k$ systems. For
each scenario, we derive an estimator and a variance-optimizing sampling
distribution while retaining the strengths of sampling-based evaluation,
including unbiasedness, reusability despite missing data, and ease of use in
practice. In addition to the theoretical contribution, we empirically evaluate
our methods against previously used sampling heuristics and find that they
generally cut the number of required relevance judgments at least in half. | computer science |
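For intuition, the simplest member of this family is an inverse-propensity (Horvitz-Thompson) estimate of a linear metric; the estimators and variance-optimizing sampling distributions in the paper are more refined, so this is only a hedged sketch.

```python
import numpy as np

def sampled_metric_estimate(gains, p, judged, rel):
    """Unbiased estimate of a linear metric sum_i gains[i] * rel[i]
    (e.g. DCG with gains[i] = 1/log2(i+2)) when document i was judged
    independently with known probability p[i]."""
    est = np.zeros_like(gains, dtype=float)
    est[judged] = gains[judged] * rel[judged] / p[judged]   # reweight judged docs
    return est.sum()

# toy usage: 5 ranked documents, 3 of them judged
gains = 1.0 / np.log2(np.arange(2, 7))
p = np.array([1.0, 0.8, 0.5, 0.5, 0.3])
judged = np.array([True, True, False, True, False])
rel = np.array([1, 0, 0, 1, 0])
print(sampled_metric_estimate(gains, p, judged, rel))
```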
40,714 | Towards Reduced Reference Parametric Models for Estimating Audiovisual
Quality in Multimedia Services | cs.MM | We have developed reduced reference parametric models for estimating
perceived quality in audiovisual multimedia services. We have created 144
unique configurations for audiovisual content including various application and
network parameters such as bitrates and distortions in terms of bandwidth,
packet loss rate and jitter. To generate the data needed for model training and
validation we have tasked 24 subjects, in a controlled environment, to rate the
overall audiovisual quality on the absolute category rating (ACR) 5-level
quality scale. We have developed models using Random Forest and Neural Network
based machine learning methods in order to estimate Mean Opinion Scores (MOS)
values. We have used information retrieved from the packet headers and side
information provided as network parameters for model training. Random Forest
based models have performed better in terms of Root Mean Square Error (RMSE)
and Pearson correlation coefficient. The side information proved to be very
effective in developing the model. We have found that, while the model
performance might be improved by replacing the side information with more
accurate bit stream level measurements, the models perform well in estimating
perceived quality in audiovisual multimedia services. | computer science |
40,715 | Convolutional Neural Networks For Automatic State-Time Feature
Extraction in Reinforcement Learning Applied to Residential Load Control | cs.LG | Direct load control of a heterogeneous cluster of residential demand
flexibility sources is a high-dimensional control problem with partial
observability. This work proposes a novel approach that uses a convolutional
neural network to extract hidden state-time features to mitigate the curse of
partial observability. More specifically, a convolutional neural network is used as
a function approximator to estimate the state-action value function or
Q-function in the supervised learning step of fitted Q-iteration. The approach
is evaluated in a qualitative simulation, comprising a cluster of
thermostatically controlled loads that only share their air temperature, whilst
their envelope temperature remains hidden. The simulation results show that the
presented approach is able to capture the underlying hidden features and
successfully reduce the electricity cost of the cluster. | computer science |
40,716 | Detection of epileptic seizure in EEG signals using linear least squares
preprocessing | cs.LG | An epileptic seizure is a transient event of abnormal excessive neuronal
discharge in the brain. This unwanted event can be obstructed by detection of
electrical changes in the brain that happen before the seizure takes place. The
automatic detection of seizures is necessary since the visual screening of EEG
recordings is a time-consuming task and requires experts to improve the
diagnosis. Four linear least squares-based preprocessing models are proposed to
extract key features of an EEG signal in order to detect seizures. The first
two models are newly developed. The original signal (EEG) is approximated by a
sinusoidal curve. Its amplitude is formed by a polynomial function and compared
with the pre-developed spline function. Different statistical measures, namely
classification accuracy, true positive and negative rates, false positive and
negative rates, and precision, are utilized to assess the performance of the
proposed models. These metrics are derived from confusion matrices obtained
from classifiers. Different classifiers are used over the original dataset and
the set of extracted features. The proposed models significantly reduce the
dimension of the classification problem and the computational time while the
classification accuracy is improved in most cases. The first and third models
are promising feature extraction methods. Logistic, LazyIB1, LazyIB5 and J48
are the best classifiers. Their true positive and negative rates are $1$ while
false positive and negative rates are zero and the corresponding precision
values are $1$. Numerical results suggest that these models are robust and
efficient for detecting epileptic seizure. | computer science |
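The abstract does not spell out the four models, but the core idea (approximating an EEG window by a sinusoid via linear least squares and keeping a few fitted coefficients as features) can be sketched as follows; the fixed single frequency is an assumption of this sketch, not the paper's exact formulation.

```python
import numpy as np

def sinusoid_features(signal, freq, fs):
    """Fit signal[n] ~ a*sin(2*pi*freq*n/fs) + b*cos(2*pi*freq*n/fs) + c by
    linear least squares; (a, b, c) or the amplitude sqrt(a^2 + b^2) can then
    serve as low-dimensional features for a seizure classifier."""
    t = np.arange(len(signal)) / fs
    A = np.column_stack([np.sin(2 * np.pi * freq * t),
                         np.cos(2 * np.pi * freq * t),
                         np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(A, signal, rcond=None)
    return coef
```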
40,717 | A movie genre prediction based on Multivariate Bernoulli model and genre
correlations | cs.IR | Movie ratings play an important role both in determining the likelihood of a
potential viewer to watch the movie and in reflecting the current viewer
satisfaction with the movie. They are available in several sources like the
television guide, best-selling reference books, newspaper columns, and
television programs. Furthermore, movie ratings are crucial for recommendation
engines that track the behavior of all users and utilize the information to
suggest items they might like. Movie ratings, in most cases, thus provide
information that might be more important than movie feature-based data. It is
intuitively appealing that information about the viewing preferences in movie
genres is sufficient for predicting a genre of an unlabeled movie. In order to
predict movie genres, we treat ratings as a feature vector, apply the Bernoulli
event model to estimate the likelihood of a movie given its genre, and evaluate
the posterior probability of the genre of a given movie using Bayes' rule.
The goal of the proposed technique is to efficiently use the movie ratings for
the task of predicting movie genres. In our approach we attempted to answer the
question: "Given the set of users who watched a movie, is it possible to
predict the genre of a movie based on its ratings?" Our simulation results with
MovieLens 100k data demonstrated the efficiency and accuracy of our proposed
technique, achieving 59% prediction rate for exact prediction and 69% when
including correlated genres. | computer science |
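A minimal sketch of the described pipeline under a naive-Bayes reading of the multivariate Bernoulli model: each movie is a binary vector over users (rated or not), per-genre Bernoulli parameters are estimated with Laplace smoothing, and the genre posterior follows from Bayes' rule. The smoothing constant and the single-label simplification are assumptions of this sketch.

```python
import numpy as np

def fit_bernoulli_genre_model(X, y, n_genres, alpha=1.0):
    """X: (movies, users) binary matrix of who rated each movie.
    y: genre index per movie. Returns log prior and log Bernoulli parameters."""
    priors = np.array([(y == g).mean() for g in range(n_genres)])
    theta = np.array([(X[y == g].sum(axis=0) + alpha) / ((y == g).sum() + 2 * alpha)
                      for g in range(n_genres)])          # P(user rated movie | genre)
    return np.log(priors), np.log(theta), np.log1p(-theta)

def predict_genre(x, log_prior, log_theta, log_1m_theta):
    """Posterior-maximizing genre for one binary rating vector x."""
    log_post = log_prior + (x * log_theta + (1 - x) * log_1m_theta).sum(axis=1)
    return int(np.argmax(log_post))
```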
40,718 | Music transcription modelling and composition using deep learning | cs.SD | We apply deep learning methods, specifically long short-term memory (LSTM)
networks, to music transcription modelling and composition. We build and train
LSTM networks using approximately 23,000 music transcriptions expressed with a
high-level vocabulary (ABC notation), and use them to generate new
transcriptions. Our practical aim is to create music transcription models
useful in particular contexts of music composition. We present results from
three perspectives: 1) at the population level, comparing descriptive
statistics of the set of training transcriptions and generated transcriptions;
2) at the individual level, examining how a generated transcription reflects
the conventions of a music practice in the training transcriptions (Celtic
folk); 3) at the application level, using the system for idea generation in
music composition. We make our datasets, software and sound examples open and
available: \url{https://github.com/IraKorshunova/folk-rnn}. | computer science |
40,719 | Joint Sound Source Separation and Speaker Recognition | cs.SD | Non-negative Matrix Factorization (NMF) has already been applied to learn
speaker characterizations from single or non-simultaneous speech for speaker
recognition applications. It is also known for its good performance in (blind)
source separation for simultaneous speech. This paper explains how NMF can be
used to jointly solve the two problems in a multichannel speaker recognizer for
simultaneous speech. It is shown how state-of-the-art multichannel NMF for
blind source separation can be easily extended to incorporate speaker
recognition. Experiments on the CHiME corpus show that this method outperforms
the sequential approach of first applying source separation, followed by
speaker recognition that uses state-of-the-art i-vector techniques. | computer science |
40,720 | A game-theoretic version of Oakes' example for randomized forecasting | cs.LG | Using the game-theoretic framework for probability, Vovk and Shafer have
shown that it is always possible, using randomization, to make sequential
probability forecasts that pass any countable set of well-behaved statistical
tests. This result generalizes work by other authors, who consider only tests
of calibration.
We complement this result with a lower bound. We show that Vovk and Shafer's
result is valid only when the forecasts are computed with unrestrictedly
increasing degree of accuracy.
When some level of discreteness is fixed, we present a game-theoretic
generalization of Oakes' example for randomized forecasting that is a test
failing any given method of deterministic forecasting; originally, this example
was presented for deterministic calibration. | computer science |
40,721 | Craniofacial reconstruction as a prediction problem using a Latent Root
Regression model | cs.LG | In this paper, we present a computer-assisted method for facial
reconstruction. This method provides an estimation of the facial shape
associated with unidentified skeletal remains. Current computer-assisted
methods using a statistical framework rely on a common set of extracted points
located on the bone and soft-tissue surfaces. Most of the facial reconstruction
methods then consist of predicting the position of the soft-tissue surface
points, when the positions of the bone surface points are known. We propose to
use Latent Root Regression for prediction. The results obtained are then
compared to those given by Principal Components Analysis linear models. In
conjunction, we have evaluated the influence of the number of skull landmarks
used. Anatomical skull landmarks are completed iteratively by points located
upon geodesics which link these anatomical landmarks, thus enabling us to
artificially increase the number of skull points. Facial points are obtained
using a mesh-matching algorithm between a common reference mesh and individual
soft-tissue surface meshes. The proposed method is validated in terms of
accuracy, based on a leave-one-out cross-validation test applied to a
homogeneous database. Accuracy measures are obtained by computing the distance
between the original face surface and its reconstruction. Finally, these
results are discussed with reference to current computer-assisted facial
reconstruction techniques. | computer science |
40,722 | Near-optimal Coresets For Least-Squares Regression | cs.DS | We study (constrained) least-squares regression as well as multiple response
least-squares regression and ask the question of whether a subset of the data,
a coreset, suffices to compute a good approximate solution to the regression.
We give deterministic, low order polynomial-time algorithms to construct such
coresets with approximation guarantees, together with lower bounds indicating
that there is not much room for improvement upon our results. | computer science |
40,723 | Finding a most biased coin with fewest flips | cs.DS | We study the problem of learning a most biased coin among a set of coins by
tossing the coins adaptively. The goal is to minimize the number of tosses
until we identify a coin i* whose posterior probability of being most biased is
at least 1-delta for a given delta. Under a particular probabilistic model, we
give an optimal algorithm, i.e., an algorithm that minimizes the expected
number of future tosses. The problem is closely related to finding the best arm
in the multi-armed bandit problem using adaptive strategies. Our algorithm
employs an optimal adaptive strategy -- a strategy that performs the best
possible action at each step after observing the outcomes of all previous coin
tosses. Consequently, our algorithm is also optimal for any starting history of
outcomes. To our knowledge, this is the first algorithm that employs an optimal
adaptive strategy under a Bayesian setting for this problem. Our proof of
optimality employs tools from the field of Markov games. | computer science |
40,724 | Guaranteed clustering and biclustering via semidefinite programming | math.OC | Identifying clusters of similar objects in data plays a significant role in a
wide range of applications. As a model problem for clustering, we consider the
densest k-disjoint-clique problem, whose goal is to identify the collection of
k disjoint cliques of a given weighted complete graph maximizing the sum of the
densities of the complete subgraphs induced by these cliques. In this paper, we
establish conditions ensuring exact recovery of the densest k cliques of a
given graph from the optimal solution of a particular semidefinite program. In
particular, the semidefinite relaxation is exact for input graphs corresponding
to data consisting of k large, distinct clusters and a smaller number of
outliers. This approach also yields a semidefinite relaxation for the
biclustering problem with similar recovery guarantees. Given a set of objects
and a set of features exhibited by these objects, biclustering seeks to
simultaneously group the objects and features according to their expression
levels. This problem may be posed as partitioning the nodes of a weighted
bipartite complete graph such that the sum of the densities of the resulting
bipartite complete subgraphs is maximized. As in our analysis of the densest
k-disjoint-clique problem, we show that the correct partition of the objects
and features can be recovered from the optimal solution of a semidefinite
program in the case that the given data consists of several disjoint sets of
objects exhibiting similar features. Empirical evidence from numerical
experiments supporting these theoretical guarantees is also provided. | computer science |
40,725 | The best of both worlds: stochastic and adversarial bandits | cs.LG | We present a new bandit algorithm, SAO (Stochastic and Adversarial Optimal),
whose regret is, essentially, optimal both for adversarial rewards and for
stochastic rewards. Specifically, SAO combines the square-root worst-case
regret of Exp3 (Auer et al., SIAM J. on Computing 2002) and the
(poly)logarithmic regret of UCB1 (Auer et al., Machine Learning 2002) for
stochastic rewards. Adversarial rewards and stochastic rewards are the two main
settings in the literature on (non-Bayesian) multi-armed bandits. Prior work on
multi-armed bandits treats them separately, and does not attempt to jointly
optimize for both. Our result falls into a general theme of achieving good
worst-case performance while also taking advantage of "nice" problem instances,
an important issue in the design of algorithms with partially known inputs. | computer science |
40,726 | Min Max Generalization for Two-stage Deterministic Batch Mode
Reinforcement Learning: Relaxation Schemes | cs.SY | We study the minmax optimization problem introduced in [22] for computing
policies for batch mode reinforcement learning in a deterministic setting.
First, we show that this problem is NP-hard. In the two-stage case, we provide
two relaxation schemes. The first relaxation scheme works by dropping some
constraints in order to obtain a problem that is solvable in polynomial time.
The second relaxation scheme, based on a Lagrangian relaxation where all
constraints are dualized, leads to a conic quadratic programming problem. We
also theoretically prove and empirically illustrate that both relaxation
schemes provide better results than those given in [22]. | computer science |
40,727 | Nonlinear Laplacian spectral analysis: Capturing intermittent and
low-frequency spatiotemporal patterns in high-dimensional data | cs.LG | We present a technique for spatiotemporal data analysis called nonlinear
Laplacian spectral analysis (NLSA), which generalizes singular spectrum
analysis (SSA) to take into account the nonlinear manifold structure of complex
data sets. The key principle underlying NLSA is that the functions used to
represent temporal patterns should exhibit a degree of smoothness on the
nonlinear data manifold M, a constraint absent from classical SSA. NLSA
enforces such a notion of smoothness by requiring that temporal patterns belong
in low-dimensional Hilbert spaces V_l spanned by the leading l Laplace-Beltrami
eigenfunctions on M. These eigenfunctions can be evaluated efficiently in high
ambient-space dimensions using sparse graph-theoretic algorithms. Moreover,
they provide orthonormal bases to expand a family of linear maps, whose
singular value decomposition leads to sets of spatiotemporal patterns at
progressively finer resolution on the data manifold. The Riemannian measure of
M and an adaptive graph kernel width enhance the capability of NLSA to detect
important nonlinear processes, including intermittency and rare events. The
minimum dimension of V_l required to capture these features while avoiding
overfitting is estimated here using spectral entropy criteria. | computer science |
40,728 | A Stochastic Gradient Method with an Exponential Convergence Rate for
Finite Training Sets | math.OC | We propose a new stochastic gradient method for optimizing the sum of a
finite set of smooth functions, where the sum is strongly convex. While
standard stochastic gradient methods converge at sublinear rates for this
problem, the proposed method incorporates a memory of previous gradient values
in order to achieve a linear convergence rate. In a machine learning context,
numerical experiments indicate that the new algorithm can dramatically
outperform standard algorithms, both in terms of optimizing the training error
and reducing the test error quickly. | computer science |
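The gradient-memory idea can be sketched in a few lines; least squares is used here as an assumed example of a smooth strongly convex finite sum, and the constant step size is a simplification.

```python
import numpy as np

def sag_least_squares(X, y, lr=0.01, n_passes=50, seed=0):
    """Stochastic Average Gradient on f(w) = (1/n) sum_i 0.5*(x_i.w - y_i)^2.
    Keeps the last gradient seen for every example and steps along the
    average of the stored gradients, giving a linear convergence rate."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    grads = np.zeros((n, d))       # stored per-example gradients
    grad_sum = np.zeros(d)         # running sum of stored gradients
    w = np.zeros(d)
    for _ in range(n_passes * n):
        i = rng.integers(n)
        g_new = (X[i] @ w - y[i]) * X[i]   # gradient of example i at current w
        grad_sum += g_new - grads[i]       # refresh the memory for example i
        grads[i] = g_new
        w -= lr * grad_sum / n             # step along the average stored gradient
    return w
```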
40,729 | A Route Confidence Evaluation Method for Reliable Hierarchical Text
Categorization | cs.IR | Hierarchical Text Categorization (HTC) is becoming increasingly important
with the rapidly growing amount of text data available in the World Wide Web.
Among the different strategies proposed to cope with HTC, the Local Classifier
per Node (LCN) approach attains good performance by mirroring the underlying
class hierarchy while enforcing a top-down strategy in the testing step.
However, the problem of embedding hierarchical information (parent-child
relationship) to improve the performance of HTC systems still remains open. A
confidence evaluation method for a selected route in the hierarchy is proposed
to evaluate the reliability of the final candidate labels in an HTC system. In
order to take into account the information embedded in the hierarchy, weight
factors are used to reflect the importance of each level. An
acceptance/rejection strategy in the top-down decision making process is
proposed, which improves the overall categorization accuracy by rejecting a
small percentage of samples, i.e., those with a low reliability score. Experimental
results on the Reuters benchmark dataset (RCV1- v2) confirm the effectiveness
of the proposed method, compared to other state-of-the art HTC methods. | computer science |
40,730 | A Machine Learning Approach For Opinion Holder Extraction In Arabic
Language | cs.IR | Opinion mining aims at extracting useful subjective information from large
amounts of text. Opinion holder recognition is a task that has not yet been
considered for the Arabic language. This task essentially requires deep
understanding of clauses structures. Unfortunately, the lack of a robust,
publicly available, Arabic parser further complicates the research. This paper
presents a leading research for the opinion holder extraction in Arabic news
independent from any lexical parsers. We investigate constructing a
comprehensive feature set to compensate the lack of parsing structural
outcomes. The proposed feature set is adapted from previous work on English, coupled
with our proposed semantic field and named entity features. Our feature
analysis is based on Conditional Random Fields (CRF) and semi-supervised
pattern recognition techniques. Different research models are evaluated via
cross-validation experiments achieving 54.03 F-measure. We publicly release our
own research outcome corpus and lexicon for opinion mining community to
encourage further research. | computer science |
40,731 | Memory-Efficient Topic Modeling | cs.LG | As one of the simplest probabilistic topic modeling techniques, latent
Dirichlet allocation (LDA) has found many important applications in text
mining, computer vision and computational biology. Recent training algorithms
for LDA can be interpreted within a unified message passing framework. However,
message passing requires storing previous messages with a large amount of
memory space, increasing linearly with the number of documents or the number of
topics. Therefore, the high memory usage is often a major problem for topic
modeling of massive corpora containing a large number of topics. To reduce the
space complexity, we propose a novel algorithm without storing previous
messages for training LDA: tiny belief propagation (TBP). The basic idea of TBP
relates the message passing algorithms with the non-negative matrix
factorization (NMF) algorithms, which absorb the message updating into the
message passing process, and thus avoid storing previous messages. Experimental
results on four large data sets confirm that TBP performs comparably well or
even better than current state-of-the-art training algorithms for LDA but with
much lower memory consumption. TBP can do topic modeling when massive corpora
cannot fit in the computer memory, for example, extracting thematic topics from
7 GB PUBMED corpora on a common desktop computer with 2GB memory. | computer science |
40,732 | PRISMA: PRoximal Iterative SMoothing Algorithm | math.OC | Motivated by learning problems including max-norm regularized matrix
completion and clustering, robust PCA and sparse inverse covariance selection,
we propose a novel optimization algorithm for minimizing a convex objective
which decomposes into three parts: a smooth part, a simple non-smooth Lipschitz
part, and a simple non-smooth non-Lipschitz part. We use a time variant
smoothing strategy that allows us to obtain a guarantee that does not depend on
knowing in advance the total number of iterations nor a bound on the domain. | computer science |
40,733 | Sparse Distributed Learning Based on Diffusion Adaptation | cs.LG | This article proposes diffusion LMS strategies for distributed estimation
over adaptive networks that are able to exploit sparsity in the underlying
system model. The approach relies on convex regularization, common in
compressive sensing, to enhance the detection of sparsity via a diffusive
process over the network. The resulting algorithms endow networks with learning
abilities and allow them to learn the sparse structure from the incoming data
in real-time, and also to track variations in the sparsity of the model. We
provide convergence and mean-square performance analysis of the proposed method
and show under what conditions it outperforms the unregularized diffusion
version. We also show how to adaptively select the regularization parameter.
Simulation results illustrate the advantage of the proposed filters for sparse
data recovery. | computer science |
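As a hedged illustration, one common way to combine diffusion LMS with an l1 (zero-attracting) regularizer in an adapt-then-combine form is sketched below; the paper's exact regularizers and combination rules may differ.

```python
import numpy as np

def zero_attracting_diffusion_lms(U, d, A, mu=0.01, rho=1e-4, n_iters=500):
    """Adapt-then-combine diffusion LMS with an l1 (zero-attracting) term.
    U: (N, T, M) regressors per node and time; d: (N, T) desired responses.
    A: (N, N) combination matrix with columns summing to one; A[l, k] weights
    node l's intermediate estimate in node k's combination step."""
    N, T, M = U.shape
    W = np.zeros((N, M))
    for it in range(n_iters):
        t = it % T
        psi = np.zeros((N, M))
        for k in range(N):                       # adaptation step at each node
            err = d[k, t] - U[k, t] @ W[k]
            psi[k] = W[k] + mu * err * U[k, t] - mu * rho * np.sign(W[k])
        for k in range(N):                       # combination step over neighbors
            W[k] = sum(A[l, k] * psi[l] for l in range(N))
    return W
```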
40,734 | Improved Spectral-Norm Bounds for Clustering | cs.LG | Aiming to unify known results about clustering mixtures of distributions
under separation conditions, Kumar and Kannan[2010] introduced a deterministic
condition for clustering datasets. They showed that this single deterministic
condition encompasses many previously studied clustering assumptions. More
specifically, their proximity condition requires that in the target
$k$-clustering, the projection of a point $x$ onto the line joining its cluster
center $\mu$ and some other center $\mu'$, is a large additive factor closer to
$\mu$ than to $\mu'$. This additive factor can be roughly described as $k$
times the spectral norm of the matrix representing the differences between the
given (known) dataset and the means of the (unknown) target clustering.
Clearly, the proximity condition implies center separation -- the distance
between any two centers must be as large as the above mentioned bound.
In this paper we improve upon the work of Kumar and Kannan along several
axes. First, we weaken the center separation bound by a factor of $\sqrt{k}$,
and secondly we weaken the proximity condition by a factor of $k$. Using these
weaker bounds we still achieve the same guarantees when all points satisfy the
proximity condition. We also achieve better guarantees when only
$(1-\epsilon)$-fraction of the points satisfy the weaker proximity condition.
The bulk of our analysis relies only on center separation under which one can
produce a clustering which (i) has low error, (ii) has low $k$-means cost, and
(iii) has centers very close to the target centers.
Our improved separation condition allows us to match the results of the
Planted Partition Model of McSherry[2001], improve upon the results of
Ostrovsky et al[2006], and improve separation results for mixture of Gaussian
models in a particular setting. | computer science |
40,735 | A Novel Approach for Protein Structure Prediction | cs.LG | The idea of this project is to study the protein structure and sequence
relationship using hidden Markov models and an artificial neural network. In
this context we have assumed two hidden Markov models. In the first model we have
taken protein secondary structures as hidden and protein sequences as observed.
In the second model we have taken protein sequences as hidden and protein
structures as observed. The efficiencies of both hidden Markov models have
been calculated. The results show that the efficiency of the first model is
greater than that of the second. These efficiencies are cross-validated using an
artificial neural network. This signifies the importance of protein secondary
structures as the main hidden controlling factors due to which we observe a
particular amino acid sequence. This also signifies that protein secondary
structure is more conserved in comparison to amino acid sequence. | computer science |
40,736 | Unsupervised adaptation of brain machine interface decoders | cs.LG | The performance of neural decoders can degrade over time due to
nonstationarities in the relationship between neuronal activity and behavior.
In this case, brain-machine interfaces (BMI) require adaptation of their
decoders to maintain high performance across time. One way to achieve this is
by use of periodical calibration phases, during which the BMI system (or an
external human demonstrator) instructs the user to perform certain movements or
behaviors. This approach has two disadvantages: (i) calibration phases
interrupt the autonomous operation of the BMI and (ii) between two calibration
phases the BMI performance might not be stable but continuously decrease. A
better alternative would be that the BMI decoder is able to continuously adapt
in an unsupervised manner during autonomous BMI operation, i.e. without knowing
the movement intentions of the user.
In the present article, we present an efficient method for such unsupervised
training of BMI systems for continuous movement control. The proposed method
utilizes a cost function derived from neuronal recordings, which guides a
learning algorithm to evaluate the decoding parameters. We verify the
performance of our adaptive method by simulating a BMI user with an optimal
feedback control model and its interaction with our adaptive BMI decoder. The
simulation results show that the cost function and the algorithm yield fast and
precise trajectories towards targets at random orientations on a 2-dimensional
computer screen. For initially unknown and non-stationary tuning parameters,
our unsupervised method is still able to generate precise trajectories and to
keep its performance stable in the long term. The algorithm can optionally work
also with neuronal error signals instead or in conjunction with the proposed
unsupervised adaptation. | computer science |
40,737 | ConeRANK: Ranking as Learning Generalized Inequalities | cs.LG | We propose a new data mining approach in ranking documents based on the
concept of cone-based generalized inequalities between vectors. A partial
ordering between two vectors is made with respect to a proper cone and thus
learning the preferences is formulated as learning proper cones. A pairwise
learning-to-rank algorithm (ConeRank) is proposed to learn a non-negative
subspace, formulated as a polyhedral cone, over document-pair differences. The
algorithm is regularized by controlling the `volume' of the cone. The
experimental studies on the latest and largest ranking dataset LETOR 4.0 show
that ConeRank is competitive against other recent ranking approaches. | computer science |
40,738 | Parsimonious Mahalanobis Kernel for the Classification of High
Dimensional Data | cs.NA | The classification of high dimensional data with kernel methods is considered
in this article. Exploiting the emptiness property of high dimensional
spaces, a kernel based on the Mahalanobis distance is proposed. The computation
of the Mahalanobis distance requires the inversion of a covariance matrix. In
high dimensional spaces, the estimated covariance matrix is ill-conditioned and
its inversion is unstable or impossible. Using a parsimonious statistical
model, namely the High Dimensional Discriminant Analysis model, the specific
signal and noise subspaces are estimated for each considered class making the
inverse of the class specific covariance matrix explicit and stable, leading to
the definition of a parsimonious Mahalanobis kernel. A SVM based framework is
used for selecting the hyperparameters of the parsimonious Mahalanobis kernel
by optimizing the so-called radius-margin bound. Experimental results on three
high dimensional data sets show that the proposed kernel is suitable for
classifying high dimensional data, providing better classification accuracies
than the conventional Gaussian kernel. | computer science |
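For intuition, a Mahalanobis-distance kernel with a simple shrinkage-regularized covariance is sketched below; the paper instead stabilizes the inverse through the parsimonious HDDA class models, so the regularization used here is only an assumption.

```python
import numpy as np

def mahalanobis_kernel(X, Y, cov, reg=1e-3, gamma=0.5):
    """Gaussian-type kernel k(x, y) = exp(-gamma * d_M(x, y)^2) where d_M is the
    Mahalanobis distance. A shrinkage term reg*I keeps the covariance invertible
    in high dimensions (a stand-in for the HDDA-based inverse of the paper)."""
    P = np.linalg.inv(cov + reg * np.eye(cov.shape[0]))     # regularized precision matrix
    diff = X[:, None, :] - Y[None, :, :]
    d2 = np.einsum('ijk,kl,ijl->ij', diff, P, diff)         # squared Mahalanobis distances
    return np.exp(-gamma * d2)
```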
40,739 | Projection-free Online Learning | cs.LG | The computational bottleneck in applying online learning to massive data sets
is usually the projection step. We present efficient online learning algorithms
that eschew projections in favor of much more efficient linear optimization
steps using the Frank-Wolfe technique. We obtain a range of regret bounds for
online convex optimization, with better bounds for specific cases such as
stochastic online smooth convex optimization.
Besides the computational advantage, other desirable features of our
algorithms are that they are parameter-free in the stochastic case and produce
sparse decisions. We apply our algorithms to computationally intensive
applications of collaborative filtering, and show the theoretical improvements
to be clearly visible on standard datasets. | computer science |
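The projection-free update amounts to replacing the projection with a linear-optimization (Frank-Wolfe) step; a hedged sketch over an l1 ball (an assumed example domain, whereas the paper's applications involve matrix domains such as the trace-norm ball) follows.

```python
import numpy as np

def frank_wolfe_step(w, grad, t, radius=1.0):
    """One projection-free online update: solve a linear problem over the
    feasible set instead of projecting, then move a small step towards it."""
    i = np.argmax(np.abs(grad))
    v = np.zeros_like(w)
    v[i] = -radius * np.sign(grad[i])   # argmin over the l1 ball of <grad, v>
    eta = 2.0 / (t + 2.0)               # standard Frank-Wolfe step size
    return (1 - eta) * w + eta * v      # sparse by construction: few nonzero coords
```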
40,740 | Provable ICA with Unknown Gaussian Noise, and Implications for Gaussian
Mixtures and Autoencoders | cs.LG | We present a new algorithm for Independent Component Analysis (ICA) which has
provable performance guarantees. In particular, suppose we are given samples of
the form $y = Ax + \eta$ where $A$ is an unknown $n \times n$ matrix and $x$ is
a random variable whose components are independent and have a fourth moment
strictly less than that of a standard Gaussian random variable and $\eta$ is an
$n$-dimensional Gaussian random variable with unknown covariance $\Sigma$: We
give an algorithm that provably recovers $A$ and $\Sigma$ up to an additive
$\epsilon$ and whose running time and sample complexity are polynomial in $n$
and $1 / \epsilon$. To accomplish this, we introduce a novel "quasi-whitening"
step that may be useful in other contexts in which the covariance of Gaussian
noise is not known in advance. We also give a general framework for finding all
local optima of a function (given an oracle for approximately finding just one)
and this is a crucial step in our algorithm, one that has been overlooked in
previous attempts, and allows us to control the accumulation of error when we
find the columns of $A$ one by one via local search. | computer science |
40,741 | Discrete Elastic Inner Vector Spaces with Application in Time Series and
Sequence Mining | cs.LG | This paper proposes a framework dedicated to the construction of what we call
discrete elastic inner product allowing one to embed sets of non-uniformly
sampled multivariate time series or sequences of varying lengths into inner
product space structures. This framework is based on a recursive definition
that covers the case of multiple embedded time elastic dimensions. We prove
that such inner products exist in our general framework and show how a simple
instance of this inner product class operates on some prospective applications,
while generalizing the Euclidean inner product. Classification experiments
on time series and symbolic sequence datasets demonstrate the benefits that we
can expect by embedding time series or sequences into elastic inner spaces
rather than into classical Euclidean spaces. These experiments show good
accuracy when compared to the Euclidean distance or even dynamic programming
algorithms while maintaining a linear algorithmic complexity at exploitation
stage, although a quadratic indexing phase beforehand is required. | computer science |
40,742 | Sequential Document Representations and Simplicial Curves | cs.IR | The popular bag of words assumption represents a document as a histogram of
word occurrences. While computationally efficient, such a representation is
unable to maintain any sequential information. We present a continuous and
differentiable sequential document representation that goes beyond the bag of
words assumption, and yet is efficient and effective. This representation
employs smooth curves in the multinomial simplex to account for sequential
information. We discuss the representation and its geometric properties and
demonstrate its applicability for the task of text classification. | computer science |
40,743 | Distributed Adaptive Networks: A Graphical Evolutionary Game-Theoretic
View | cs.GT | Distributed adaptive filtering has been considered as an effective approach
for data processing and estimation over distributed networks. Most existing
distributed adaptive filtering algorithms focus on designing different
information diffusion rules, regardless of the natural evolutionary
characteristics of a distributed network. In this paper, we study the adaptive
network from the game theoretic perspective and formulate the distributed
adaptive filtering problem as a graphical evolutionary game. With the proposed
formulation, the nodes in the network are regarded as players and the local
combining of estimation information from different neighbors is regarded as
the selection of different strategies. We show that this graphical evolutionary game
framework is very general and can unify the existing adaptive network
algorithms. Based on this framework, as examples, we further propose two
error-aware adaptive filtering algorithms. Moreover, we use graphical
evolutionary game theory to analyze the information diffusion process over the
adaptive networks and evolutionarily stable strategy of the system. Finally,
simulation results are shown to verify the effectiveness of our analysis and
proposed methods. | computer science |
40,744 | Learning Mixtures of Arbitrary Distributions over Large Discrete Domains | cs.LG | We give an algorithm for learning a mixture of {\em unstructured}
distributions. This problem arises in various unsupervised learning scenarios,
for example in learning {\em topic models} from a corpus of documents spanning
several topics. We show how to learn the constituents of a mixture of $k$
arbitrary distributions over a large discrete domain $[n]=\{1,2,\dots,n\}$ and
the mixture weights, using $O(n\polylog n)$ samples. (In the topic-model
learning setting, the mixture constituents correspond to the topic
distributions.) This task is information-theoretically impossible for $k>1$
under the usual sampling process from a mixture distribution. However, there
are situations (such as the above-mentioned topic model case) in which each
sample point consists of several observations from the same mixture
constituent. This number of observations, which we call the {\em "sampling
aperture"}, is a crucial parameter of the problem. We obtain the {\em first}
bounds for this mixture-learning problem {\em without imposing any assumptions
on the mixture constituents.} We show that efficient learning is possible
exactly at the information-theoretically least-possible aperture of $2k-1$.
Thus, we achieve near-optimal dependence on $n$ and optimal aperture. While the
sample-size required by our algorithm depends exponentially on $k$, we prove
that such a dependence is {\em unavoidable} when one considers general
mixtures. A sequence of tools contribute to the algorithm, such as
concentration results for random matrices, dimension reduction, moment
estimations, and sensitivity analysis. | computer science |
40,745 | Mining Techniques in Network Security to Enhance Intrusion Detection
Systems | cs.CR | In intrusion detection systems, classifiers still suffer from several
drawbacks such as data dimensionality and dominance, different network feature
types, and data impact on the classification. In this paper two significant
enhancements are presented to solve these drawbacks. The first enhancement is
an improved feature selection using sequential backward search and information
gain. This, in turn, extracts valuable features that enhance positively the
detection rate and reduce the false positive rate. The second enhancement is
transferring nominal network features to numeric ones by exploiting the
discrete random variable and the probability mass function to solve the problem
of different feature types, the problem of data dominance, and data impact on
the classification. The latter is combined with known normalization methods to
achieve a significant hybrid normalization approach. Finally, an intensive and
comparative study confirms the efficiency of these enhancements and shows
better performance compared to other proposed methods. | computer science |
40,746 | Efficient Gradient Estimation for Motor Control Learning | cs.LG | The task of estimating the gradient of a function in the presence of noise is
central to several forms of reinforcement learning, including policy search
methods. We present two techniques for reducing gradient estimation errors in
the presence of observable input noise applied to the control signal. The first
method extends the idea of a reinforcement baseline by fitting a local linear
model to the function whose gradient is being estimated; we show how to find
the linear model that minimizes the variance of the gradient estimate, and how
to estimate the model from data. The second method improves this further by
discounting components of the gradient vector that have high variance. These
methods are applied to the problem of motor control learning, where actuator
noise has a significant influence on behavior. In particular, we apply the
techniques to learn locally optimal controllers for a dart-throwing task using
a simulated three-link arm; we demonstrate that proposed methods significantly
improve the reward function gradient estimate and, consequently, the learning
curve, over existing methods. | computer science |
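A hedged sketch of the first idea (fit a local linear model and use its slope as the gradient estimate) is given below; the paper's estimator additionally exploits the observed input noise and discounts high-variance gradient components, which this sketch omits.

```python
import numpy as np

def linear_model_gradient(f, x0, n_samples=50, scale=0.05, seed=0):
    """Estimate grad f(x0) from noisy evaluations by fitting f(x) ~ a + b.(x - x0)
    with least squares; b is the gradient estimate and the fitted intercept a
    plays the role of a baseline that reduces estimator variance."""
    rng = np.random.default_rng(seed)
    d = x0.shape[0]
    dx = rng.normal(scale=scale, size=(n_samples, d))   # perturbations of the control
    y = np.array([f(x0 + delta) for delta in dx])       # noisy observed returns
    A = np.hstack([np.ones((n_samples, 1)), dx])        # design matrix [1, dx]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[1:]                                     # slope = gradient estimate
```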
40,747 | Know Your Personalization: Learning Topic level Personalization in
Online Services | cs.LG | Online service platforms (OSPs), such as search engines, news-websites,
ad-providers, etc., serve highly personalized content to the user, based on
the profile extracted from his history with the OSP. Although personalization
(generally) leads to a better user experience, it also raises privacy concerns
for the user---he does not know what is present in his profile and more
importantly, what is being used to personalize content for him. In this paper,
we capture an OSP's personalization for a user in a new data structure called the
personalization vector ($\eta$), which is a weighted vector over a set of
topics, and present techniques to compute it for users of an OSP. Our approach
treats OSPs as black-boxes, and extracts $\eta$ by mining only their output,
specifically, the personalized (for a user) and vanilla (without any user
information) contents served, and the differences between them. We
formulate a new model called Latent Topic Personalization (LTP) that captures
the personalization vector into a learning framework and present efficient
inference algorithms for it. We do extensive experiments for search result
personalization using both data from real Google users and synthetic datasets.
Our results show high accuracy (R-pre = 84%) of LTP in finding personalized
topics. For the Google data, our qualitative results show how LTP can also
identify evidence: queries whose results on a topic with a high $\eta$ value
were re-ranked. Finally, we show how our approach can be used to build a new
privacy evaluation framework focused on end-user privacy on commercial OSPs. | computer science |
40,748 | A metric for software vulnerabilities classification | cs.SE | Vulnerability discovery and exploits detection are two wide areas of study in
software engineering. This preliminary work tries to combine existing methods
with machine learning techniques to define a metric classification of
vulnerable computer programs. First a feature set has been defined and later
two models have been tested against real world vulnerabilities. A relation
between the classifier choice and the features has also been outlined. | computer science |
40,749 | Maximally Informative Observables and Categorical Perception | cs.LG | We formulate the problem of perception in the framework of information
theory, and prove that categorical perception is equivalent to the existence of
an observable that has the maximum possible information on the target of
perception. We call such an observable maximally informative. Regardless of
whether categorical perception is real, maximally informative observables can
form the basis of a theory of perception. We conclude with the implications of
such a theory for the problem of speech perception. | computer science |
40,750 | Fuzzy soft rough K-Means clustering approach for gene expression data | cs.LG | Clustering is one of the widely used data mining techniques for medical
diagnosis. Clustering can be considered as the most important unsupervised
learning technique. Most of the clustering methods group data based on distance
and few methods cluster data based on similarity. The clustering algorithms
classify gene expression data into clusters and the functionally related genes
are grouped together in an efficient manner. The groupings are constructed such
that the degree of relationship is strong among members of the same cluster and
weak among members of different clusters. In this work, we focus on a
similarity relationship among genes with similar expression patterns so that a
consequential and simple analytical decision can be made from the proposed
Fuzzy Soft Rough K-Means algorithm. The algorithm is developed based on Fuzzy
Soft sets and Rough sets. Comparative analysis of the proposed work is made
with benchmark algorithms like K-Means and Rough K-Means, and the efficiency of the
proposed algorithm is illustrated in this work by using various cluster
validity measures such as DB index and Xie-Beni index. | computer science |
40,751 | Soft Set Based Feature Selection Approach for Lung Cancer Images | cs.LG | Lung cancer is the deadliest type of cancer for both men and women. Feature
selection plays a vital role in cancer classification. This paper investigates
the feature selection process in Computed Tomographic (CT) lung cancer images
using soft set theory. We propose a new soft set based unsupervised feature
selection algorithm. Nineteen features are extracted from the segmented lung
images using the gray level co-occurrence matrix (GLCM) and the gray level difference
matrix (GLDM). In this paper, an efficient Unsupervised Soft Set based Quick
Reduct (SSUSQR) algorithm is presented. This method is used to select features
from the data set and compared with existing rough set based unsupervised
feature selection methods. Then K-Means and Self Organizing Map (SOM)
clustering algorithms are used to cluster the data. The performance of the
feature selection algorithms is evaluated based on performance of clustering
techniques. The results show that the proposed method effectively removes
redundant features. | computer science |
40,752 | Reinforcement learning for port-Hamiltonian systems | cs.SY | Passivity-based control (PBC) for port-Hamiltonian systems provides an
intuitive way of achieving stabilization by rendering a system passive with
respect to a desired storage function. However, in most instances the control
law is obtained without any performance considerations and it has to be
calculated by solving a complex partial differential equation (PDE). In order
to address these issues we introduce a reinforcement learning approach into the
energy-balancing passivity-based control (EB-PBC) method, which is a form of
PBC in which the closed-loop energy is equal to the difference between the
stored and supplied energies. We propose a technique to parameterize EB-PBC
that preserves the system's PDE matching conditions, does not require the
specification of a global desired Hamiltonian, includes performance criteria,
and is robust to extra non-linearities such as control input saturation. The
parameters of the control law are found using actor-critic reinforcement
learning, enabling learning near-optimal control policies satisfying a desired
closed-loop energy landscape. The advantages are that near-optimal controllers
can be generated using standard energy shaping techniques and that the
solutions learned can be interpreted in terms of energy shaping and damping
injection, which makes it possible to numerically assess stability using
passivity theory. From the reinforcement learning perspective, our proposal
allows for the class of port-Hamiltonian systems to be incorporated in the
actor-critic framework, speeding up the learning thanks to the resulting
parameterization of the policy. The method has been successfully applied to the
pendulum swing-up problem in simulations and real-life experiments. | computer science |
40,753 | Transfer Learning Using Logistic Regression in Credit Scoring | cs.LG | Credit scoring risk management is a fast-growing field due to consumers'
credit requests. Credit requests of new and existing customers are often
evaluated by classical discrimination rules based on customer information.
However, these kinds of strategies have serious limits and do not take into
account the difference in characteristics between current customers and future
ones. The aim of this paper is to measure creditworthiness for non-customer
borrowers and to model potential risk given a heterogeneous population formed
by borrowers who are customers of the bank and others who are not. We build on previous
work on generalized Gaussian discrimination and transpose it into the
logistic model to bring out efficient discrimination rules for the non-customer
subpopulation.
We therefore obtain several simple models connecting the parameters of
the two logistic models associated respectively with the two subpopulations. The
German credit data set is selected to test and compare these models.
Experimental results show that the use of links between the two subpopulations
improves the classification accuracy for the new loan applicants. | computer science |
40,754 | Fast Solutions to Projective Monotone Linear Complementarity Problems | cs.LG | We present a new interior-point potential-reduction algorithm for solving
monotone linear complementarity problems (LCPs) that have a particular special
structure: their matrix $M\in{\mathbb R}^{n\times n}$ can be decomposed as
$M=\Phi U + \Pi_0$, where the rank of $\Phi$ is $k<n$, and $\Pi_0$ denotes
Euclidean projection onto the nullspace of $\Phi^\top$. We call such LCPs
projective. Our algorithm solves a monotone projective LCP to relative accuracy
$\epsilon$ in $O(\sqrt n \ln(1/\epsilon))$ iterations, with each iteration
requiring $O(nk^2)$ flops. This complexity compares favorably with
interior-point algorithms for general monotone LCPs: these algorithms also
require $O(\sqrt n \ln(1/\epsilon))$ iterations, but each iteration needs to
solve an $n\times n$ system of linear equations, a much higher cost than our
algorithm when $k\ll n$. Our algorithm works even though the solution to a
projective LCP is not restricted to lie in any low-rank subspace. | computer science |
40,755 | A Polynomial Time Algorithm for Lossy Population Recovery | cs.DS | We give a polynomial time algorithm for the lossy population recovery
problem. In this problem, the goal is to approximately learn an unknown
distribution on binary strings of length $n$ from lossy samples: for some
parameter $\mu$ each coordinate of the sample is preserved with probability
$\mu$ and otherwise is replaced by a `?'. The running time and number of
samples needed for our algorithm are polynomial in $n$ and $1/\varepsilon$ for
each fixed $\mu>0$. This improves on the algorithm of Wigderson and Yehudayoff that
runs in quasi-polynomial time for any $\mu > 0$ and on the polynomial time
algorithm of Dvir et al., which was shown to work for $\mu \gtrapprox 0.30$ by
Batman et al. In fact, our algorithm also works in the more general framework
of Batman et al. in which there is no a priori bound on the size of the support
of the distribution. The algorithm we analyze is implicit in previous work; our
main contribution is to analyze the algorithm by showing (via linear
programming duality and connections to complex analysis) that a certain matrix
associated with the problem has a robust local inverse even though its
condition number is exponentially small. A corollary of our result is the first
polynomial time algorithm for learning DNFs in the restriction access model of
Dvir et al. | computer science |
40,756 | Prediction and Clustering in Signed Networks: A Local to Global
Perspective | cs.SI | The study of social networks is a burgeoning research area. However, most
existing work deals with networks that simply encode whether relationships
exist or not. In contrast, relationships in signed networks can be positive
("like", "trust") or negative ("dislike", "distrust"). The theory of social
balance shows that signed networks tend to conform to some local patterns that,
in turn, induce certain global characteristics. In this paper, we exploit both
local as well as global aspects of social balance theory for two fundamental
problems in the analysis of signed networks: sign prediction and clustering.
Motivated by local patterns of social balance, we first propose two families of
sign prediction methods: measures of social imbalance (MOIs), and supervised
learning using high order cycles (HOCs). These methods predict signs of edges
based on triangles and \ell-cycles for relatively small values of \ell.
Interestingly, by examining measures of social imbalance, we show that the
classic Katz measure, which is used widely in unsigned link prediction,
actually has a balance theoretic interpretation when applied to signed
networks. Furthermore, motivated by the global structure of balanced networks,
we propose an effective low rank modeling approach for both sign prediction and
clustering. For the low rank modeling approach, we provide theoretical
performance guarantees via convex relaxations, scale it up to large problem
sizes using a matrix factorization based algorithm, and provide extensive
experimental validation including comparisons with local approaches. Our
experimental results indicate that, by adopting a more global viewpoint of
balance structure, we get significant performance and computational gains in
prediction and clustering tasks on signed networks. Our work therefore
highlights the usefulness of the global aspect of balance theory for the
analysis of signed networks. | computer science |
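
As a small illustration of local balance, the sketch below counts the fraction of balanced triangles in a signed adjacency matrix; this is a basic balance check, not one of the MOI/HOC predictors or the low-rank model proposed in the abstract, and the example graph is made up.

```python
# A minimal sketch: fraction of balanced triangles in a signed network,
# i.e. triangles whose product of edge signs is positive.
import numpy as np
from itertools import combinations

def balanced_triangle_fraction(A):
    """A: symmetric signed adjacency matrix with entries in {-1, 0, +1}."""
    n = A.shape[0]
    balanced = total = 0
    for i, j, k in combinations(range(n), 3):
        if A[i, j] and A[j, k] and A[i, k]:      # a closed triangle exists
            total += 1
            if A[i, j] * A[j, k] * A[i, k] > 0:  # even number of negative edges
                balanced += 1
    return balanced / total if total else float("nan")

A = np.array([[ 0,  1, -1,  1],
              [ 1,  0, -1,  1],
              [-1, -1,  0, -1],
              [ 1,  1, -1,  0]])
print("fraction of balanced triangles:", balanced_triangle_fraction(A))
```
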
40,757 | The adaptive Gril estimator with a diverging number of parameters | stat.ME | We consider the problem of variable selection and estimation in the linear
regression model in situations where the number of parameters diverges with the
sample size. We propose the adaptive Generalized Ridge-Lasso (\mbox{AdaGril})
which is an extension of the adaptive Elastic Net. AdaGril incorporates
information redundancy among correlated variables for model selection and
estimation. It combines the strengths of the quadratic regularization and the
adaptively weighted Lasso shrinkage. In this paper, we highlight the grouped
selection property for AdaCnet method (one type of AdaGril) in the equal
correlation case. Under weak conditions, we establish the oracle property of
AdaGril, which ensures its optimal large-sample performance when the dimension is high.
Consequently, it achieves both goals of handling the problem of collinearity in
high dimension and enjoys the oracle property. Moreover, we show that AdaGril
estimator achieves a Sparsity Inequality, i.e., a bound in terms of the number
of non-zero components of the 'true' regression coefficient. This bound is
obtained under a similar weak Restricted Eigenvalue (RE) condition used for
Lasso. Simulation studies show that some particular cases of AdaGril
outperform its competitors. | computer science |
40,758 | ML4PG in Computer Algebra verification | cs.LO | ML4PG is a machine-learning extension that provides statistical proof hints
during the process of Coq/SSReflect proof development. In this paper, we use
ML4PG to find proof patterns in the CoqEAL library -- a library that was
devised to verify the correctness of Computer Algebra algorithms. In
particular, we use ML4PG to help us in the formalisation of an efficient
algorithm to compute the inverse of triangular matrices. | computer science |
40,759 | Source Separation using Regularized NMF with MMSE Estimates under GMM
Priors with Online Learning for The Uncertainties | cs.LG | We propose a new method to enforce priors on the solution of the nonnegative
matrix factorization (NMF). The proposed algorithm can be used for denoising or
single-channel source separation (SCSS) applications. The NMF solution is
guided to follow the Minimum Mean Square Error (MMSE) estimates under Gaussian
mixture prior models (GMM) for the source signal. In SCSS applications, the
spectra of the observed mixed signal are decomposed as a weighted linear
combination of trained basis vectors for each source using NMF. In this work,
the NMF decomposition weight matrices are treated as an image distorted by a
distortion operator, which is learned directly from the observed signals. The
MMSE estimate of the weight matrix under a GMM prior and a log-normal distribution
for the distortion is then found to improve the NMF decomposition results. The
MMSE estimate is embedded within the optimization objective to form a novel
regularized NMF cost function. The corresponding update rules for the new
objectives are derived in this paper. Experimental results show that the
proposed regularized NMF algorithm improves the source separation performance
compared with using NMF without prior or with other prior models. | computer science |
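
A hedged sketch of the regularization idea follows: NMF multiplicative updates with a quadratic penalty pulling the weight matrix toward an externally supplied prior estimate. The MMSE-under-GMM estimate and the learned distortion operator are not implemented; `H_prior` simply stands in for them, and the update rule shown is the usual positive/negative split rather than the paper's derived rule.

```python
# A minimal sketch of regularized NMF: minimize ||V - W H||_F^2 + lam ||H - H_prior||_F^2
# over H >= 0 with the basis W held fixed, as in supervised source separation.
import numpy as np

def regularized_nmf(V, W, H, H_prior, lam=0.1, n_iter=200, eps=1e-9):
    for _ in range(n_iter):
        numer = W.T @ V + lam * H_prior
        denom = W.T @ W @ H + lam * H + eps
        H *= numer / denom                       # multiplicative update keeps H >= 0
    return H

rng = np.random.default_rng(0)
W = np.abs(rng.standard_normal((64, 8)))         # trained basis spectra (fixed)
H_true = np.abs(rng.standard_normal((8, 100)))
V = W @ H_true                                   # observed magnitude spectrogram
H_prior = H_true + 0.1 * np.abs(rng.standard_normal(H_true.shape))   # stand-in prior

H = regularized_nmf(V, W, np.abs(rng.standard_normal((8, 100))), H_prior)
print("relative reconstruction error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```
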
40,760 | Fast Feature Reduction in intrusion detection datasets | cs.CR | In most intrusion detection systems (IDS), the system tries to learn the
characteristics of different types of attacks by analyzing packets sent or
received over the network. These packets have a lot of features, but not all of
them need to be analyzed to detect a specific type of attack. Detection
speed and computational cost are also vital concerns here, because the datasets
in these types of problems are typically very large. In this paper we propose a
very simple and fast feature selection method that eliminates features carrying
no helpful information, resulting in faster learning through the omission of
redundant features. We compared our proposed method with three of the most
successful similarity-based feature selection algorithms: Correlation
Coefficient, Least Square Regression Error and Maximal Information Compression
Index. We then used the features recommended by each of these algorithms in
two popular classifiers, a Bayes classifier and a KNN classifier, to measure the
quality of the recommendations. Experimental results show that although the
proposed method does not outperform the evaluated algorithms by a large margin in
accuracy, it has a substantial advantage in computational cost. | computer science |
40,761 | Bandits with Knapsacks | cs.DS | Multi-armed bandit problems are the predominant theoretical model of
exploration-exploitation tradeoffs in learning, and they have countless
applications ranging from medical trials, to communication networks, to Web
search and advertising. In many of these application domains the learner may be
constrained by one or more supply (or budget) limits, in addition to the
customary limitation on the time horizon. The literature lacks a general model
encompassing these sorts of problems. We introduce such a model, called
"bandits with knapsacks", that combines aspects of stochastic integer
programming with online learning. A distinctive feature of our problem, in
comparison to the existing regret-minimization literature, is that the optimal
policy for a given latent distribution may significantly outperform the policy
that plays the optimal fixed arm. Consequently, achieving sublinear regret in
the bandits-with-knapsacks problem is significantly more challenging than in
conventional bandit problems.
We present two algorithms whose reward is close to the information-theoretic
optimum: one is based on a novel "balanced exploration" paradigm, while the
other is a primal-dual algorithm that uses multiplicative updates. Further, we
prove that the regret achieved by both algorithms is optimal up to
polylogarithmic factors. We illustrate the generality of the problem by
presenting applications in a number of different domains including electronic
commerce, routing, and scheduling. As one example of a concrete application, we
consider the problem of dynamic posted pricing with limited supply and obtain
the first algorithm whose regret, with respect to the optimal dynamic policy,
is sublinear in the supply. | computer science |
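
To make the setting concrete, the sketch below simulates a single-resource bandits-with-knapsacks instance and runs a simple UCB-style rule on the empirical reward-per-cost ratio until the budget is exhausted; this is purely illustrative and is not one of the two algorithms proposed in the abstract, and all arm parameters are made up.

```python
# A minimal sketch of the setting: each pull yields a random reward and consumes
# a random amount of one resource; play stops when the budget runs out.
import numpy as np

rng = np.random.default_rng(0)
reward_means = np.array([0.3, 0.5, 0.7])
cost_means = np.array([0.2, 0.5, 0.9])
budget = 100.0

n_arms = len(reward_means)
pulls = np.ones(n_arms)                          # pretend each arm was pulled once
reward_sum = reward_means.copy()                 # optimistic initial estimates
cost_sum = cost_means.copy()
spent = total_reward = 0.0
t = n_arms

while spent < budget:
    t += 1
    ratio = reward_sum / np.maximum(cost_sum, 1e-9)
    ucb = ratio + np.sqrt(2 * np.log(t) / pulls)         # reward-per-cost UCB index
    arm = int(np.argmax(ucb))
    r = float(rng.random() < reward_means[arm])          # Bernoulli reward
    c = rng.uniform(0, 2 * cost_means[arm])              # random resource consumption
    pulls[arm] += 1
    reward_sum[arm] += r
    cost_sum[arm] += c
    spent += c
    total_reward += r

print("total reward before the budget ran out:", total_reward)
```
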
40,762 | HRF estimation improves sensitivity of fMRI encoding and decoding models | cs.LG | Extracting activation patterns from functional Magnetic Resonance Images
(fMRI) datasets remains challenging in rapid-event designs due to the inherent
delay of the blood oxygen level-dependent (BOLD) signal. The general linear model
(GLM) allows one to estimate the activation from a design matrix and a fixed
hemodynamic response function (HRF). However, the HRF is known to vary
substantially between subjects and brain regions. In this paper, we propose a
model for jointly estimating the hemodynamic response function (HRF) and the
activation patterns via a low-rank representation of task effects. This model is
based on the linearity assumption behind the GLM and can be computed using
standard gradient-based solvers. We use the activation patterns computed by our
model as input data for encoding and decoding studies and report performance
improvement in both settings. | computer science |
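
For context, the fixed-HRF baseline that the abstract improves on can be sketched as follows: a canonical double-gamma HRF is convolved with an event regressor and activation weights are fit by ordinary least squares. The joint HRF/activation estimation itself is not shown, and the HRF parameters and data below are illustrative assumptions.

```python
# A minimal sketch of GLM activation estimation with a fixed, canonical HRF.
import numpy as np
from scipy.stats import gamma

tr, n_scans = 2.0, 200
t = np.arange(0, 32, tr)
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0        # double-gamma shape (illustrative)
hrf /= hrf.sum()

events = np.zeros(n_scans)
events[::20] = 1.0                                    # one event every 20 scans

regressor = np.convolve(events, hrf)[:n_scans]
X = np.column_stack([regressor, np.ones(n_scans)])    # design: task + intercept

rng = np.random.default_rng(0)
bold = 3.0 * regressor + 0.5 + 0.1 * rng.standard_normal(n_scans)   # synthetic voxel

beta, *_ = np.linalg.lstsq(X, bold, rcond=None)       # ordinary least squares fit
print("estimated activation:", beta[0])
```
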
40,763 | Real Time Bid Optimization with Smooth Budget Delivery in Online
Advertising | cs.GT | Today, billions of display ad impressions are purchased on a daily basis
through a public auction hosted by real time bidding (RTB) exchanges. A
decision has to be made for advertisers to submit a bid for each selected RTB
ad request in milliseconds. Restricted by the budget, the goal is to buy a set
of ad impressions to reach as many targeted users as possible. A desired action
(conversion), advertiser specific, includes purchasing a product, filling out a
form, signing up for emails, etc. In addition, advertisers typically prefer to
spend their budget smoothly over time in order to reach a wider range of
audience accessible throughout a day and have a sustainable impact. However,
since the conversions occur rarely and the occurrence feedback is normally
delayed, it is very challenging to achieve both budget and performance goals at
the same time. In this paper, we present an online approach to the smooth
budget delivery while optimizing for the conversion performance. Our algorithm
tries to select high quality impressions and adjust the bid price based on the
prior performance distribution in an adaptive manner by distributing the budget
optimally across time. Our experimental results from real advertising campaigns
demonstrate the effectiveness of our proposed approach. | computer science |
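
The pacing intuition can be sketched as a bid multiplier that compares actual spend with the spend implied by even delivery; this is a simplified stand-in for the adaptive algorithm described above, with all numbers and the `sensitivity` parameter being illustrative assumptions.

```python
# A minimal sketch of smooth budget pacing: scale the base bid by how far actual
# spend is ahead of or behind an even-delivery spend curve.
def paced_bid(base_bid, budget, spent, elapsed_fraction, sensitivity=1.0):
    """elapsed_fraction: fraction of the day that has passed, in [0, 1]."""
    if spent >= budget:
        return 0.0                                   # budget exhausted: stop bidding
    target_spend = budget * elapsed_fraction         # smooth-delivery target
    # Under-delivery raises the bid, over-delivery lowers it.
    multiplier = 1.0 + sensitivity * (target_spend - spent) / budget
    return max(0.0, base_bid * multiplier)

print(paced_bid(base_bid=1.2, budget=1000.0, spent=300.0, elapsed_fraction=0.5))
```
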
40,764 | Scalable Audience Reach Estimation in Real-time Online Advertising | cs.LG | Online advertising has been introduced as one of the most efficient methods
of advertising throughout the recent years. Yet, advertisers are concerned
about the efficiency of their online advertising campaigns and consequently,
would like to restrict their ad impressions to certain websites and/or certain
groups of audience. These restrictions, known as targeting criteria, limit the
reachability for better performance. This trade-off between reachability and
performance illustrates a need for a forecasting system that can quickly
predict/estimate (with good accuracy) this trade-off. Designing such a system
is challenging due to (a) the huge amount of data to process, and, (b) the need
for fast and accurate estimates. In this paper, we propose a distributed fault
tolerant system that can generate such estimates fast with good accuracy. The
main idea is to keep a small representative sample in memory across multiple
machines and formulate the forecasting problem as queries against the sample.
The key challenge is to find the best strata across the past data, perform
multivariate stratified sampling while ensuring fuzzy fall-back to cover the
small minorities. Our results show a significant improvement over the uniform
and simple stratified sampling strategies which are currently widely used in
the industry. | computer science |
40,765 | Qualitative detection of oil adulteration with machine learning
approaches | cs.CE | The study focused on machine learning analysis approaches to qualitatively identify the
adulteration of 9 kinds of edible oil and answered the following
three questions: Is the oil sample adulterated? What does it consist of? What is
the main ingredient of the adulteration oil? After extracting the
high-performance liquid chromatography (HPLC) data on triglyceride from 370 oil
samples, we applied the adaptive boosting with multi-class Hamming loss
(AdaBoost.MH) to distinguish the oil adulteration in contrast with the support
vector machine (SVM). Further, we regarded the adulterant oil and the pure oil
samples as ones with multiple labels and with only one label, respectively.
Then multi-label AdaBoost.MH and multi-label learning vector quantization
(ML-LVQ) models were built to determine the ingredients and their relative ratios
in the adulteration oil. The experimental results on six measures show that
ML-LVQ achieves better performance than multi-label AdaBoost.MH. | computer science |
40,766 | Transfer Learning for Content-Based Recommender Systems using Tree
Matching | cs.LG | In this paper we present a new approach to content-based transfer learning
for solving the data sparsity problem in cases when the users' preferences in
the target domain are either scarce or unavailable, but the necessary
information on the preferences exists in another domain. We show that training
a system to use such information across domains can produce better performance.
Specifically, we represent users' behavior patterns based on topological graph
structures. Each behavior pattern represents the behavior of a set of users,
when the users' behavior is defined as the items they rated and the items'
rating values. In the next step we find a correlation between behavior patterns
in the source domain and behavior patterns in the target domain. This mapping
is considered a bridge between the two domains. Based on the correlation and
content-attributes of the items, we train a machine learning model to predict
users' ratings in the target domain. When we compare our approach to the
popularity approach and KNN-cross-domain on a real world dataset, the results
show that, on average, in 83% of the cases our approach outperforms both
methods. | computer science |
40,767 | Multi-View Learning for Web Spam Detection | cs.IR | Spam pages are designed to maliciously appear among the top search results by
excessive usage of popular terms. Therefore, spam pages should be removed using
an effective and efficient spam detection system. Previous methods for web spam
classification used several features from various information sources (page
contents, web graph, access logs, etc.) to detect web spam. In this paper, we
follow a page-level classification approach to build fast and scalable spam
filters. We show that each web page can be classified with satisfactory accuracy
using only its own HTML content. In order to design a multi-view classification
system, we used state-of-the-art spam classification methods with distinct
feature sets (views) as the base classifiers. Then, a fusion model is learned
to combine the output of the base classifiers and make final prediction.
Results show that multi-view learning significantly improves the classification
performance, namely AUC by 22%, while providing linear speedup for parallel
execution. | computer science |
40,768 | Generalized Centroid Estimators in Bioinformatics | cs.LG | In a number of estimation problems in bioinformatics, accuracy measures of
the target problem are usually given, and it is important to design estimators
that are suitable to those accuracy measures. However, there is often a
discrepancy between an employed estimator and a given accuracy measure of the
problem. In this study, we introduce a general class of efficient estimators
for estimation problems on high-dimensional binary spaces, which represent many
fundamental problems in bioinformatics. Theoretical analysis reveals that the
proposed estimators generally fit with commonly-used accuracy measures (e.g.
sensitivity, PPV, MCC and F-score), can be computed efficiently in
many cases, and cover a wide range of problems in bioinformatics from the
viewpoint of the principle of maximum expected accuracy (MEA). It is also shown
that some important algorithms in bioinformatics can be interpreted in a
unified manner. Not only does the concept presented in this paper give a useful
framework for designing MEA-based estimators, but it is also highly extendable and
sheds new light on many problems in bioinformatics. | computer science |
40,769 | Robustness of Random Forest-based gene selection methods | cs.LG | Gene selection is an important part of microarray data analysis because it
provides information that can lead to a better mechanistic understanding of an
investigated phenomenon. At the same time, gene selection is very difficult
because of the noisy nature of microarray data. As a consequence, gene
selection is often performed with machine learning methods. The Random Forest
method is particularly well suited for this purpose. In this work, four
state-of-the-art Random Forest-based feature selection methods were compared in
a gene selection context. The analysis focused on the stability of selection
because, although it is necessary for determining the significance of results,
it is often ignored in similar studies.
The comparison of post-selection accuracy in the validation of Random Forest
classifiers revealed that all investigated methods were equivalent in this
context. However, the methods substantially differed with respect to the number
of selected genes and the stability of selection. Of the analysed methods, the
Boruta algorithm predicted the most genes as potentially important.
The post-selection classifier error rate, which is a frequently used measure,
was found to be a potentially deceptive measure of gene selection quality. When
the number of consistently selected genes was considered, the Boruta algorithm
was clearly the best. Although it was also the most computationally intensive
method, the Boruta algorithm's computational demands could be reduced to levels
comparable to those of other algorithms by replacing the Random Forest
importance with a comparable measure from Random Ferns (a similar but
simplified classifier). Despite their design assumptions, the minimal optimal
selection methods were found to select a high fraction of false positives. | computer science |
40,770 | Power to the Points: Validating Data Memberships in Clusterings | cs.LG | A clustering is an implicit assignment of labels of points, based on
proximity to other points. It is these labels that are then used for downstream
analysis (either focusing on individual clusters, or identifying
representatives of clusters and so on). Thus, in order to trust a clustering as
a first step in exploratory data analysis, we must trust the labels assigned to
individual data. Without supervision, how can we validate this assignment? In
this paper, we present a method to attach affinity scores to the implicit
labels of individual points in a clustering. The affinity scores capture the
confidence level of the cluster that claims to "own" the point. This method is
very general: it can be used with clusterings derived from Euclidean data,
kernelized data, or even data derived from information spaces. It smoothly
incorporates importance functions on clusters, allowing us to weight different
clusters differently. It is also efficient: assigning an affinity score to a
point depends only polynomially on the number of clusters and is independent of
the number of points in the data. The dimensionality of the underlying space
only appears in preprocessing. We demonstrate the value of our approach with an
experimental study that illustrates the use of these scores in different data
analysis tasks, as well as the efficiency and flexibility of the method. We
also demonstrate useful visualizations of these scores; these might prove
useful within an interactive analytics framework. | computer science |
40,771 | Zero-sum repeated games: Counterexamples to the existence of the
asymptotic value and the conjecture
$\operatorname{maxmin}=\operatorname{lim}v_n$ | math.OC | Mertens [In Proceedings of the International Congress of Mathematicians
(Berkeley, Calif., 1986) (1987) 1528-1577 Amer. Math. Soc.] proposed two
general conjectures about repeated games: the first one is that, in any
two-person zero-sum repeated game, the asymptotic value exists, and the second
one is that, when Player 1 is more informed than Player 2, in the long run
Player 1 is able to guarantee the asymptotic value. We disprove these two
long-standing conjectures by providing an example of a zero-sum repeated game
with public signals and perfect observation of the actions, where the value of
the $\lambda$-discounted game does not converge when $\lambda$ goes to 0. The
aforementioned example involves seven states, two actions and two signals for
each player. Remarkably, players observe the payoffs, and play in turn. | computer science |
40,772 | Supervised Feature Selection for Diagnosis of Coronary Artery Disease
Based on Genetic Algorithm | cs.LG | Feature Selection (FS) has become the focus of much research in decision
support system areas for which data sets with a tremendous number of variables
are analyzed. In this paper we present a new method for the diagnosis of
Coronary Artery Disease (CAD) founded on a Genetic Algorithm (GA) wrapped Naive
Bayes (BN) based FS. Basically, the CAD dataset contains two classes defined with
13 features. In the GA-BN algorithm, GA generates in each iteration a subset of
attributes that is evaluated using the BN in the second step of the
selection procedure. The final set of attributes contains the most relevant
feature model that increases the accuracy. The algorithm in this case produces
85.50% classification accuracy in the diagnosis of CAD. The performance of the
algorithm is then compared with the use of Support Vector Machine (SVM),
MultiLayer Perceptron (MLP) and the C4.5 decision tree algorithm. The
classification accuracies of those algorithms are 83.5%, 83.16% and
80.85%, respectively. The GA-wrapped BN algorithm is also compared
with other FS algorithms. The obtained results show very promising
outcomes for the diagnosis of CAD. | computer science |
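
A minimal sketch of the GA-wrapped Naive Bayes idea is given below, assuming a generic 13-feature placeholder dataset in place of the actual CAD data; the GA operators, population sizes and parameters are illustrative and do not reproduce the paper's configuration.

```python
# A minimal sketch of wrapper feature selection: a GA evolves binary feature masks,
# each scored by the cross-validated accuracy of a Naive Bayes classifier.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 13))                    # placeholder data, 13 features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.standard_normal(300) > 0).astype(int)

def fitness(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(GaussianNB(), X[:, mask], y, cv=5).mean()

pop = rng.random((20, 13)) < 0.5                      # population of feature masks
for _ in range(30):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]           # keep the fittest half
    children = []
    for _ in range(10):
        a, b = parents[rng.choice(10, 2, replace=False)]
        cut = rng.integers(1, 13)
        child = np.concatenate([a[:cut], b[cut:]])    # one-point crossover
        child ^= rng.random(13) < 0.05                # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best), "accuracy:", fitness(best))
```
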
40,773 | Speeding-Up Convergence via Sequential Subspace Optimization: Current
State and Future Directions | cs.NA | This is an overview paper written in the style of a research proposal. In recent
years we introduced a general framework for large-scale unconstrained
optimization -- Sequential Subspace Optimization (SESOP) and demonstrated its
usefulness for sparsity-based signal/image denoising, deconvolution,
compressive sensing, computed tomography, diffraction imaging, support vector
machines. We explored its combination with Parallel Coordinate Descent and
Separable Surrogate Function methods, obtaining state-of-the-art results in the
above-mentioned areas. There are several methods that are faster than plain
SESOP under specific conditions: Trust region Newton method - for problems with
easily invertible Hessian matrix; Truncated Newton method - when fast
multiplication by Hessian is available; Stochastic optimization methods - for
problems with large stochastic-type data; Multigrid methods - for problems with
nested multilevel structure. Each of these methods can be further improved by
merging it with SESOP. One can also accelerate the Augmented Lagrangian method for
constrained optimization problems and Alternating Direction Method of
Multipliers for problems with separable objective function and non-separable
constraints. | computer science |
40,774 | Robust Hierarchical Clustering | cs.LG | One of the most widely used techniques for data clustering is agglomerative
clustering. Such algorithms have been long used across many different fields
ranging from computational biology to social sciences to computer vision in
part because their output is easy to interpret. Unfortunately, it is well
known that many of the classic agglomerative clustering algorithms
are not robust to noise. In this paper we propose and analyze a new robust
algorithm for bottom-up agglomerative clustering. We show that our algorithm
can be used to cluster accurately in cases where the data satisfies a number of
natural properties and where the traditional agglomerative algorithms fail. We
also show how to adapt our algorithm to the inductive setting where our given
data is only a small random sample of the entire data set. Experimental
evaluations on synthetic and real world data sets show that our algorithm
achieves better performance than other hierarchical algorithms in the presence
of noise. | computer science |
40,775 | Modeling Attractiveness and Multiple Clicks in Sponsored Search Results | cs.IR | Click models are an important tool for leveraging user feedback, and are used
by commercial search engines for surfacing relevant search results. However,
existing click models are lacking in two aspects. First, they do not share
information across search results when computing attractiveness. Second, they
assume that users interact with the search results sequentially. Based on our
analysis of the click logs of a commercial search engine, we observe that the
sequential scan assumption does not always hold, especially for sponsored
search results. To overcome the above two limitations, we propose a new click
model. Our key insight is that sharing information across search results helps
in identifying important words or key-phrases which can then be used to
accurately compute attractiveness of a search result. Furthermore, we argue
that the click probability of a position as well as its attractiveness changes
during a user session and depends on the user's past click experience. Our
model seamlessly incorporates the effect of externalities (quality of other
search results displayed in response to a user query), user fatigue, as well as
pre and post-click relevance of a sponsored search result. We propose an
efficient one-pass inference scheme and empirically evaluate the performance of
our model via extensive experiments using the click logs of a large commercial
search engine. | computer science |
40,776 | Least Squares Policy Iteration with Instrumental Variables vs. Direct
Policy Search: Comparison Against Optimal Benchmarks Using Energy Storage | math.OC | This paper studies approximate policy iteration (API) methods which use
least-squares Bellman error minimization for policy evaluation. We address
several of its enhancements, namely, Bellman error minimization using
instrumental variables, least-squares projected Bellman error minimization, and
projected Bellman error minimization using instrumental variables. We prove
that for a general discrete-time stochastic control problem, Bellman error
minimization using instrumental variables is equivalent to both variants of
projected Bellman error minimization. An alternative to these API methods is
direct policy search based on knowledge gradient. The practical performance of
these three approximate dynamic programming methods is then investigated in
the context of an application in energy storage, integrated with an
intermittent wind energy supply to fully serve a stochastic time-varying
electricity demand. We create a library of test problems using real-world data
and apply value iteration to find their optimal policies. These benchmarks are
then used to compare the developed policies. Our analysis indicates that API
with instrumental variables Bellman error minimization prominently outperforms
API with least-squares Bellman error minimization. However, these approaches
underperform our direct policy search implementation. | computer science |
40,777 | PSMACA: An Automated Protein Structure Prediction Using MACA (Multiple
Attractor Cellular Automata) | cs.CE | Protein structure prediction from amino acid sequences has gained
remarkable attention in recent years. Even though there are some prediction
techniques addressing this problem, the approximate accuracy in predicting the
protein structure is close to 75%. An automated procedure was developed with MACA
(Multiple Attractor Cellular Automata) for predicting the structure of the
protein. Most of the existing approaches are sequential: they classify the
input into four major classes and are designed for similar sequences.
PSMACA is designed to identify ten classes from sequences that share
twilight zone similarity and identity with the training sequences. This method
also predicts three states (helix, strand, and coil) for the structure. Our
comprehensive design considers 10 feature selection methods and 4 classifiers
to develop MACA (Multiple Attractor Cellular Automata) based classifiers that
are built for each of the ten classes. Testing the proposed classifier
on twilight-zone and 1-high-similarity benchmark datasets against over three
dozen modern competing predictors shows that PSMACA provides the best
overall accuracy, ranging between 77% and 88.7% depending on the dataset. | computer science |
40,778 | Use Case Point Approach Based Software Effort Estimation using Various
Support Vector Regression Kernel Methods | cs.SE | The job of software effort estimation is a critical one in the early stages
of the software development life cycle when the details of requirements are
usually not clearly identified. Various optimization techniques help in
improving the accuracy of effort estimation. The Support Vector Regression
(SVR) is one of several different soft-computing techniques that help in
getting optimal estimated values. The idea of SVR is based upon the computation
of a linear regression function in a high dimensional feature space where the
input data are mapped via a nonlinear function. Further, the SVR kernel methods
can be applied in transforming the input data and then based on these
transformations, an optimal boundary between the possible outputs can be
obtained. The main objective of the research work carried out in this paper is
to estimate the software effort using use case point approach. The use case
point approach relies on the use case diagram to estimate the size and effort
of software projects. Then, an attempt has been made to optimize the results
obtained from use case point analysis using various SVR kernel methods to
achieve better prediction accuracy. | computer science |
40,779 | Infinite Mixed Membership Matrix Factorization | cs.LG | Rating and recommendation systems have become a popular application area for
applying a suite of machine learning techniques. Current approaches rely
primarily on probabilistic interpretations and extensions of matrix
factorization, which factorizes a user-item ratings matrix into latent user and
item vectors. Most of these methods fail to model significant variations in
item ratings from otherwise similar users, a phenomenon known as the "Napoleon
Dynamite" effect. Recent efforts have addressed this problem by adding a
contextual bias term to the rating, which captures the mood under which a user
rates an item or the context in which an item is rated by a user. In this work,
we extend this model in a nonparametric sense by learning the optimal number of
moods or contexts from the data, and derive Gibbs sampling inference procedures
for our model. We evaluate our approach on the MovieLens 1M dataset, and show
significant improvements over the optimal parametric baseline, more than twice
the improvements previously encountered for this task. We also extract and
evaluate a DBLP dataset, wherein we predict the number of papers co-authored by
two authors, and present improvements over the parametric baseline on this
alternative domain as well. | computer science |
40,780 | A Multiagent Reinforcement Learning Algorithm with Non-linear Dynamics | cs.LG | Several multiagent reinforcement learning (MARL) algorithms have been
proposed to optimize agents' decisions. Due to the complexity of the problem,
the majority of the previously developed MARL algorithms assumed agents either
had some knowledge of the underlying game (such as Nash equilibria) and/or
observed other agents' actions and the rewards they received.
We introduce a new MARL algorithm called the Weighted Policy Learner (WPL),
which allows agents to reach a Nash Equilibrium (NE) in benchmark
2-player-2-action games with minimum knowledge. Using WPL, the only feedback an
agent needs is its own local reward (the agent does not observe other agents'
actions or rewards). Furthermore, WPL does not assume that agents know the
underlying game or the corresponding Nash Equilibrium a priori. We
experimentally show that our algorithm converges in benchmark
two-player-two-action games. We also show that our algorithm converges in the
challenging Shapley's game, where previous MARL algorithms failed to converge
without knowing the underlying game or the NE. Furthermore, we show that WPL
outperforms the state-of-the-art algorithms in a more realistic setting of 100
agents interacting and learning concurrently.
An important aspect of understanding the behavior of a MARL algorithm is
analyzing the dynamics of the algorithm: how the policies of multiple learning
agents evolve over time as agents interact with one another. Such an analysis
not only verifies whether agents using a given MARL algorithm will eventually
converge, but also reveals the behavior of the MARL algorithm prior to
convergence. We analyze our algorithm in two-player-two-action games and show
that symbolically proving WPL's convergence is difficult, because of the
non-linear nature of WPL's dynamics, unlike previous MARL algorithms that had
either linear or piece-wise-linear dynamics. Instead, we numerically solve WPL's
dynamics differential equations and compare the solution to the dynamics of
previous MARL algorithms. | computer science |
40,781 | RoxyBot-06: Stochastic Prediction and Optimization in TAC Travel | cs.GT | In this paper, we describe our autonomous bidding agent, RoxyBot, who emerged
victorious in the travel division of the 2006 Trading Agent Competition in a
photo finish. At a high level, the design of many successful trading agents can
be summarized as follows: (i) price prediction: build a model of market prices;
and (ii) optimization: solve for an approximately optimal set of bids, given
this model. To predict, RoxyBot builds a stochastic model of market prices by
simulating simultaneous ascending auctions. To optimize, RoxyBot relies on the
sample average approximation method, a stochastic optimization technique. | computer science |
40,782 | An Active Learning Approach for Jointly Estimating Worker Performance
and Annotation Reliability with Crowdsourced Data | cs.LG | Crowdsourcing platforms offer a practical solution to the problem of
affordably annotating large datasets for training supervised classifiers.
Unfortunately, poor worker performance frequently threatens to compromise
annotation reliability, and requesting multiple labels for every instance can
lead to large cost increases without guaranteeing good results. Minimizing the
required training samples using an active learning selection procedure reduces
the labeling requirement but can jeopardize classifier training by focusing on
erroneous annotations. This paper presents an active learning approach in which
worker performance, task difficulty, and annotation reliability are jointly
estimated and used to compute the risk function guiding the sample selection
procedure. We demonstrate that the proposed approach, which employs active
learning with Bayesian networks, significantly improves training accuracy and
correctly ranks the expertise of unknown labelers in the presence of annotation
noise. | computer science |
40,783 | Policy Invariance under Reward Transformations for General-Sum
Stochastic Games | cs.GT | We extend the potential-based shaping method from Markov decision processes
to multi-player general-sum stochastic games. We prove that the Nash equilibria
in a stochastic game remain unchanged after potential-based shaping is applied
to the environment. The property of policy invariance provides a possible way
of speeding convergence when learning to play a stochastic game. | computer science |
40,784 | Towards the selection of patients requiring ICD implantation by
automatic classification from Holter monitoring indices | cs.LG | The purpose of this study is to optimize the selection of prophylactic
cardioverter defibrillator implantation candidates. Currently, the main
criterion for implantation is a low Left Ventricular Ejection Fraction (LVEF)
whose specificity is relatively poor. We designed two classifiers aimed to
predict, from long term ECG recordings (Holter), whether a low-LVEF patient is
likely or not to undergo ventricular arrhythmia in the next six months. One
classifier is a single hidden layer neural network whose variables are the most
relevant features extracted from Holter recordings, and the other classifier
has a structure that capitalizes on the physiological decomposition of the
arrhythmogenic factors into three disjoint groups: the myocardial substrate,
the triggers and the autonomic nervous system (ANS). In this ad hoc network,
the features were assigned to each group; one neural network classifier per
group was designed and its complexity was optimized. The outputs of the
classifiers were fed to a single neuron that provided the required probability
estimate. The latter was thresholded for final discrimination. A dataset
composed of 186 pre-implantation 30-mn Holter recordings of patients equipped
with an implantable cardioverter defibrillator (ICD) in primary prevention was
used in order to design and test this classifier. 44 out of 186 patients
underwent at least one treated ventricular arrhythmia during the six-month
follow-up period. Performances of the designed classifier were evaluated using
a cross-test strategy that consists in splitting the database into several
combinations of a training set and a test set. The average arrhythmia
prediction performances of the ad-hoc classifier are NPV = 77% $\pm$ 13% and
PPV = 31% $\pm$ 19% (Negative Predictive Value $\pm$ std, Positive Predictive
Value $\pm$ std). According to our study, improving prophylactic
ICD-implantation candidate selection by automatic classification from ECG
features may be possible, but the availability of a sizable dataset appears to
be essential to decrease the number of False Negatives. | computer science |
40,785 | General factorization framework for context-aware recommendations | cs.IR | Context-aware recommendation algorithms focus on refining recommendations by
considering additional information, available to the system. This topic has
gained a lot of attention recently. Among others, several factorization methods
were proposed to solve the problem, although most of them assume explicit
feedback which strongly limits their real-world applicability. While these
algorithms apply various loss functions and optimization strategies, the
preference modeling under context is less explored due to the lack of tools
allowing for easy experimentation with various models. As context dimensions
are introduced beyond users and items, the space of possible preference models
and the importance of proper modeling largely increases.
In this paper we propose a General Factorization Framework (GFF), a single
flexible algorithm that takes the preference model as an input and computes
latent feature matrices for the input dimensions. GFF allows us to easily
experiment with various linear models on any context-aware recommendation task,
be it explicit or implicit feedback based. Its scaling properties make it
usable under real-life circumstances as well.
We demonstrate the framework's potential by exploring various preference
models on a 4-dimensional context-aware problem with contexts that are
available for almost any real life datasets. We show in our experiments --
performed on five real life, implicit feedback datasets -- that proper
preference modelling significantly increases recommendation accuracy, and
previously unused models outperform the traditional ones. Novel models in GFF
also outperform state-of-the-art factorization algorithms.
We also extend the method to be fully compliant with the Multidimensional
Dataspace Model, one of the most extensive data models of context-enriched
data. Extended GFF allows the seamless incorporation of information into the
fac[truncated] | computer science |
40,786 | miRNA and Gene Expression based Cancer Classification using Self-
Learning and Co-Training Approaches | cs.CE | miRNA and gene expression profiles have been proved useful for classifying
cancer samples. Efficient classifiers have been recently sought and developed.
A number of attempts to classify cancer samples using miRNA/gene expression
profiles are known in the literature. However, semi-supervised learning
models have recently been used in bioinformatics to exploit the huge corpuses
of publicly available sets. Using both labeled and unlabeled sets to train
sample classifiers has not previously been considered when gene and miRNA
expression sets are used. Moreover, there is a motivation to integrate both
miRNA and gene expression for a semi-supervised cancer classification as that
provides more information on the characteristics of cancer samples. In this
paper, two semi-supervised machine learning approaches, namely self-learning
and co-training, are adapted to enhance the quality of cancer sample
classification. These approaches exploit the huge public corpuses to enrich the
training data. In self-learning, miRNA and gene based classifiers are enhanced
independently. While in co-training, both miRNA and gene expression profiles
are used simultaneously to provide different views of cancer samples. To our
knowledge, it is the first attempt to apply these learning approaches to cancer
classification. The approaches were evaluated using breast cancer,
hepatocellular carcinoma (HCC) and lung cancer expression sets. Results show up
to 20% improvement in F1-measure over Random Forests and SVM classifiers.
Co-Training also outperforms Low Density Separation (LDS) approach by around
25% improvement in F1-measure in breast cancer. | computer science |
40,787 | HMACA: Towards Proposing a Cellular Automata Based Tool for Protein
Coding, Promoter Region Identification and Protein Structure Prediction | cs.CE | The human body consists of a large number of cells, and each cell contains DeoxyriboNucleic
Acid (DNA). Identifying the genes from DNA sequences is a very difficult
task, but identifying the coding regions is an even more complex task.
Identifying the proteins, which occupy only a small portion of the genes, is a really
challenging issue. For understanding the genes, coding region analysis plays an
important role. Proteins are molecules with macro structure that are
responsible for a wide range of vital biochemical functions, which include
oxygen transport, cell signaling, antibody production, nutrient transport and
building up muscle fibers. Promoter region identification and protein structure
prediction have gained remarkable attention in recent years. Even though there
are some identification techniques addressing this problem, the approximate
accuracy in identifying the promoter region is close to 68% to 72%. We have
developed a Cellular Automata based tool built with a hybrid multiple attractor
cellular automata (HMACA) classifier for protein coding region and promoter region
identification and protein structure prediction, which predicts the protein coding and
promoter regions with an accuracy of 76%. This tool also predicts the structure
of proteins with an accuracy of 80%. | computer science |
40,788 | Numerical weather prediction or stochastic modeling: an objective
criterion of choice for the global radiation forecasting | stat.AP | Numerous methods exist and were developed for global radiation forecasting.
The two most popular types are the numerical weather predictions (NWP) and the
predictions using stochastic approaches. We propose to compute a parameter,
constructed in part from the mutual information, which is a quantity that
measures the mutual dependence of two variables. Both of these are calculated
with the objective of establishing which of the two methods, NWP or
stochastic modeling, is more relevant to the current problem. | computer science |
40,789 | Iterative Universal Hash Function Generator for Minhashing | cs.LG | Minhashing is a technique used to estimate the Jaccard Index between two sets
by exploiting the probability of collision in a random permutation. In order to
speed up the computation, a random permutation can be approximated by using a
universal hash function such as the $h_{a,b}$ function proposed by Carter and
Wegman. A better estimate of the Jaccard Index can be achieved by using many of
these hash functions, created at random. In this paper a new iterative
procedure to generate a set of $h_{a,b}$ functions is devised that eliminates
the need for a list of random values and avoids the multiplication operation
during the calculation. The properties of the generated hash functions remain
those of a universal hash function family. This is possible due to the random
nature of feature occurrence in sparse datasets. Results show that the
uniformity of hashing the features is maintained while obtaining a speed-up of
up to $1.38$ compared to the traditional approach. | computer science |
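
For reference, the classical construction the abstract builds on can be sketched as follows: minhashing with $h_{a,b}(x) = (a x + b) \bmod p$, with the $(a,b)$ pairs drawn at random. The paper's iterative, multiplication-free generator is not reproduced here, and the prime and the toy sets are assumptions.

```python
# A minimal sketch of minhash Jaccard estimation with Carter-Wegman h_{a,b} functions.
import random

P = 2_147_483_647                      # a large prime (2^31 - 1)

def minhash_signature(feature_ids, hash_params):
    """One minimum of h_{a,b}(x) = (a*x + b) mod P per hash function."""
    return [min((a * x + b) % P for x in feature_ids) for a, b in hash_params]

random.seed(0)
params = [(random.randrange(1, P), random.randrange(P)) for _ in range(200)]

set_a = set(range(0, 120))
set_b = set(range(60, 180))
sig_a = minhash_signature(set_a, params)
sig_b = minhash_signature(set_b, params)

# The fraction of matching signature entries estimates the Jaccard index.
estimate = sum(x == y for x, y in zip(sig_a, sig_b)) / len(params)
print("estimated Jaccard:", estimate, "true Jaccard:", 60 / 180)
```
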
40,790 | Identification of Protein Coding Regions in Genomic DNA Using
Unsupervised FMACA Based Pattern Classifier | cs.CE | Genes carry the instructions for making proteins and are found in a cell as
a specific sequence of nucleotides in DNA molecules. However, the
regions of these genes that code for proteins may occupy only a small region of
the sequence. Identifying the coding regions plays a vital role in understanding
these genes. In this paper we propose an unsupervised Fuzzy Multiple Attractor
Cellular Automata (FMACA) based pattern classifier to identify the coding regions
of a DNA sequence. We propose a distinct K-Means algorithm for designing the FMACA
classifier that is simple, efficient and produces a more accurate classifier
than has previously been obtained for a range of different sequence
lengths. Experimental results confirm the scalability of the proposed
unsupervised FMACA based classifier in handling large volumes of data
irrespective of the number of classes, tuples and attributes. Good
classification accuracy has been established. | computer science |
40,791 | Security Evaluation of Support Vector Machines in Adversarial
Environments | cs.LG | Support Vector Machines (SVMs) are among the most popular classification
techniques adopted in security applications like malware detection, intrusion
detection, and spam filtering. However, if SVMs are to be incorporated in
real-world security systems, they must be able to cope with attack patterns
that can either mislead the learning algorithm (poisoning), evade detection
(evasion), or gain information about their internal parameters (privacy
breaches). The main contributions of this chapter are twofold. First, we
introduce a formal general framework for the empirical evaluation of the
security of machine-learning systems. Second, according to our framework, we
demonstrate the feasibility of evasion, poisoning and privacy attacks against
SVMs in real-world security problems. For each attack technique, we evaluate
its impact and discuss whether (and how) it can be countered through an
adversary-aware design of SVMs. Our experiments are easily reproducible thanks
to open-source code that we have made available, together with all the employed
datasets, on a public repository. | computer science |
40,792 | Empirically Evaluating Multiagent Learning Algorithms | cs.GT | There exist many algorithms for learning how to play repeated bimatrix games.
Most of these algorithms are justified in terms of some sort of theoretical
guarantee. On the other hand, little is known about the empirical performance
of these algorithms. Most such claims in the literature are based on small
experiments, which has hampered understanding as well as the development of new
multiagent learning (MAL) algorithms. We have developed a new suite of tools
for running multiagent experiments: the MultiAgent Learning Testbed (MALT).
These tools are designed to facilitate larger and more comprehensive
experiments by removing the need to build one-off experimental code. MALT also
provides baseline implementations of many MAL algorithms, hopefully eliminating
or reducing differences between algorithm implementations and increasing the
reproducibility of results. Using this test suite, we ran an experiment
unprecedented in size. We analyzed the results according to a variety of
performance metrics including reward, maxmin distance, regret, and several
notions of equilibrium convergence. We confirmed several pieces of conventional
wisdom, but also discovered some surprising results. For example, we found that
single-agent $Q$-learning outperformed many more complicated and more modern
MAL algorithms. | computer science |
40,793 | Local Gaussian Regression | cs.LG | Locally weighted regression was created as a nonparametric learning method
that is computationally efficient, can learn from very large amounts of data
and add data incrementally. An interesting feature of locally weighted
regression is that it can work with spatially varying length scales, a
beneficial property, for instance, in control problems. However, it does not
provide a generative model for function values and requires training and test
data to be generated identically and independently. Gaussian (process) regression,
on the other hand, provides a fully generative model without significant formal
requirements on the distribution of training data, but has much higher
computational cost and usually works with one global scale per input dimension.
Using a localising function basis and approximate inference techniques, we take
Gaussian (process) regression to increasingly localised properties and toward
the same computational complexity class as locally weighted regression. | computer science |
40,794 | Localized epidemic detection in networks with overwhelming noise | cs.SI | We consider the problem of detecting an epidemic in a population where
individual diagnoses are extremely noisy. The motivation for this problem is
the plethora of examples (influenza strains in humans, or computer viruses in
smartphones, etc.) where reliable diagnoses are scarce, but noisy data
plentiful. In flu/phone-viruses, exceedingly few infected people/phones are
professionally diagnosed (only a small fraction go to a doctor) but less
reliable secondary signatures (e.g., people staying home, or
greater-than-typical upload activity) are more readily available. These
secondary data are often plagued by unreliability: many people with the flu do
not stay home, and many people that stay home do not have the flu. This paper
identifies the precise regime where knowledge of the contact network enables
finding the needle in the haystack: we provide a distributed, efficient and
robust algorithm that can correctly identify the existence of a spreading
epidemic from highly unreliable local data. Our algorithm requires only
local-neighbor knowledge of this graph, and in a broad array of settings that
we describe, succeeds even when false negatives and false positives make up an
overwhelming fraction of the data available. Our results show it succeeds in
the presence of partial information about the contact network, and also when
there is not a single "patient zero", but rather many (hundreds, in our
examples) of initial patient-zeroes, spread across the graph. | computer science |
40,795 | Dictionary Learning over Distributed Models | cs.LG | In this paper, we consider learning dictionary models over a network of
agents, where each agent is only in charge of a portion of the dictionary
elements. This formulation is relevant in Big Data scenarios where large
dictionary models may be spread over different spatial locations and it is not
feasible to aggregate all dictionaries in one location due to communication and
privacy considerations. We first show that the dual function of the inference
problem is an aggregation of individual cost functions associated with
different agents, which can then be minimized efficiently by means of diffusion
strategies. The collaborative inference step generates dual variables that are
used by the agents to update their dictionaries without the need to share these
dictionaries or even the coefficient models for the training data. This is a
powerful property that leads to an effective distributed procedure for learning
dictionaries over large networks (e.g., hundreds of agents in our experiments).
Furthermore, the proposed learning strategy operates in an online manner and is
able to respond to streaming data, where each data sample is presented to the
network once. | computer science |
40,796 | Characterizing the Sample Complexity of Private Learners | cs.CR | In 2008, Kasiviswanathan et al. defined private learning as a combination of
PAC learning and differential privacy. Informally, a private learner is applied
to a collection of labeled individual information and outputs a hypothesis
while preserving the privacy of each individual. Kasiviswanathan et al. gave a
generic construction of private learners for (finite) concept classes, with
sample complexity logarithmic in the size of the concept class. This sample
complexity is higher than what is needed for non-private learners, hence
leaving open the possibility that the sample complexity of private learning may
be sometimes significantly higher than that of non-private learning.
We give a combinatorial characterization of the sample size sufficient and
necessary to privately learn a class of concepts. This characterization is
analogous to the well known characterization of the sample complexity of
non-private learning in terms of the VC dimension of the concept class. We
introduce the notion of probabilistic representation of a concept class, and
our new complexity measure RepDim corresponds to the size of the smallest
probabilistic representation of the concept class.
We show that any private learning algorithm for a concept class C with sample
complexity m implies RepDim(C)=O(m), and that there exists a private learning
algorithm with sample complexity m=O(RepDim(C)). We further demonstrate that a
similar characterization holds for the database size needed for privately
computing a large class of optimization problems and also for the well studied
problem of private data release. | computer science |
40,797 | Computational Limits for Matrix Completion | cs.CC | Matrix Completion is the problem of recovering an unknown real-valued
low-rank matrix from a subsample of its entries. Important recent results show
that the problem can be solved efficiently under the assumption that the
unknown matrix is incoherent and the subsample is drawn uniformly at random.
Are these assumptions necessary?
It is well known that Matrix Completion in its full generality is NP-hard.
However, little is known if we make additional assumptions such as incoherence and
permit the algorithm to output a matrix of slightly higher rank. In this paper
we prove that Matrix Completion remains computationally intractable even if the
unknown matrix has rank $4$ but we are allowed to output any constant rank
matrix, and even if additionally we assume that the unknown matrix is
incoherent and are shown $90\%$ of the entries. This result relies on the
conjectured hardness of the $4$-Coloring problem. We also consider the positive
semidefinite Matrix Completion problem. Here we show a similar hardness result
under the standard assumption that $\mathrm{P}\ne \mathrm{NP}.$
Our results greatly narrow the gap between existing feasibility results and
computational lower bounds. In particular, we believe that our results give the
first complexity-theoretic justification for why distributional assumptions are
needed beyond the incoherence assumption in order to obtain positive results.
On the technical side, we contribute several new ideas on how to encode hard
combinatorial problems in low-rank optimization problems. We hope that these
techniques will be helpful in further understanding the computational limits of
Matrix Completion and related problems. | computer science |
40,798 | Discretization of Temporal Data: A Survey | cs.DB | In the real world, huge amounts of temporal data have to be processed in many
application areas such as scientific, financial, network monitoring and sensor
data analysis. Data mining techniques are primarily oriented to handle discrete
features. In the case of temporal data, time plays an important role in the
characteristics of the data. To account for this effect, data discretization
techniques have to consider time during processing, resolving the issue by
finding intervals of the data that are more concise and precise with respect
to time. Here, this research reviews different data discretization
techniques used in temporal data applications according to their inclusion or
exclusion of the class label, the temporal order of the data and the handling of stream
data, in order to open research directions for temporal data discretization that improve
the performance of data mining techniques. | computer science |
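
As a simple point of reference, the sketch below applies unsupervised equal-frequency discretization to a time-ordered signal, keeping one interval label per time step; it is a generic baseline, not any specific method surveyed above, and the toy signal is an assumption.

```python
# A minimal sketch of equal-frequency discretization applied to a temporal signal.
import numpy as np

def equal_frequency_discretize(values, n_bins=4):
    edges = np.quantile(values, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.searchsorted(edges, values)        # interval label per time step

rng = np.random.default_rng(0)
t = np.arange(200)
signal = np.sin(t / 15.0) + 0.2 * rng.standard_normal(200)   # toy temporal data

labels = equal_frequency_discretize(signal, n_bins=4)
print("first ten interval labels:", labels[:10])
```
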
40,799 | Diffusion Least Mean Square: Simulations | cs.LG | In this technical report we analyse the performance of diffusion strategies
applied to the Least-Mean-Square adaptive filter. We configure a network of
cooperative agents running adaptive filters and discuss their behaviour when
compared with a non-cooperative agent which represents the average of the
network. The analysis provides conditions under which diversity in the filter
parameters is beneficial in terms of convergence and stability. Simulations
drive and support the analysis. | computer science |
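A minimal numpy sketch of a diffusion LMS network in adapt-then-combine form is given
below; the ring topology, uniform combination weights, step size, and signal model
are illustrative assumptions rather than the report's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

M, N, T = 5, 8, 2000            # filter length, number of agents, iterations
w_true = rng.standard_normal(M)

# Ring topology with self-loops; column-stochastic combination weights.
A = np.zeros((N, N))
for k in range(N):
    for l in (k - 1, k, (k + 1) % N):
        A[l % N, k] = 1.0
A /= A.sum(axis=0, keepdims=True)

mu = 0.01
W = np.zeros((N, M))            # one filter estimate per agent

for _ in range(T):
    # Adapt: each agent takes a local LMS step on its own streaming data.
    psi = np.empty_like(W)
    for k in range(N):
        u = rng.standard_normal(M)
        d = u @ w_true + 0.1 * rng.standard_normal()
        psi[k] = W[k] + mu * (d - u @ W[k]) * u
    # Combine: each agent averages its neighbours' intermediate estimates.
    W = A.T @ psi

msd = np.mean(np.sum((W - w_true) ** 2, axis=1))
print(f"network mean-square deviation: {msd:.2e}")
```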
40,800 | Open science in machine learning | cs.LG | We present OpenML and mldata, open science platforms that provide easy
access to machine learning data, software and results to encourage further
study and application. They go beyond the more traditional repositories for
data sets and software packages in that they allow researchers to also easily
share the results they obtained in experiments and to compare their solutions
with those of others. | computer science |
40,801 | Oracle-Based Robust Optimization via Online Learning | math.OC | Robust optimization is a common framework in optimization under uncertainty
when the problem parameters are not known exactly but are known to belong to
some given uncertainty set. In the robust optimization
framework the problem solved is a min-max problem where a solution is judged
according to its performance on the worst possible realization of the
parameters. In many cases, a straightforward solution of the robust
optimization problem of a certain type requires solving an optimization problem
of a more complicated type, which in some cases is even NP-hard. For example,
solving a robust conic quadratic program, such as those arising in robust SVM
with ellipsoidal uncertainty, leads in general to a semidefinite program. In this
paper we develop a method for approximately solving a robust optimization
problem using tools from online convex optimization, where in every stage a
standard (non-robust) optimization program is solved. Our algorithms find an
approximate robust solution using a number of calls to an oracle that solves
the original (non-robust) problem that is inversely proportional to the square
of the target accuracy. | computer science |
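Below is a minimal sketch of the oracle-based scheme described above, instantiated
for a toy robust linear objective over the probability simplex with an L2 uncertainty
ball: the adversary runs projected online gradient ascent over the uncertainty set,
the decision maker best-responds through the non-robust oracle, and the averaged
iterate is returned. The problem instance, step sizes, and oracle are illustrative
assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 5
c = rng.standard_normal(d)          # nominal cost vector
rho = 0.5                           # radius of the L2 uncertainty ball
T = 500
eta = 0.1

def oracle(u):
    """Non-robust oracle: minimise u^T x over the probability simplex."""
    x = np.zeros(d)
    x[np.argmin(u)] = 1.0
    return x

def project_ball(u):
    """Project the cost vector back onto the L2 ball of radius rho around c."""
    delta = u - c
    norm = np.linalg.norm(delta)
    return c + delta * min(1.0, rho / norm) if norm > 0 else u

u = c.copy()
x_avg = np.zeros(d)
for t in range(T):
    x = oracle(u)                                        # decision maker best-responds
    x_avg += x / T
    u = project_ball(u + eta / np.sqrt(t + 1) * x)       # adversary: online gradient ascent

worst_case = c @ x_avg + rho * np.linalg.norm(x_avg)
print("approximate robust solution:", np.round(x_avg, 3))
print(f"worst-case cost of averaged solution: {worst_case:.3f}")
```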
40,802 | Outlier Detection using Improved Genetic K-means | cs.LG | The outlier detection problem in some cases is similar to the classification
problem. For example, the main concern of clustering-based outlier detection
algorithms is to find clusters and outliers, which are often regarded as noise
that should be removed in order to make more reliable clustering. In this
article, we present an algorithm that provides outlier detection and data
clustering simultaneously. The algorithm improves the estimation of centroids of
the generative distribution during the process of clustering and outlier
discovery. The proposed algorithm consists of two stages. The first stage
consists of running the improved genetic k-means algorithm (IGK), while the second
stage iteratively removes the vectors which are far from their cluster
centroids. | computer science |
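A minimal sketch of the two-stage idea described above follows, with plain k-means
from scikit-learn standing in for the improved genetic k-means (IGK) stage; the
distance threshold rule and the toy data are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_and_remove_outliers(X, k=3, threshold=2.5, max_iter=10):
    """Stage 1: cluster (plain k-means here as a stand-in for IGK).
    Stage 2: iteratively drop points far from their cluster centroid."""
    X = np.asarray(X, dtype=float)
    keep = np.ones(len(X), dtype=bool)
    km = None
    for _ in range(max_iter):
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X[keep])
        dist = np.linalg.norm(X[keep] - km.cluster_centers_[km.labels_], axis=1)
        far = dist > dist.mean() + threshold * dist.std()
        if not far.any():
            break
        idx = np.where(keep)[0]
        keep[idx[far]] = False
    return keep, km

# Example: three Gaussian blobs plus a few injected outliers.
rng = np.random.default_rng(3)
blobs = np.concatenate([rng.normal(m, 0.3, size=(50, 2)) for m in (0, 4, 8)])
outliers = rng.uniform(-5, 15, size=(5, 2))
X = np.concatenate([blobs, outliers])
keep, model = cluster_and_remove_outliers(X, k=3)
print("flagged as outliers:", np.where(~keep)[0])
```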
40,803 | Data-driven HRF estimation for encoding and decoding models | cs.CE | Despite the common usage of a canonical, data-independent, hemodynamic
response function (HRF), it is known that the shape of the HRF varies across
brain regions and subjects. This suggests that a data-driven estimation of this
function could lead to more statistical power when modeling BOLD fMRI data.
However, unconstrained estimation of the HRF can yield highly unstable results
when the number of free parameters is large. We develop a method for the joint
estimation of activation and HRF using a rank constraint causing the estimated
HRF to be equal across events/conditions, yet permitting it to be different
across voxels. Model estimation leads to an optimization problem that we
propose to solve with an efficient quasi-Newton method exploiting fast gradient
computations. This model, called GLM with Rank-1 constraint (R1-GLM), can be
extended to the setting of GLM with separate designs which has been shown to
improve decoding accuracy in brain activity decoding experiments. We compare 10
different HRF modeling methods in terms of encoding and decoding score in two
different datasets. Our results show that the R1-GLM model significantly
outperforms competing methods in both encoding and decoding settings,
positioning it as an attractive method both from the points of view of accuracy
and computational efficiency. | computer science |
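Below is a minimal per-voxel sketch of the rank-1 idea: a single HRF shared across
conditions is alternately re-estimated together with the per-condition activations.
The basis matrices, dimensions, and normalisation are synthetic placeholders, not
the paper's R1-GLM implementation.

```python
import numpy as np

rng = np.random.default_rng(4)
n_time, n_basis, n_cond = 200, 3, 4

# D[c] maps HRF basis coefficients to the regressor of condition c
# (in practice: event onsets convolved with an HRF basis set).
D = rng.standard_normal((n_cond, n_time, n_basis))
h_true = np.array([1.0, 0.4, -0.2])
beta_true = rng.standard_normal(n_cond)
y = sum(beta_true[c] * D[c] @ h_true for c in range(n_cond))
y += 0.1 * rng.standard_normal(n_time)

# Rank-1 alternating least squares: one shared HRF, one beta per condition.
h = np.zeros(n_basis)
h[0] = 1.0                                  # start from the first basis element
for _ in range(20):
    Z = np.stack([D[c] @ h for c in range(n_cond)], axis=1)     # (n_time, n_cond)
    beta = np.linalg.lstsq(Z, y, rcond=None)[0]
    A = sum(beta[c] * D[c] for c in range(n_cond))               # (n_time, n_basis)
    h = np.linalg.lstsq(A, y, rcond=None)[0]
    h /= np.linalg.norm(h)                                       # fix the scale ambiguity

print("estimated HRF coefficients:", np.round(h, 3))
print("estimated activations:     ", np.round(beta, 3))
```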
40,804 | Real-time Topic-aware Influence Maximization Using Preprocessing | cs.SI | Influence maximization is the task of finding a set of seed nodes in a social
network such that the influence spread of these seed nodes based on a certain
influence diffusion model is maximized. Topic-aware influence diffusion models
have been recently proposed to address the issue that influence between a pair
of users is often topic-dependent, and that information, ideas, innovations, etc.
being propagated in networks (referred to collectively as items in this paper) are
typically mixtures of topics. In this paper, we focus on the topic-aware
influence maximization task. In particular, we study preprocessing methods for
these topics to avoid redoing influence maximization for each item from
scratch. We explore two preprocessing algorithms with theoretical
justifications. Our empirical results on data obtained in a couple of existing
studies demonstrate that one of our algorithms stands out as a strong candidate
providing microsecond online response time and competitive influence spread,
with reasonable preprocessing effort. | computer science |
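As a rough illustration of the online task described above, here is a minimal sketch
of greedy seed selection under an independent cascade model whose edge probabilities
are topic mixtures weighted by the item's topic distribution; the preprocessing
algorithms the paper actually studies are not reproduced, and the graph, item, and
simulation budget are toy assumptions.

```python
import random

def simulate_spread(graph, seeds, item_topics, n_sim=200):
    """Monte Carlo estimate of independent-cascade spread, where each edge
    probability is a topic mixture weighted by the item's topic distribution."""
    total = 0
    for _ in range(n_sim):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            for v, topic_probs in graph.get(u, []):
                p = sum(w * q for w, q in zip(item_topics, topic_probs))
                if v not in active and random.random() < p:
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / n_sim

def greedy_seeds(graph, nodes, item_topics, k=2):
    """Plain greedy: repeatedly add the node with the best marginal spread."""
    seeds = []
    for _ in range(k):
        best = max((n for n in nodes if n not in seeds),
                   key=lambda n: simulate_spread(graph, seeds + [n], item_topics))
        seeds.append(best)
    return seeds

# Tiny example: per-edge probabilities for two topics; the item is mostly topic 0.
graph = {0: [(1, (0.8, 0.1)), (2, (0.3, 0.3))],
         1: [(3, (0.5, 0.2))],
         2: [(3, (0.1, 0.9))]}
print(greedy_seeds(graph, nodes=[0, 1, 2, 3], item_topics=(0.7, 0.3), k=2))
```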
40,805 | Network Traffic Decomposition for Anomaly Detection | cs.LG | In this paper we focus on the detection of network anomalies like Denial of
Service (DoS) attacks and port scans in a unified manner. While there has been
an extensive amount of research in network anomaly detection, current state of
the art methods are only able to detect one class of anomalies at the cost of
others. The key tool we will use is based on the spectral decomposition of a
trajectory/Hankel matrix, which is able to detect deviations in both the between
and within correlations present in the observed network traffic data. Detailed
experiments on synthetic and real network traces show a significant
improvement in detection capability over competing approaches. In the process
we also address the issue of robustness of anomaly detection systems in a
principled fashion. | computer science |
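A minimal numpy sketch of the key tool named above, spectral decomposition of a
trajectory/Hankel matrix with a residual-based anomaly score, follows; the window
size, rank, and the synthetic trace are illustrative assumptions.

```python
import numpy as np

def hankel_anomaly_scores(x, window=20, rank=3):
    """Embed the series in a trajectory (Hankel) matrix, keep the top singular
    components, and score each time step by its average reconstruction error."""
    x = np.asarray(x, dtype=float)
    n = len(x) - window + 1
    H = np.stack([x[i:i + window] for i in range(n)], axis=1)      # (window, n)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_hat = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    residual = H - H_hat

    # Average the residual energy over all Hankel cells covering each time step.
    scores = np.zeros(len(x))
    counts = np.zeros(len(x))
    for j in range(n):
        scores[j:j + window] += residual[:, j] ** 2
        counts[j:j + window] += 1
    return scores / counts

# Example: periodic "normal" traffic with a short injected burst (e.g. a scan).
rng = np.random.default_rng(5)
traffic = np.sin(np.linspace(0, 30, 600)) + 0.1 * rng.standard_normal(600)
traffic[400:410] += 3.0
scores = hankel_anomaly_scores(traffic)
print("most anomalous time steps:", np.argsort(scores)[-5:])
```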
40,806 | An Extensive Report on the Efficiency of AIS-INMACA (A Novel Integrated
MACA based Clonal Classifier for Protein Coding and Promoter Region
Prediction) | cs.CE | This paper reports exclusively on the efficiency of AIS-INMACA. AIS-INMACA has
had a strong impact on major problems in bioinformatics, such as protein coding
region identification and promoter region prediction, while requiring less time
(Pokkuluri Kiran Sree, 2014). Several variations of AIS-INMACA have since been
developed (Pokkuluri Kiran Sree, 2014), with the aim of establishing it as a
general tool for solving many problems in bioinformatics. This paper should
therefore be useful to researchers working on bioinformatics with cellular
automata. | computer science |
40,807 | Statistical Structure Learning, Towards a Robust Smart Grid | cs.LG | Robust control and maintenance of the grid relies on accurate data. Both PMUs
and state estimators are prone to false data injection attacks. Thus, it is
crucial to have a mechanism for fast and accurate detection of an agent
maliciously tampering with the data---for both preventing attacks that may lead
to blackouts, and for routine monitoring and control tasks of current and
future grids. We propose a decentralized false data injection detection scheme
based on Markov graph of the bus phase angles. We utilize the Conditional
Covariance Test (CCT) to learn the structure of the grid. Using the DC power
flow model, we show that under normal circumstances, and because of
walk-summability of the grid graph, the Markov graph of the voltage angles can
be determined by the power grid graph. Therefore, a discrepancy between
calculated Markov graph and learned structure should trigger the alarm. Local
grid topology is available online from the protection system and we exploit it
to check for a mismatch. Should a mismatch be detected, we use a correlation
anomaly score to identify the set of attacked nodes. Our method can detect the
most recent stealthy deception attack on the power grid that assumes knowledge
of bus-branch model of the system and is capable of deceiving the state
estimator, damaging power network observatory, control, monitoring, demand
response and pricing schemes. Specifically, under the stealthy deception
attack, the Markov graph of phase angles changes. In addition to detecting a state
of attack, our method can identify the set of attacked nodes. To the best of our
knowledge, our remedy is the first to comprehensively detect this sophisticated
attack and it does not need additional hardware. Moreover, our detection scheme
is successful no matter the size of the attacked subset. Simulation of various
power networks confirms our claims. | computer science |
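Below is a minimal sketch of the detection idea described above, with scikit-learn's
GraphicalLasso standing in for the Conditional Covariance Test: a sparse Markov
graph is learned from phase-angle samples and compared against the known grid
topology, and any mismatch raises an alarm. The toy 4-bus chain, regularisation,
and thresholds are illustrative assumptions.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def learned_markov_graph(angle_samples, alpha=0.1, tol=0.05):
    """Stand-in for the Conditional Covariance Test: estimate a sparse precision
    matrix of the bus phase angles and threshold it into an undirected graph."""
    model = GraphicalLasso(alpha=alpha).fit(angle_samples)
    P = model.precision_
    G = (np.abs(P) > tol).astype(int)
    np.fill_diagonal(G, 0)
    return G

def detect_attack(angle_samples, grid_adjacency):
    """Alarm if the learned Markov graph disagrees with the grid topology."""
    G = learned_markov_graph(angle_samples)
    mismatch = np.argwhere(np.triu(G != grid_adjacency, k=1))
    return len(mismatch) > 0, mismatch

# Toy 4-bus chain: normal angle samples follow the grid's Markov structure.
rng = np.random.default_rng(6)
grid = np.array([[0, 1, 0, 0],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [0, 0, 1, 0]])
precision = np.eye(4) * 2.0 - 0.9 * grid          # positive-definite chain model
samples = rng.multivariate_normal(np.zeros(4), np.linalg.inv(precision), size=2000)
alarm, edges = detect_attack(samples, grid)
print("alarm:", alarm, "| mismatched pairs:", edges.tolist())
```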
40,808 | Combination of PCA with SMOTE Resampling to Boost the Prediction Rate in
Lung Cancer Dataset | cs.LG | Classification algorithms are unable to build reliable models on very large
datasets. Such datasets contain many irrelevant and redundant features
that mislead the classifiers. Furthermore, many huge datasets have imbalanced
class distribution which leads to bias over majority class in the
classification process. In this paper, a combination of unsupervised
dimensionality reduction methods with resampling is proposed and the results
are tested on Lung-Cancer dataset. In the first step PCA is applied on
Lung-Cancer dataset to compact the dataset and eliminate irrelevant features
and in the second step SMOTE resampling is carried out to balance the class
distribution and increase the variety of the sample domain. Finally, a Naive Bayes
classifier is applied on the resulting dataset and the results are compared and
evaluation metrics are calculated. The experiments show the effectiveness of
the proposed method across four evaluation metrics: Overall accuracy, False
Positive Rate, Precision, Recall. | computer science |
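A minimal sketch of the described pipeline, PCA followed by SMOTE resampling and a
Naive Bayes classifier, is given below on a synthetic imbalanced dataset standing in
for the Lung-Cancer data; the component count, sample sizes, and class weights are
placeholder assumptions.

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for the Lung-Cancer data: many features, imbalanced classes.
X, y = make_classification(n_samples=400, n_features=60, n_informative=8,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Step 1: PCA to compact the feature space and drop redundant directions.
pca = PCA(n_components=10).fit(X_tr)
X_tr_p, X_te_p = pca.transform(X_tr), pca.transform(X_te)

# Step 2: SMOTE on the training split only, to rebalance the class distribution.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr_p, y_tr)

# Step 3: Naive Bayes on the compacted, balanced training data.
clf = GaussianNB().fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te_p)))
```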
40,809 | Transfer Learning across Networks for Collective Classification | cs.LG | This paper addresses the problem of transferring useful knowledge from a
source network to predict node labels in a newly formed target network. While
existing transfer learning research has primarily focused on vector-based data,
in which the instances are assumed to be independent and identically
distributed, how to effectively transfer knowledge across different information
networks has not been well studied, mainly because networks may have their
distinct node features and link relationships between nodes. In this paper, we
propose a new transfer learning algorithm that attempts to transfer common
latent structure features across the source and target networks. The proposed
algorithm discovers these latent features by constructing label propagation
matrices in the source and target networks, and mapping them into a shared
latent feature space. The latent features capture common structure patterns
shared by two networks, and serve as domain-independent features to be
transferred between networks. Together with domain-dependent node features, we
thereafter propose an iterative classification algorithm that leverages label
correlations to predict node labels in the target network. Experiments on
real-world networks demonstrate that our proposed algorithm can successfully
achieve knowledge transfer between networks to help improve the accuracy of
classifying nodes in the target network. | computer science |
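As a partial illustration of the first ingredient described above, the sketch below
builds a label propagation matrix (symmetrically normalised adjacency) per network
and extracts spectral latent structure features from it; the cross-network mapping
into a shared latent space and the iterative collective classification are not
reproduced, and the random graphs and embedding dimension are toy assumptions.

```python
import numpy as np

def label_propagation_matrix(adj):
    """Symmetrically normalised adjacency, D^{-1/2} A D^{-1/2}."""
    d = adj.sum(axis=1)
    d[d == 0] = 1.0
    return adj / np.sqrt(np.outer(d, d))

def latent_structure_features(adj, dim=4):
    """Spectral embedding of the propagation matrix as structural node features."""
    P = label_propagation_matrix(adj)
    vals, vecs = np.linalg.eigh(P)
    order = np.argsort(-np.abs(vals))[:dim]
    return vecs[:, order] * vals[order]

def random_network(n, p, rng):
    """Undirected Erdos-Renyi-style toy network."""
    adj = (rng.random((n, n)) < p).astype(float)
    adj = np.triu(adj, 1)
    return adj + adj.T

rng = np.random.default_rng(7)
source_adj = random_network(80, 0.05, rng)
target_adj = random_network(60, 0.08, rng)

# Domain-independent structural features are computed per network.
src_feat = latent_structure_features(source_adj)
tgt_feat = latent_structure_features(target_adj)
print(src_feat.shape, tgt_feat.shape)
```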