Unnamed: 0 | title | category | summary | theme |
---|---|---|---|---|
40,610 | Structured Sparsity Models for Multiparty Speech Recovery from
Reverberant Recordings | cs.LG | We tackle the multi-party speech recovery problem by modeling the
acoustics of reverberant chambers. Our approach exploits structured sparsity
models to perform room modeling and speech recovery. We propose a scheme for
characterizing the room acoustics from the unknown competing speech sources
relying on localization of the early images of the speakers by sparse
approximation of the spatial spectra of the virtual sources in a free-space
model. The images are then clustered exploiting the low-rank structure of the
spectro-temporal components belonging to each source. This enables us to
identify the early support of the room impulse response function and its unique
map to the room geometry. To further tackle the ambiguity of the reflection
ratios, we propose a novel formulation of the reverberation model and estimate
the absorption coefficients through a convex optimization that exploits a joint
sparsity model formulated on the spatio-spectral sparsity of the concurrent
speech representation. The acoustic parameters are then incorporated to separate
individual speech signals through either structured sparse recovery or inverse
filtering of the acoustic channels. The experiments conducted on real data
recordings demonstrate the effectiveness of the proposed approach for
multi-party speech recovery and recognition. | computer science |
40,611 | Predicting Near-Future Churners and Win-Backs in the Telecommunications
Industry | cs.CE | In this work, we present the strategies and techniques that we have
developed for predicting the near-future churners and win-backs for a telecom
company. On a large-scale and real-world database containing customer profiles
and some transaction data from a telecom company, we first analyzed the data
schema, developed feature computation strategies and then extracted a large set
of relevant features that can be associated with the customer churning and
returning behaviors. Our features include both the original driver factors as
well as some derived features. We evaluated our features on the
imbalance-corrected (i.e., under-sampled) dataset and compared a large number of
existing machine learning tools, especially decision tree-based classifiers,
for predicting the churners and win-backs. In general, we find RandomForest and
SimpleCart learning algorithms generally perform well and tend to provide us
with highly competitive prediction performance. Among the top-15 driver factors
that signal the churn behavior, we find that the service utilization, e.g. last
two months' download and upload volume, last three months' average upload and
download, and the payment related factors are the most indicative features for
predicting if churn will happen soon. Such features can collectively tell
discrepancies between the service plans, payments and the dynamically changing
utilization needs of the customers. Our proposed features and their
computational strategy exhibit reasonable precision in predicting churn
behavior in the near future. | computer science |
40,612 | A Game-theoretic Machine Learning Approach for Revenue Maximization in
Sponsored Search | cs.GT | Sponsored search is an important monetization channel for search engines, in
which an auction mechanism is used to select the ads shown to users and
determine the prices charged to advertisers. There have been several pieces
of work in the literature that investigate how to design an auction mechanism
in order to optimize the revenue of the search engine. However, due to the
unrealistic assumptions they rely on, the practical value of these studies is not
very clear. In this paper, we propose a novel \emph{game-theoretic machine
learning} approach, which naturally combines machine learning and game theory,
and learns the auction mechanism using a bilevel optimization framework. In
particular, we first learn a Markov model from historical data to describe how
advertisers change their bids in response to an auction mechanism, and then for
any given auction mechanism, we use the learnt model to predict its
corresponding future bid sequences. Next we learn the auction mechanism through
empirical revenue maximization on the predicted bid sequences. We show that the
empirical revenue will converge when the prediction period approaches infinity,
and a Genetic Programming algorithm can effectively optimize this empirical
revenue. Our experiments indicate that the proposed approach is able to produce
a much more effective auction mechanism than several baselines. | computer science |
40,613 | Faster Rates for the Frank-Wolfe Method over Strongly-Convex Sets | math.OC | The Frank-Wolfe method (a.k.a. conditional gradient algorithm) for smooth
optimization has regained much interest in recent years in the context of large
scale optimization and machine learning. A key advantage of the method is that
it avoids projections - the computational bottleneck in many applications -
replacing them with a linear optimization step. Despite this advantage, the known
convergence rates of the FW method fall behind standard first order methods for
most settings of interest. It is an active line of research to derive faster
linear optimization-based algorithms for various settings of convex
optimization.
In this paper we consider the special case of optimization over strongly
convex sets, for which we prove that the vanilla FW method converges at a rate
of $\frac{1}{t^2}$. This gives a quadratic improvement in convergence rate
compared to the general case, in which convergence is of the order
$\frac{1}{t}$, and known to be tight. We show that various balls induced by
$\ell_p$ norms, Schatten norms and group norms are strongly convex on the one
hand, while on the other hand linear optimization over these sets is straightforward
and admits a closed-form solution. We further show how several previous
fast-rate results for the FW method follow easily from our analysis. | computer science |
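As a concrete illustration of the method this abstract studies, here is a minimal vanilla Frank-Wolfe sketch over an $\ell_2$ ball (a strongly convex set); the quadratic objective, the radius, and the generic $2/(t+2)$ step-size schedule are illustrative assumptions, not the paper's setup.

```python
# Illustrative sketch (not the paper's code): vanilla Frank-Wolfe for
# f(x) = 0.5*||Ax - b||^2 over an l2 ball, whose linear minimization
# oracle (LMO) has the closed form s = -r * g / ||g||.
import numpy as np

def frank_wolfe_l2_ball(A, b, r, T=500):
    x = np.zeros(A.shape[1])                       # feasible start (inside the ball)
    for t in range(T):
        g = A.T @ (A @ x - b)                      # gradient of the smooth objective
        s = -r * g / (np.linalg.norm(g) + 1e-12)   # LMO over the l2 ball
        gamma = 2.0 / (t + 2.0)                    # generic step-size schedule
        x = (1 - gamma) * x + gamma * s            # convex combination stays feasible
    return x

rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 20)), rng.standard_normal(50)
x_hat = frank_wolfe_l2_ball(A, b, r=1.0)
print(0.5 * np.linalg.norm(A @ x_hat - b) ** 2)
```

Note that the faster $\frac{1}{t^2}$ rate in the abstract comes from the strong convexity of the feasible set, not from this generic schedule.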
40,614 | Machine learning approach for text and document mining | cs.IR | Text Categorization (TC), also known as Text Classification, is the task of
automatically classifying a set of text documents into different categories
from a predefined set. If a document belongs to exactly one of the categories,
it is a single-label classification task; otherwise, it is a multi-label
classification task. TC uses several tools from Information Retrieval (IR) and
Machine Learning (ML) and has received much attention in recent years from
both academic researchers and industry developers. In this paper, we
first categorize the documents using a KNN-based machine learning approach and
then return the most relevant documents. | computer science |
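A hedged sketch of the KNN-based categorization pipeline the abstract describes, using scikit-learn; the toy corpus, labels, and hyperparameters are assumptions for illustration only.

```python
# Toy KNN text categorization: TF-IDF features + nearest-neighbour vote.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

docs = ["the market rallied today", "a new deep learning model",
        "stocks fell sharply", "neural networks for vision"]
labels = ["finance", "ml", "finance", "ml"]

clf = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=1))
clf.fit(docs, labels)
print(clf.predict(["neural networks training"]))  # -> ['ml'] on this toy corpus
```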
40,615 | Computational role of eccentricity dependent cortical magnification | cs.LG | We develop a sampling extension of M-theory focused on invariance to scale
and translation. Quite surprisingly, the theory predicts an architecture of
early vision with increasing receptive field sizes and a high resolution fovea
-- in agreement with data about the cortical magnification factor, V1 and the
retina. From the slope of the inverse of the magnification factor, M-theory
predicts a cortical "fovea" in V1 on the order of $40$ by $40$ basic units at
each receptive field size -- corresponding to a foveola of size around $26$
minutes of arc at the highest resolution, $\approx 6$ degrees at the lowest
resolution. It also predicts uniform scale invariance over a fixed range of
scales independently of eccentricity, while translation invariance should
depend linearly on spatial frequency. Bouma's law of crowding follows in the
theory as an effect of cortical area-by-cortical area pooling; the Bouma
constant is the value expected if the signature responsible for recognition in
the crowding experiments originates in V2. From a broader perspective, the
emerging picture suggests that visual recognition under natural conditions
takes place by composing information from a set of fixations, with each
fixation providing recognition from a space-scale image fragment -- that is an
image patch represented at a set of increasing sizes and decreasing
resolutions. | computer science |
40,616 | Memristor models for machine learning | cs.LG | In the quest for alternatives to traditional CMOS, it is being suggested that
digital computing efficiency and power can be improved by matching the
precision to the application. Many applications do not need the high precision
that is being used today. In particular, large gains in area and power
efficiency could be achieved by dedicated analog realizations of approximate
computing engines. In this work, we explore the use of memristor networks for
analog approximate computation, based on a machine learning framework called
reservoir computing. Most experimental investigations on the dynamics of
memristors focus on their nonvolatile behavior. Hence, the volatility that is
present in the developed technologies is usually unwanted and it is not
included in simulation models. In contrast, in reservoir computing, volatility
is not only desirable but necessary. Therefore, in this work, we propose two
different ways to incorporate it into memristor simulation models. The first is
an extension of Strukov's model and the second is an equivalent Wiener model
approximation. We analyze and compare the dynamical properties of these models
and discuss their implications for the memory and the nonlinear processing
capacity of memristor networks. Our results indicate that device variability,
increasingly causing problems in traditional computer design, is an asset in
the context of reservoir computing. We conclude that, although both models
could lead to useful memristor-based reservoir computing systems, their
computational performance will differ. Therefore, experimental modeling
research is required for the development of accurate volatile memristor models. | computer science |
40,617 | Budget-Constrained Item Cold-Start Handling in Collaborative Filtering
Recommenders via Optimal Design | cs.IR | It is well known that collaborative filtering (CF) based recommender systems
provide better modeling of users and items associated with considerable rating
history. The lack of historical ratings results in the user and the item
cold-start problems. The latter is the main focus of this work. Most of the
current literature addresses this problem by integrating content-based
recommendation techniques to model the new item. However, in many cases such
content is not available, and the question arises whether this problem can
be mitigated using CF techniques only. We formalize this problem as an
optimization problem: given a new item, a pool of available users, and a budget
constraint, select which users to assign with the task of rating the new item
in order to minimize the prediction error of our model. We show that the
objective function is monotone-supermodular, and propose efficient optimal
design based algorithms that attain an approximation to its optimum. Our
findings are verified by an empirical study using the Netflix dataset, where
the proposed algorithms outperform several baselines for the problem at hand. | computer science |
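For intuition only: one classical optimal-design heuristic for this kind of user-selection problem is a greedy rule over latent user factors under an A-optimality criterion. The criterion, regularizer, and greedy rule below are illustrative assumptions, not necessarily the paper's algorithms.

```python
# Hedged sketch: greedily pick `budget` users whose latent-factor rows X[i]
# minimize the A-optimality criterion trace((X_S^T X_S + reg*I)^{-1}),
# a standard optimal-design surrogate for prediction error.
import numpy as np

def greedy_design(X, budget, reg=1e-2):
    d = X.shape[1]
    chosen = []
    for _ in range(budget):
        scores = {}
        for i in range(len(X)):
            if i in chosen:
                continue
            S = X[chosen + [i]]
            scores[i] = np.trace(np.linalg.inv(S.T @ S + reg * np.eye(d)))
        chosen.append(min(scores, key=scores.get))  # best marginal improvement
    return chosen

X = np.random.default_rng(3).standard_normal((200, 8))  # latent user factors
print(greedy_design(X, budget=5))
```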
40,618 | Quaternion Gradient and Hessian | math.NA | The optimization of real scalar functions of quaternion variables, such as
the mean square error or array output power, underpins many practical
applications. Solutions often require the calculation of the gradient and
Hessian, however, real functions of quaternion variables are essentially
non-analytic. To address this issue, we propose new definitions of quaternion
gradient and Hessian, based on the novel generalized HR (GHR) calculus, thus
making possible efficient derivation of optimization algorithms directly in the
quaternion field, rather than transforming the problem to the real domain, as
is current practice. In addition, unlike the existing quaternion gradients, the
GHR calculus allows for the product and chain rules, and for a one-to-one
correspondence of the proposed quaternion gradient and Hessian with their real
counterparts. Properties of the quaternion gradient and Hessian relevant to
numerical applications are elaborated, and the results illuminate the
usefulness of the GHR calculus in greatly simplifying the derivation of the
quaternion least mean squares algorithm, and of the quaternion least squares and
Newton algorithms. The proposed gradient and Hessian are also shown to take the same
generic forms as the corresponding real- and complex-valued algorithms, further
illustrating the advantages in algorithm design and evaluation. | computer science |
40,619 | Interval Forecasting of Electricity Demand: A Novel Bivariate EMD-based
Support Vector Regression Modeling Framework | cs.LG | Highly accurate interval forecasting of electricity demand is fundamental to
the success of reducing the risk when making power system planning and
operational decisions by providing a range rather than point estimation. In
this study, a novel modeling framework integrating bivariate empirical mode
decomposition (BEMD) and support vector regression (SVR), extended from the
well-established empirical mode decomposition (EMD) based time series modeling
framework in the energy demand forecasting literature, is proposed for interval
forecasting of electricity demand. The novelty of this study arises from the
employment of BEMD, a new extension of classical empirical mode decomposition
(EMD) designed to handle bivariate time series treated as complex-valued time
series, as the decomposition method, instead of classical EMD, which is only
capable of decomposing one-dimensional single-valued time series. This proposed modeling
framework is endowed with BEMD to decompose simultaneously both the lower and
upper bounds time series, constructed in forms of complex-valued time series,
of electricity demand on a monthly per hour basis, resulting in capturing the
potential interrelationship between lower and upper bounds. The proposed
modeling framework is justified with monthly interval-valued electricity demand
data per hour in the Pennsylvania-New Jersey-Maryland Interconnection, indicating
it as a promising method for interval-valued electricity demand forecasting. | computer science |
40,620 | Learning An Invariant Speech Representation | cs.SD | Recognition of speech, and in particular the ability to generalize and learn
from small sets of labelled examples like humans do, depends on an appropriate
representation of the acoustic input. We formulate the problem of finding
robust speech features for supervised learning with small sample complexity as
a problem of learning representations of the signal that are maximally
invariant to intraclass transformations and deformations. We propose an
extension of a theory for unsupervised learning of invariant visual
representations to the auditory domain and empirically evaluate its validity
for voiced speech sound classification. Our version of the theory requires the
memory-based, unsupervised storage of acoustic templates -- such as specific
phones or words -- together with all the transformations of each that normally
occur. A quasi-invariant representation for a speech segment can be obtained by
projecting it to each template orbit, i.e., the set of transformed signals, and
computing the associated one-dimensional empirical probability distributions.
The computations can be performed by modules of filtering and pooling, and
extended to hierarchical architectures. In this paper, we apply a single-layer,
multicomponent representation for phonemes and demonstrate improved accuracy
and decreased sample complexity for vowel classification compared to standard
spectral, cepstral and perceptual features. | computer science |
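A toy sketch of the orbit-projection idea above: project a signal onto all transformed versions of a stored template and summarize the projections by a one-dimensional empirical distribution. Cyclic shifts stand in for the acoustic transformations, which is an illustrative simplification, not the paper's transformation set.

```python
# Toy orbit signature: dot products with every shifted template, then a
# histogram of the projections (the 1-D empirical distribution above).
import numpy as np

def orbit_signature(signal, template, bins=10):
    orbit = [np.roll(template, k) for k in range(len(template))]  # template orbit
    proj = np.array([signal @ t for t in orbit])                  # projections
    hist, _ = np.histogram(proj, bins=bins, density=True)
    return hist  # (quasi-)invariant to shifts of the input signal

rng = np.random.default_rng(4)
sig, tmpl = rng.standard_normal(64), rng.standard_normal(64)
print(np.allclose(orbit_signature(sig, tmpl),
                  orbit_signature(np.roll(sig, 7), tmpl)))  # True
```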
40,621 | Construction of non-convex polynomial loss functions for training a
binary classifier with quantum annealing | cs.LG | Quantum annealing is a heuristic quantum algorithm which exploits quantum
resources to minimize an objective function embedded as the energy levels of a
programmable physical system. To take advantage of a potential quantum
advantage, one needs to be able to map the problem of interest to the native
hardware with reasonably low overhead. Because experimental considerations
constrain our objective function to take the form of a low degree PUBO
(polynomial unconstrained binary optimization), we employ non-convex loss
functions which are polynomial functions of the margin. We show that these loss
functions are robust to label noise and provide a clear advantage over convex
methods. These loss functions may also be useful for classical approaches as
they compile to regularized risk expressions which can be evaluated in constant
time with respect to the number of training examples. | computer science |
40,622 | Predictive Modelling of Bone Age through Classification and Regression
of Bone Shapes | cs.LG | Bone age assessment is a task performed daily in hospitals worldwide. This
involves a clinician estimating the age of a patient from a radiograph of the
non-dominant hand.
Our approach to automated bone age assessment is to modularise the algorithm
into the following three stages: segment and verify hand outline; segment and
verify bones; use the bone outlines to construct models of age. In this paper
we address the final question: given outlines of bones, can we learn how to
predict the bone age of the patient? We examine two alternative approaches.
Firstly, we attempt to train classifiers on individual bones to predict the
bone stage categories commonly used in bone ageing. Secondly, we construct
regression models to directly predict patient age.
We demonstrate that models built on summary features of the bone outline
perform better than those built using the one dimensional representation of the
outline, and also do at least as well as other automated systems. We show that
models constructed on just three bones are as accurate at predicting age as
expert human assessors using the standard technique. We also demonstrate the
utility of the model by quantifying the influence of ethnicity and sex on age
development. Our conclusion is that the feature based system of separating the
image processing from the age modelling is the best approach for automated bone
ageing, since it offers flexibility and transparency and produces accurate
estimates. | computer science |
40,623 | Homotopy based algorithms for $\ell_0$-regularized least-squares | cs.NA | Sparse signal restoration is usually formulated as the minimization of a
quadratic cost function $\|y-Ax\|_2^2$, where A is a dictionary and x is an
unknown sparse vector. It is well-known that imposing an $\ell_0$ constraint
leads to an NP-hard minimization problem. The convex relaxation approach has
received considerable attention, where the $\ell_0$-norm is replaced by the
$\ell_1$-norm. Among the many efficient $\ell_1$ solvers, the homotopy
algorithm minimizes $\|y-Ax\|_2^2+\lambda\|x\|_1$ with respect to x for a
continuum of $\lambda$'s. It is inspired by the piecewise regularity of the
$\ell_1$-regularization path, also referred to as the homotopy path. In this
paper, we address the minimization problem $\|y-Ax\|_2^2+\lambda\|x\|_0$ for a
continuum of $\lambda$'s and propose two heuristic search algorithms for
$\ell_0$-homotopy. Continuation Single Best Replacement is a forward-backward
greedy strategy extending the Single Best Replacement algorithm, previously
proposed for $\ell_0$-minimization at a given $\lambda$. The adaptive search of
the $\lambda$-values is inspired by $\ell_1$-homotopy. $\ell_0$ Regularization
Path Descent is a more complex algorithm exploiting the structural properties
of the $\ell_0$-regularization path, which is piecewise constant with respect
to $\lambda$. Both algorithms are empirically evaluated for difficult inverse
problems involving ill-conditioned dictionaries. Finally, we show that they can
be easily coupled with usual methods of model order selection. | computer science |
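Not the paper's CSBR or $\ell_0$ Regularization Path Descent algorithms, which track the whole $\lambda$ path: as a simpler point of reference, here is iterative hard thresholding for the same penalized objective at a single fixed $\lambda$ (the scaling conventions are assumptions).

```python
# Iterative hard thresholding for 0.5*||y - Ax||^2 + lam*||x||_0 at fixed lam.
import numpy as np

def iht(y, A, lam, T=300):
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    thresh = np.sqrt(2 * lam / L)              # prox of (lam/L)*||.||_0
    x = np.zeros(A.shape[1])
    for _ in range(T):
        v = x - A.T @ (A @ x - y) / L          # gradient step on the quadratic term
        x = np.where(np.abs(v) > thresh, v, 0.0)  # hard thresholding
    return x
```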
40,624 | Fast Support Vector Machines Using Parallel Adaptive Shrinking on
Distributed Systems | cs.DC | Support Vector Machines (SVMs), a popular machine learning technique, have been
applied to a wide range of domains such as science, finance, and social
networks for supervised learning. Whether it is identifying high-risk patients
by health-care professionals, or potential high-school students to enroll in
college by school districts, SVMs can play a major role for social good. This
paper undertakes the challenge of designing a scalable parallel SVM training
algorithm for large scale systems, which includes commodity multi-core
machines, tightly connected supercomputers and cloud computing systems.
Intuitive techniques for improving the time-space complexity, including adaptive
elimination of samples for faster convergence and sparse format representation,
are proposed. Under sample elimination, several heuristics ranging from {\em earliest
possible} to {\em lazy} elimination of non-contributing samples are proposed.
In several cases, where an early sample elimination might result in a false
positive, low overhead mechanisms for reconstruction of key data structures are
proposed. The algorithm and heuristics are implemented and evaluated on various
publicly available datasets. Empirical evaluation shows up to 26x speed
improvement on some datasets against the sequential baseline, when evaluated on
multiple compute nodes, and an improvement in execution time up to 30-60\% is
readily observed on a number of other datasets against our parallel baseline. | computer science |
40,625 | Constant Factor Approximation for Balanced Cut in the PIE model | cs.DS | We propose and study a new semi-random semi-adversarial model for Balanced
Cut, a planted model with permutation-invariant random edges (PIE). Our model
is much more general than planted models considered previously. Consider a set
of vertices V partitioned into two clusters $L$ and $R$ of equal size. Let $G$
be an arbitrary graph on $V$ with no edges between $L$ and $R$. Let
$E_{random}$ be a set of edges sampled from an arbitrary permutation-invariant
distribution (a distribution that is invariant under permutation of vertices in
$L$ and in $R$). Then we say that $G + E_{random}$ is a graph with
permutation-invariant random edges.
We present an approximation algorithm for the Balanced Cut problem that finds
a balanced cut of cost $O(|E_{random}|) + n \text{polylog}(n)$ in this model.
In the regime when $|E_{random}| = \Omega(n \text{polylog}(n))$, this is a
constant factor approximation with respect to the cost of the planted cut. | computer science |
40,626 | Correlation Clustering with Noisy Partial Information | cs.DS | In this paper, we propose and study a semi-random model for the Correlation
Clustering problem on arbitrary graphs G. We give two approximation algorithms
for Correlation Clustering instances from this model. The first algorithm finds
a solution of value $(1+ \delta) optcost + O_{\delta}(n\log^3 n)$ with high
probability, where $optcost$ is the value of the optimal solution (for every
$\delta > 0$). The second algorithm finds the ground truth clustering with an
arbitrarily small classification error $\eta$ (under some additional
assumptions on the instance). | computer science |
40,627 | Active Learning and Best-Response Dynamics | cs.LG | We examine an important setting for engineered systems in which low-power
distributed sensors are each making highly noisy measurements of some unknown
target function. A center wants to accurately learn this function by querying a
small number of sensors, which ordinarily would be impossible due to the high
noise rate. The question we address is whether local communication among
sensors, together with natural best-response dynamics in an
appropriately-defined game, can denoise the system without destroying the true
signal and allow the center to succeed from only a small number of active
queries. By using techniques from game theory and empirical processes, we prove
positive (and negative) results on the denoising power of several natural
dynamics. We then show experimentally that when combined with recent agnostic
active learning algorithms, this process can achieve low error from very few
queries, performing substantially better than active or passive learning
without these denoising dynamics as well as passive learning with denoising. | computer science |
40,628 | An Incentive Compatible Multi-Armed-Bandit Crowdsourcing Mechanism with
Quality Assurance | cs.GT | Consider a requester who wishes to crowdsource a series of identical binary
labeling tasks to a pool of workers so as to achieve an assured accuracy for
each task, in a cost optimal way. The workers are heterogeneous with unknown
but fixed qualities and their costs are private. The problem is to select for
each task an optimal subset of workers so that the outcome obtained from the
selected workers guarantees a target accuracy level. The problem is a
challenging one even in a non-strategic setting, since the accuracy of the
aggregated label depends on the unknown qualities. We develop a novel multi-armed
bandit (MAB) mechanism for solving this problem. First, we propose a framework,
Assured Accuracy Bandit (AAB), which leads to an MAB algorithm, Constrained
Confidence Bound for a Non Strategic setting (CCB-NS). We derive an upper bound
on the number of time steps the algorithm chooses a sub-optimal set that
depends on the target accuracy level and true qualities. A more challenging
situation arises when the requester not only has to learn the qualities of the
workers but also elicit their true costs. We modify the CCB-NS algorithm to
obtain an adaptive exploration-separated algorithm which we call
\emph{Constrained Confidence Bound for a Strategic setting (CCB-S)}. The CCB-S algorithm
produces an ex-post monotone allocation rule and thus can be transformed into
an ex-post incentive compatible and ex-post individually rational mechanism
that learns the qualities of the workers and guarantees a given target accuracy
level in a cost optimal way. We provide a lower bound on the number of times
any algorithm should select a sub-optimal set and we see that the lower bound
matches our upper bound up to a constant factor. We provide insights on the
practical implementation of this framework through an illustrative example and
we show the efficacy of our algorithms through simulations. | computer science |
40,629 | Stock Market Prediction from WSJ: Text Mining via Sparse Matrix
Factorization | cs.LG | We revisit the problem of predicting directional movements of stock prices
based on news articles: here our algorithm uses daily articles from The Wall
Street Journal to predict the closing stock prices on the same day. We propose
a unified latent space model to characterize the "co-movements" between stock
prices and news articles. Unlike many existing approaches, our new model is
able to simultaneously leverage the correlations: (a) among stock prices, (b)
among news articles, and (c) between stock prices and news articles. Thus, our
model is able to make daily predictions on more than 500 stocks (most of which
are not even mentioned in any news article) while having low complexity. We
carry out extensive backtesting on trading strategies based on our algorithm.
The results show that our model has a substantially better accuracy rate (55.7%)
compared to many widely used algorithms. The return (56%) and Sharpe ratio due
to a trading strategy based on our model are also much higher than those of
baseline indices. | computer science |
40,630 | Quantum adiabatic machine learning | cs.LG | We develop an approach to machine learning and anomaly detection via quantum
adiabatic evolution. In the training phase we identify an optimal set of weak
classifiers, to form a single strong classifier. In the testing phase we
adiabatically evolve one or more strong classifiers on a superposition of
inputs in order to find certain anomalous elements in the classification space.
Both the training and testing phases are executed via quantum adiabatic
evolution. We apply and illustrate this approach in detail to the problem of
software verification and validation. | computer science |
40,631 | The Variational Garrote | stat.ME | In this paper, we present a new variational method for sparse regression
using $L_0$ regularization. The variational parameters appear in the
approximate model in a way that is similar to Breiman's Garrote model. We refer
to this method as the variational Garrote (VG). We show that the combination of
the variational approximation and $L_0$ regularization has the effect of making
the problem effectively of maximal rank even when the number of samples is
small compared to the number of variables. The VG is compared numerically with
the Lasso method, ridge regression and the recently introduced paired mean
field method (PMF) (M. Titsias & M. L\'azaro-Gredilla., NIPS 2012). Numerical
results show that the VG and PMF yield more accurate predictions and more
accurately reconstruct the true model than the other methods. It is shown that
the VG finds correct solutions when the Lasso solution is inconsistent due to
large input correlations. Globally, VG is significantly faster than PMF and
tends to perform better as the problems become denser and in problems with
strongly correlated inputs. The naive implementation of the VG scales cubically
with the number of features. By introducing Lagrange multipliers we obtain a
dual formulation of the problem that scales cubically in the number of samples, but
close to linearly in the number of features. | computer science |
40,632 | How Open Should Open Source Be? | cs.CR | Many open-source projects land security fixes in public repositories before
shipping these patches to users. This paper presents attacks on such projects -
taking Firefox as a case-study - that exploit patch metadata to efficiently
search for security patches prior to shipping. Using access-restricted bug
reports linked from patch descriptions, security patches can be immediately
identified for 260 out of 300 days of Firefox 3 development. In response to
Mozilla obfuscating descriptions, we show that machine learning can exploit
metadata such as patch author to search for security patches, extending the
total window of vulnerability by 5 months in an 8 month period when examining
up to two patches daily. Finally we present strong evidence that further
metadata obfuscation is unlikely to prevent information leaks, and we argue
that open-source projects instead ought to keep security patches secret until
they are ready to be released. | computer science |
40,633 | Gossip Learning with Linear Models on Fully Distributed Data | cs.LG | Machine learning over fully distributed data poses an important problem in
peer-to-peer (P2P) applications. In this model we have one data record at each
network node, but without the possibility to move raw data due to privacy
considerations. For example, user profiles, ratings, history, or sensor
readings can represent this case. This problem is difficult, because there is
no possibility to learn local models, the system model offers almost no
guarantees for reliability, yet the communication cost needs to be kept low.
Here we propose gossip learning, a generic approach that is based on multiple
models taking random walks over the network in parallel, while applying an
online learning algorithm to improve themselves, and getting combined via
ensemble learning methods. We present an instantiation of this approach for the
case of classification with linear models. Our main contribution is an ensemble
learning method which---through the continuous combination of the models in the
network---implements a virtual weighted voting mechanism over an exponential
number of models at practically no extra cost as compared to independent random
walks. We prove the convergence of the method theoretically, and perform
extensive experiments on benchmark datasets. Our experimental analysis
demonstrates the performance and robustness of the proposed approach. | computer science |
40,634 | Anomaly Sequences Detection from Logs Based on Compression | cs.LG | Mining information from logs is an old and still active research topic. In
recent years, with the rapid emergence of cloud computing, log mining has become
increasingly important to industry. This paper focuses on one major mission of
log mining: anomaly detection, and proposes a novel method for mining abnormal
sequences from large logs. Unlike previous anomaly detection systems,
which are based on statistics, probabilities and the Markov assumption, our approach
measures the strangeness of a sequence using compression. It first trains a
grammar about normal behaviors using grammar-based compression, then measures
the information quantities and densities of questionable sequences according to
the increase in grammar length. We have applied our approach to mining some
real bugs from fine-grained execution logs. We have also tested its ability on
intrusion detection using some publicly available system call traces. The
experiments show that our method successfully selects the strange sequences
that relate to bugs or attacks. | computer science |
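For flavor only: the paper trains a grammar with grammar-based compression, whereas the sketch below substitutes off-the-shelf zlib to show the same compression-based strangeness idea, so the setup is illustrative, not the paper's method.

```python
# Compression-based strangeness: a sequence that adds little information to
# the normal corpus compresses cheaply; an anomalous one does not.
import zlib

def strangeness(normal: bytes, seq: bytes) -> int:
    base = len(zlib.compress(normal))
    return len(zlib.compress(normal + seq)) - base  # extra bytes needed for seq

normal = b"open read write close " * 200
print(strangeness(normal, b"open read write close "))    # small: familiar
print(strangeness(normal, b"fork exec ptrace inject "))  # larger: anomalous
```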
40,635 | Convergence Rates of Inexact Proximal-Gradient Methods for Convex
Optimization | cs.LG | We consider the problem of optimizing the sum of a smooth convex function and
a non-smooth convex function using proximal-gradient methods, where an error is
present in the calculation of the gradient of the smooth term or in the
proximity operator with respect to the non-smooth term. We show that both the
basic proximal-gradient method and the accelerated proximal-gradient method
achieve the same convergence rate as in the error-free case, provided that the
errors decrease at appropriate rates. Using these rates, we perform as well as
or better than a carefully chosen fixed error level on a set of structured
sparsity problems. | computer science |
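A minimal (error-free) proximal-gradient sketch for the composite problem the abstract considers, instantiated for the lasso; in the paper's setting the gradient or the proximity operator would be computed inexactly, which this sketch does not model.

```python
# Basic proximal-gradient (ISTA) for 0.5*||Ax - b||^2 + lam*||x||_1.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)  # prox of t*||.||_1

def prox_grad(A, b, lam, T=300):
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(T):
        grad = A.T @ (A @ x - b)         # gradient of the smooth term
        x = soft_threshold(x - grad / L, lam / L)
    return x
```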
40,636 | Making Gradient Descent Optimal for Strongly Convex Stochastic
Optimization | cs.LG | Stochastic gradient descent (SGD) is a simple and popular method to solve
stochastic optimization problems which arise in machine learning. For strongly
convex problems, its convergence rate was known to be O(\log(T)/T), by running
SGD for T iterations and returning the average point. However, recent results
showed that using a different algorithm, one can get an optimal O(1/T) rate.
This might lead one to believe that standard SGD is suboptimal, and maybe
should even be replaced as a method of choice. In this paper, we investigate
the optimality of SGD in a stochastic setting. We show that for smooth
problems, the algorithm attains the optimal O(1/T) rate. However, for
non-smooth problems, the convergence rate with averaging might really be
\Omega(\log(T)/T), and this is not just an artifact of the analysis. On the
flip side, we show that a simple modification of the averaging step suffices to
recover the O(1/T) rate, and no other change of the algorithm is necessary. We
also present experimental results which support our findings, and point out
open problems. | computer science |
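The "simple modification of the averaging step" can be illustrated by suffix averaging: run plain SGD but average only the last half of the iterates. The strongly convex toy objective and step sizes below are assumptions for illustration.

```python
# SGD with suffix averaging on a strongly convex toy problem.
import numpy as np

def sgd_suffix_avg(grad, x0, mu, T=1000):
    x, suffix = x0.copy(), []
    for t in range(1, T + 1):
        x = x - grad(x) / (mu * t)       # standard SGD step, eta_t = 1/(mu*t)
        if t > T // 2:
            suffix.append(x.copy())      # keep only the last half of iterates
    return np.mean(suffix, axis=0)

rng = np.random.default_rng(1)
noisy_grad = lambda x: x + 0.1 * rng.standard_normal(x.shape)  # f(x)=0.5||x||^2
print(np.linalg.norm(sgd_suffix_avg(noisy_grad, np.ones(5), mu=1.0)))
```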
40,637 | Deterministic Feature Selection for $k$-means Clustering | cs.LG | We study feature selection for $k$-means clustering. Although the literature
contains many methods with good empirical performance, algorithms with provable
theoretical behavior have only recently been developed. Unfortunately, these
algorithms are randomized and fail with, say, a constant probability. We
address this issue by presenting a deterministic feature selection algorithm
for k-means with theoretical guarantees. At the heart of our algorithm lies a
deterministic method for decompositions of the identity. | computer science |
40,638 | ProPPA: A Fast Algorithm for $\ell_1$ Minimization and Low-Rank Matrix
Completion | cs.LG | We propose a Projected Proximal Point Algorithm (ProPPA) for solving a class
of optimization problems. The algorithm iteratively computes the proximal point
of the last estimated solution projected into an affine space which itself is
parallel and approaching to the feasible set. We provide convergence analysis
theoretically supporting the general algorithm, and then apply it for solving
$\ell_1$-minimization problems and the matrix completion problem. These
problems arise in many applications including machine learning, image and
signal processing. We compare our algorithm with the existing state-of-the-art
algorithms. Experimental results on solving these problems show that our
algorithm is very efficient and competitive. | computer science |
40,639 | Detecting Spammers via Aggregated Historical Data Set | cs.CR | The battle between email service providers and senders of mass unsolicited
emails (Spam) continues to gain traction. Vast numbers of Spam emails are sent
mainly from automatic botnets distributed over the world. One method for
mitigating Spam in a computationally efficient manner is fast and accurate
blacklisting of the senders. In this work we propose a new sender reputation
mechanism that is based on an aggregated historical data-set which encodes the
behavior of mail transfer agents over time. A historical data-set is created
from labeled logs of received emails. We use machine learning algorithms to
build a model that predicts the \emph{spammingness} of mail transfer agents in
the near future. The proposed mechanism is targeted mainly at large enterprises
and email service providers and can be used for updating both the black and the
white lists. We evaluate the proposed mechanism using 9.5M anonymized log
entries obtained from the biggest Internet service provider in Europe.
Experiments show that the proposed method detects more than 94% of the Spam emails
that escaped the blacklist (i.e., TPR), while having less than 0.5%
false-alarms. Therefore, the effectiveness of the proposed method is much
higher than that of previously reported reputation mechanisms, which rely on email
logs. In addition, the proposed method, when used for updating both the black
and white lists, eliminated the need for automatic content inspection of 4 out
of 5 incoming emails, which resulted in a dramatic reduction in the filtering
computational load. | computer science |
40,640 | Hamiltonian Annealed Importance Sampling for partition function
estimation | cs.LG | We introduce an extension to annealed importance sampling that uses
Hamiltonian dynamics to rapidly estimate normalization constants. We
demonstrate this method by computing log likelihoods in directed and undirected
probabilistic image models. We compare the performance of linear generative
models with both Gaussian and Laplace priors, product of experts models with
Laplace and Student's t experts, the mc-RBM, and a bilinear generative model.
We provide code to compare additional models. | computer science |
40,641 | The representer theorem for Hilbert spaces: a necessary and sufficient
condition | math.FA | A family of regularization functionals is said to admit a linear representer
theorem if every member of the family admits minimizers that lie in a fixed
finite dimensional subspace. A recent characterization states that a general
class of regularization functionals with differentiable regularizer admits a
linear representer theorem if and only if the regularization term is a
non-decreasing function of the norm. In this report, we improve on this
result by replacing the differentiability assumption with lower semi-continuity
and deriving a proof that is independent of the dimensionality of the space. | computer science |
40,642 | Hamiltonian Monte Carlo with Reduced Momentum Flips | cs.LG | Hamiltonian Monte Carlo (or hybrid Monte Carlo) with partial momentum
refreshment explores the state space more slowly than it otherwise would due to
the momentum reversals which occur on proposal rejection. These cause
trajectories to double back on themselves, leading to random walk behavior on
timescales longer than the typical rejection time, and leading to slower
mixing. I present a technique by which the number of momentum reversals can be
reduced. This is accomplished by maintaining the net exchange of probability
between states with opposite momenta, but reducing the rate of exchange in both
directions such that it is 0 in one direction. An experiment illustrates these
reduced momentum flips accelerating mixing for a particular distribution. | computer science |
40,643 | Ordinal Boltzmann Machines for Collaborative Filtering | cs.IR | Collaborative filtering is an effective recommendation technique wherein the
preference of an individual can potentially be predicted based on preferences
of other members. Early algorithms often relied on the strong locality in the
preference data; that is, it is enough to predict the preference of a user on a
particular item based on a small subset of other users with similar tastes or
of other items with similar properties. More recently, dimensionality reduction
techniques have proved to be equally competitive, and these are based on the
co-occurrence patterns rather than locality. This paper explores and extends a
probabilistic model known as Boltzmann Machine for collaborative filtering
tasks. It seamlessly integrates both the similarity and co-occurrence in a
principled manner. In particular, we study parameterisation options to deal
with the ordinal nature of the preferences, and propose a joint modelling of
both the user-based and item-based processes. Experiments on moderate and
large-scale movie recommendation show that our framework rivals existing
well-known methods. | computer science |
40,644 | Censored Exploration and the Dark Pool Problem | cs.LG | We introduce and analyze a natural algorithm for multi-venue exploration from
censored data, which is motivated by the Dark Pool Problem of modern
quantitative finance. We prove that our algorithm converges in polynomial time
to a near-optimal allocation policy; prior results for similar problems in
stochastic inventory control guaranteed only asymptotic convergence and
examined variants in which each venue could be treated independently. Our
analysis bears a strong resemblance to that of efficient exploration/
exploitation schemes in the reinforcement learning literature. We describe an
extensive experimental evaluation of our algorithm on the Dark Pool Problem
using real trading data. | computer science |
40,645 | Density Sensitive Hashing | cs.IR | Nearest neighbors search is a fundamental problem in various research fields
like machine learning, data mining and pattern recognition. Recently,
hashing-based approaches, e.g., Locality Sensitive Hashing (LSH), are proved to
be effective for scalable high dimensional nearest neighbors search. Many
hashing algorithms found their theoretic root in random projection. Since these
algorithms generate the hash tables (projections) randomly, a large number of
hash tables (i.e., long codewords) are required in order to achieve both high
precision and recall. To address this limitation, we propose a novel hashing
algorithm called {\em Density Sensitive Hashing} (DSH) in this paper. DSH can
be regarded as an extension of LSH. By exploring the geometric structure of the
data, DSH avoids the purely random projections selection and uses those
projective functions which best agree with the distribution of the data.
Extensive experimental results on real-world data sets have shown that the
proposed method achieves better performance compared to the state-of-the-art
hashing approaches. | computer science |
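Since DSH is positioned as an extension of random-projection LSH, a minimal sketch of that baseline may help; DSH itself would replace the random hyperplanes below with data-adaptive projections, which this sketch does not implement.

```python
# Random-projection LSH baseline: one sign bit per random hyperplane.
import numpy as np

def lsh_codes(X, n_bits, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_bits))  # random projections
    return (X @ W > 0).astype(np.uint8)            # binary hash codes

X = np.random.default_rng(2).standard_normal((100, 16))
codes = lsh_codes(X, n_bits=32)
# Hamming distance between codes approximates angular distance.
print(np.count_nonzero(codes[0] != codes[1]))
```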
40,646 | Malware Detection Module using Machine Learning Algorithms to Assist in
Centralized Security in Enterprise Networks | cs.CR | Malicious software is abundant in a world of innumerable computer users, who
are constantly faced with these threats from various sources like the internet,
local networks and portable drives. Malware ranges from low to high risk and
can cause systems to function incorrectly, steal data and even crash. Malware
may be executable or system library files in the form of viruses, worms,
Trojans, all aimed at breaching the security of the system and compromising
user privacy. Typically, anti-virus software is based on a signature definition
system which keeps updating from the internet and thus keeping track of known
viruses. While this may be sufficient for home-users, a security risk from a
new virus could threaten an entire enterprise network. This paper proposes a
new and more sophisticated antivirus engine that can not only scan files, but
also build knowledge and detect files as potential viruses. This is done by
extracting the system API calls made by various normal and harmful executables, and
using machine learning algorithms to classify and hence rank files on a scale
of security risk. While such a system is processor heavy, it is very effective
when used centrally to protect an enterprise network which may be more prone to
such threats. | computer science |
40,647 | Universal Algorithm for Online Trading Based on the Method of
Calibration | cs.LG | We present a universal algorithm for online trading in the stock market which
performs asymptotically at least as well as any stationary trading strategy
that computes the investment at each step using a fixed function of the side
information that belongs to a given RKHS (Reproducing Kernel Hilbert Space).
Using a universal kernel, we extend this result for any continuous stationary
strategy. In this learning process, a trader rationally chooses his gambles
using predictions made by a randomized well-calibrated algorithm. Our strategy
is based on Dawid's notion of calibration with more general checking rules and
on some modification of Kakade and Foster's randomized rounding algorithm for
computing the well-calibrated forecasts. We combine the method of randomized
calibration with Vovk's method of defensive forecasting in RKHS. Unlike the
statistical theory, no stochastic assumptions are made about the stock prices.
Our empirical results on historical markets provide strong evidence that this
type of technical trading can "beat the market" if transaction costs are
ignored. | computer science |
40,648 | Constrained Overcomplete Analysis Operator Learning for Cosparse Signal
Modelling | math.NA | We consider the problem of learning a low-dimensional signal model from a
collection of training samples. The mainstream approach would be to learn an
overcomplete dictionary to provide good approximations of the training samples
using sparse synthesis coefficients. This famous sparse model has a less well
known counterpart, in analysis form, called the cosparse analysis model. In
this new model, signals are characterised by their parsimony in a transformed
domain using an overcomplete (linear) analysis operator. We propose to learn an
analysis operator from a training corpus using a constrained optimisation
framework based on L1 optimisation. The reason for introducing a constraint in
the optimisation framework is to exclude trivial solutions. Although there is
no final answer here as to which constraint is the most relevant, we
investigate some conventional constraints in the model adaptation field and use
the uniformly normalised tight frame (UNTF) for this purpose. We then derive a
practical learning algorithm, based on projected subgradients and
the Douglas-Rachford splitting technique, and demonstrate its ability to robustly
recover a ground truth analysis operator, when provided with a clean training
set of sufficient size. We also find an analysis operator for images, using
some noisy cosparse signals, which is indeed a more realistic experiment. As
the derived optimisation problem is not a convex program, we often find a local
minimum using such variational methods. Some local optimality conditions are
derived for two different settings, providing preliminary theoretical support
for the well-posedness of the learning problem under appropriate conditions. | computer science |
40,649 | Diffusion Adaptation over Networks | cs.MA | Adaptive networks are well-suited to perform decentralized information
processing and optimization tasks and to model various types of self-organized
and complex behavior encountered in nature. Adaptive networks consist of a
collection of agents with processing and learning abilities. The agents are
linked together through a connection topology, and they cooperate with each
other through local interactions to solve distributed optimization, estimation,
and inference problems in real-time. The continuous diffusion of information
across the network enables agents to adapt their performance in relation to
streaming data and network conditions; it also results in improved adaptation
and learning performance relative to non-cooperative agents. This article
provides an overview of diffusion strategies for adaptation and learning over
networks. The article is divided into several sections: 1. Motivation; 2.
Mean-Square-Error Estimation; 3. Distributed Optimization via Diffusion
Strategies; 4. Adaptive Diffusion Strategies; 5. Performance of
Steepest-Descent Diffusion Strategies; 6. Performance of Adaptive Diffusion
Strategies; 7. Comparing the Performance of Cooperative Strategies; 8.
Selecting the Combination Weights; 9. Diffusion with Noisy Information
Exchanges; 10. Extensions and Further Considerations; Appendix A: Properties of
Kronecker Products; Appendix B: Graph Laplacian and Network Connectivity;
Appendix C: Stochastic Matrices; Appendix D: Block Maximum Norm; Appendix E:
Comparison with Consensus Strategies; References. | computer science |
40,650 | From Exact Learning to Computing Boolean Functions and Back Again | cs.LG | The goal of the paper is to relate complexity measures associated with the
evaluation of Boolean functions (certificate complexity, decision tree
complexity) and learning dimensions used to characterize exact learning
(teaching dimension, extended teaching dimension). The high level motivation is
to discover non-trivial relations between exact learning of an unknown concept
and testing whether an unknown concept is part of a concept class or not.
Concretely, the goal is to provide lower and upper bounds of complexity
measures for one problem type in terms of the other. | computer science |
40,651 | Streaming Algorithms for Pattern Discovery over Dynamically Changing
Event Sequences | cs.LG | Discovering frequent episodes over event sequences is an important data
mining task. In many applications, events constituting the data sequence arrive
as a stream, at furious rates, and recent trends (or frequent episodes) can
change and drift due to the dynamical nature of the underlying event generation
process. The ability to detect and track such changing sets of frequent
episodes can be valuable in many application scenarios. Current methods for
frequent episode discovery are typically multipass algorithms, making them
unsuitable in the streaming context. In this paper, we propose a new streaming
algorithm for discovering frequent episodes over a window of recent events in
the stream. Our algorithm processes events as they arrive, one batch at a time,
while discovering the top frequent episodes over a window consisting of several
batches in the immediate past. We derive approximation guarantees for our
algorithm under the condition that frequent episodes are approximately
well-separated from infrequent ones in every batch of the window. We present
extensive experimental evaluations of our algorithm on both real and synthetic
data. We also present comparisons with baselines and adaptations of streaming
algorithms from itemset mining literature. | computer science |
40,652 | Visual and semantic interpretability of projections of high dimensional
data for classification tasks | cs.HC | A number of visual quality measures have been introduced in visual analytics
literature in order to automatically select the best views of high dimensional
data from a large number of candidate data projections. These methods generally
concentrate on the interpretability of the visualization and pay little
attention to the interpretability of the projection axes. In this paper, we
argue that interpretability of the visualizations and the feature
transformation functions are both crucial for visual exploration of high
dimensional labeled data. We present a two-part user study to examine these two
related but orthogonal aspects of interpretability. We first study how humans
judge the quality of 2D scatterplots of various datasets with varying number of
classes and provide comparisons with ten automated measures, including a number
of visual quality measures and related measures from various machine learning
fields. We then investigate how the user perception on interpretability of
mathematical expressions relate to various automated measures of complexity
that can be used to characterize data projection functions. We conclude with a
discussion of how automated measures of visual and semantic interpretability of
data projections can be used together for exploratory analysis in
classification tasks. | computer science |
40,653 | Clustering is difficult only when it does not matter | cs.LG | Numerous papers ask how difficult it is to cluster data. We suggest that the
more relevant and interesting question is how difficult it is to cluster data
sets {\em that can be clustered well}. More generally, despite the ubiquity and
the great importance of clustering, we still do not have a satisfactory
mathematical theory of clustering. In order to properly understand clustering,
it is clearly necessary to develop a solid theoretical basis for the area. For
example, from the perspective of computational complexity theory the clustering
problem seems very hard. Numerous papers introduce various criteria and
numerical measures to quantify the quality of a given clustering. The resulting
conclusions are pessimistic, since it is computationally difficult to find an
optimal clustering of a given data set, if we go by any of these popular
criteria. In contrast, the practitioners' perspective is much more optimistic.
Our explanation for this disparity of opinions is that complexity theory
concentrates on the worst case, whereas in reality we only care for data sets
that can be clustered well.
We introduce a theoretical framework of clustering in metric spaces that
revolves around a notion of "good clustering". We show that if a good
clustering exists, then in many cases it can be efficiently found. Our
conclusion is that contrary to popular belief, clustering should not be
considered a hard task. | computer science |
40,654 | On the practically interesting instances of MAXCUT | cs.CC | The complexity of a computational problem is traditionally quantified based
on the hardness of its worst case. This approach has many advantages and has
led to a deep and beautiful theory. However, from the practical perspective,
this leaves much to be desired. In application areas, practically interesting
instances very often occupy just a tiny part of an algorithm's space of
instances, and the vast majority of instances are simply irrelevant. Addressing
these issues is a major challenge for theoretical computer science which may
make theory more relevant to the practice of computer science.
Following Bilu and Linial, we apply this perspective to MAXCUT, viewed as a
clustering problem. Using a variety of techniques, we investigate practically
interesting instances of this problem. Specifically, we show how to solve in
polynomial time distinguished, metric, expanding and dense instances of MAXCUT
under mild stability assumptions. In particular, $(1+\epsilon)$-stability
(which is optimal) suffices for metric and dense MAXCUT. We also show how to
solve in polynomial time $\Omega(\sqrt{n})$-stable instances of MAXCUT,
substantially improving the best previously known result. | computer science |
40,655 | A hybrid clustering algorithm for data mining | cs.DB | Data clustering is a process of arranging similar data into groups. A
clustering algorithm partitions a data set into several groups such that the
similarity within a group is higher than among groups. In this paper, a hybrid
clustering algorithm based on K-means and K-harmonic means (KHM) is described.
The proposed algorithm is tested on five different datasets. The research is
focused on fast and accurate clustering. Its performance is compared with the
traditional K-means & KHM algorithm. The result obtained from proposed hybrid
algorithm is much better than the traditional K-mean & KHM algorithm. | computer science |
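For the hybrid K-means/KHM record above, a minimal sketch of one plausible such hybrid, under stated assumptions: KHM iterations (which are less sensitive to initialization) seed a final K-means refinement. The KHM update rules below follow the standard KHM formulation; pairing it with K-means this way is an illustrative guess, not a reproduction of the paper's exact algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def khm_centers(X, k, p=3.5, iters=20, eps=1e-8, seed=0):
    """K-harmonic-means center updates (soft memberships + harmonic weights)."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]        # random initial centers
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + eps
        m = d ** (-p - 2)
        m /= m.sum(axis=1, keepdims=True)              # soft memberships
        w = (d ** (-p - 2)).sum(1) / (d ** (-p)).sum(1) ** 2   # per-point weights
        mw = m * w[:, None]
        C = (mw.T @ X) / mw.sum(axis=0)[:, None]       # weighted center update
    return C

X = np.random.default_rng(1).normal(size=(300, 2))
C0 = khm_centers(X, k=4)
labels = KMeans(n_clusters=4, init=C0, n_init=1).fit_predict(X)  # K-means refinement
```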
40,656 | Algorithms for Approximate Minimization of the Difference Between
Submodular Functions, with Applications | cs.DS | We extend the work of Narasimhan and Bilmes [30] for minimizing set functions
representable as a difference between submodular functions. Similar to [30],
our new algorithms are guaranteed to monotonically reduce the objective
function at every step. We empirically and theoretically show that the
per-iteration cost of our algorithms is much less than [30], and our algorithms
can be used to efficiently minimize a difference between submodular functions
under various combinatorial constraints, a problem not previously addressed. We
provide computational bounds and a hardness result on the multiplicative
inapproximability of minimizing the difference between submodular functions. We
show, however, that it is possible to give worst-case additive bounds by
providing a polynomial time computable lower-bound on the minima. Finally we
show how a number of machine learning problems can be modeled as minimizing the
difference between submodular functions. We experimentally show the validity of
our algorithms by testing them on the problem of feature selection with
submodular cost features. | computer science |
40,657 | Robust Online Hamiltonian Learning | cs.LG | In this work we combine two distinct machine learning methodologies,
sequential Monte Carlo and Bayesian experimental design, and apply them to the
problem of inferring the dynamical parameters of a quantum system. We design
the algorithm with practicality in mind by including parameters that control
trade-offs between the requirements on computational and experimental
resources. The algorithm can be implemented online (during experimental data
collection), avoiding the need for storage and post-processing. Most
importantly, our algorithm is capable of learning Hamiltonian parameters even
when the parameters change from experiment-to-experiment, and also when
additional noise processes are present and unknown. The algorithm also
numerically estimates the Cramer-Rao lower bound, certifying its own
performance. | computer science |
40,658 | From Fields to Trees | stat.CO | We present new MCMC algorithms for computing the posterior distributions and
expectations of the unknown variables in undirected graphical models with
regular structure. For demonstration purposes, we focus on Markov Random Fields
(MRFs). By partitioning the MRFs into non-overlapping trees, it is possible to
compute the posterior distribution of a particular tree exactly by conditioning
on the remaining tree. These exact solutions allow us to construct efficient
blocked and Rao-Blackwellised MCMC algorithms. We show empirically that tree
sampling is considerably more efficient than other partitioned sampling schemes
and the naive Gibbs sampler, even in cases where loopy belief propagation fails
to converge. We prove that tree sampling exhibits lower variance than the naive
Gibbs sampler and other naive partitioning schemes using the theoretical
measure of maximal correlation. We also construct new information theory tools
for comparing different MCMC schemes and show that, under these, tree sampling
is more efficient. | computer science |
40,659 | Maximum Entropy for Collaborative Filtering | cs.IR | Within the task of collaborative filtering two challenges for computing
conditional probabilities exist. First, the amount of training data available
is typically sparse with respect to the size of the domain. Thus, support for
higher-order interactions is generally not present. Second, the variables that
we are conditioning upon vary for each query. That is, users label different
variables during each query. For this reason, there is no consistent input to
output mapping. To address these problems we propose a maximum entropy approach
using a non-standard measure of entropy. This approach reduces to a set of
linear equations that can be solved efficiently. | computer science |
40,660 | Learning Probabilistic Systems from Tree Samples | cs.LO | We consider the problem of learning a non-deterministic probabilistic system
consistent with a given finite set of positive and negative tree samples.
Consistency is defined with respect to strong simulation conformance. We
propose learning algorithms that use traditional and a new "stochastic"
state-space partitioning, the latter resulting in the minimum number of states.
We then use them to solve the problem of "active learning", that uses a
knowledgeable teacher to generate samples as counterexamples to simulation
equivalence queries. We show that the problem is undecidable in general, but
that it becomes decidable under a suitable condition on the teacher which comes
naturally from the way samples are generated from failed simulation checks. The
latter problem is shown to be undecidable if we impose an additional condition
on the learner to always conjecture a "minimum state" hypothesis. We therefore
propose a semi-algorithm using stochastic partitions. Finally, we apply the
proposed (semi-) algorithms to infer intermediate assumptions in an automated
assume-guarantee verification framework for probabilistic systems. | computer science |
40,661 | Touchalytics: On the Applicability of Touchscreen Input as a Behavioral
Biometric for Continuous Authentication | cs.CR | We investigate whether a classifier can continuously authenticate users based
on the way they interact with the touchscreen of a smart phone. We propose a
set of 30 behavioral touch features that can be extracted from raw touchscreen
logs and demonstrate that different users populate distinct subspaces of this
feature space. In a systematic experiment designed to test how this behavioral
pattern exhibits consistency over time, we collected touch data from users
interacting with a smart phone using basic navigation maneuvers, i.e., up-down
and left-right scrolling. We propose a classification framework that learns the
touch behavior of a user during an enrollment phase and is able to accept or
reject the current user by monitoring interaction with the touch screen. The
classifier achieves a median equal error rate of 0% for intra-session
authentication, 2%-3% for inter-session authentication and below 4% when the
authentication test was carried out one week after the enrollment phase. While
our experimental findings disqualify this method as a standalone authentication
mechanism for long-term authentication, it could be implemented as a means to
extend screen-lock time or as a part of a multi-modal biometric authentication
system. | computer science |
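For the Touchalytics record above, a hedged sketch of the enrollment/verification loop: train a classifier on a user's touch feature vectors and score the equal error rate (EER). Feature extraction is assumed already done; the 30-dimensional feature matrices here are synthetic stand-ins, and a real evaluation would use held-out sessions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
genuine  = rng.normal(0.0, 1.0, size=(200, 30))    # enrolled user's strokes
impostor = rng.normal(0.8, 1.0, size=(200, 30))    # other users' strokes
X = np.vstack([genuine, impostor])
y = np.r_[np.ones(200), np.zeros(200)]

clf = SVC(kernel="rbf", probability=True).fit(X, y)
scores = clf.predict_proba(X)[:, 1]
fpr, tpr, _ = roc_curve(y, scores)
eer = fpr[np.nanargmin(np.abs(fpr - (1 - tpr)))]   # EER: where FPR equals FNR
print(f"EER on training data: {eer:.3f}")
```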
40,662 | Gaussian process regression as a predictive model for Quality-of-Service
in Web service systems | cs.NI | In this paper, we present the Gaussian process regression as the predictive
model for Quality-of-Service (QoS) attributes in Web service systems. The goal
is to predict performance of the execution system expressed as QoS attributes
given existing execution system, service repository, and inputs, e.g., streams
of requests. In order to evaluate the performance of Gaussian process
regression, a simulation environment was developed. Two quality indexes were
used, namely Mean Absolute Error and Mean Squared Error. The results of the
experiment show that the Gaussian process performed best with a linear kernel
and statistically significantly better than the Classification and Regression
Trees (CART) method. | computer science |
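A minimal sketch of the setup just described, under stated assumptions: Gaussian process regression with a linear (dot-product) kernel predicting a QoS attribute from request-stream features, scored with MAE and MSE. The data is synthetic; the paper's simulation environment is not reproduced.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import DotProduct, WhiteKernel
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 3))               # e.g., request rate, size, mix
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.1, 200)   # stand-in QoS value

gp = GaussianProcessRegressor(kernel=DotProduct() + WhiteKernel())
gp.fit(X[:150], y[:150])
pred = gp.predict(X[150:])
print("MAE:", mean_absolute_error(y[150:], pred))
print("MSE:", mean_squared_error(y[150:], pred))
```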
40,663 | Predicate Generation for Learning-Based Quantifier-Free Loop Invariant
Inference | cs.LO | We address the predicate generation problem in the context of loop invariant
inference. Motivated by the interpolation-based abstraction refinement
technique, we apply the interpolation theorem to synthesize predicates
implicitly implied by program texts. Our technique is able to improve the
effectiveness and efficiency of the learning-based loop invariant inference
algorithm in [14]. We report experiment results of examples from Linux,
SPEC2000, and Tar utility. | computer science |
40,664 | Label-dependent Feature Extraction in Social Networks for Node
Classification | cs.SI | A new method of feature extraction in social networks for within-network
classification is proposed in this paper. The method derives new features by
combining network structure information with the class labels assigned to
nodes. The influence of the various features on classification performance has
also been studied. Experiments on real-world data have shown that features
created by the proposed method can lead to a significant improvement in
classification accuracy. | computer science |
40,665 | Discovery of factors in matrices with grades | cs.LG | We present an approach to decomposition and factor analysis of matrices with
ordinal data. The matrix entries are grades to which objects represented by
rows satisfy attributes represented by columns, e.g. grades to which an image
is red, a product has a given feature, or a person performs well in a test. We
assume that the grades form a bounded scale equipped with certain aggregation
operators and that conforms to the structure of a complete residuated lattice. We
present a greedy approximation algorithm for the problem of decomposition of
such a matrix into a product of two matrices with grades under the restriction that
the number of factors be small. Our algorithm is based on a geometric insight
provided by a theorem identifying particular rectangular-shaped submatrices as
optimal factors for the decompositions. These factors correspond to formal
concepts of the input data and allow an easy interpretation of the
decomposition. We present illustrative examples and experimental evaluation. | computer science |
40,666 | Mining Representative Unsubstituted Graph Patterns Using Prior
Similarity Matrix | cs.CE | One of the most powerful techniques to study protein structures is to look
for recurrent fragments (also called substructures or spatial motifs), then use
them as patterns to characterize the proteins under study. An emergent trend
consists in parsing proteins' three-dimensional (3D) structures into graphs of
amino acids. Hence, the search for recurrent spatial motifs is formulated as a
process of frequent subgraph discovery where each subgraph represents a spatial
motif. In this scope, several efficient approaches for frequent subgraph
discovery have been proposed in the literature. However, the set of discovered
frequent subgraphs is too large to be efficiently analyzed and explored in any
further process. In this paper, we propose a novel pattern selection approach
that shrinks the large number of discovered frequent subgraphs by selecting the
representative ones. Existing pattern selection approaches do not exploit the
domain knowledge. Yet, in our approach we incorporate the evolutionary
information of amino acids defined in the substitution matrices in order to
select the representative subgraphs. We show the effectiveness of our approach
on a number of real datasets. The results of our experiments show that
our approach is able to considerably decrease the number of motifs while
enhancing their interestingness. | computer science |
40,667 | Mini-Batch Primal and Dual Methods for SVMs | cs.LG | We address the issue of using mini-batches in stochastic optimization of
SVMs. We show that the same quantity, the spectral norm of the data, controls
the parallelization speedup obtained for both primal stochastic subgradient
descent (SGD) and stochastic dual coordinate ascent (SDCA) methods and use it
to derive novel variants of mini-batched SDCA. Our guarantees for both methods
are expressed in terms of the original nonsmooth primal problem based on the
hinge-loss. | computer science |
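As background for the primal side of the record above, a minimal Pegasos-style mini-batch subgradient step on the hinge loss. The paper's spectral-norm-based speedup analysis and its mini-batched SDCA variants are not reproduced; this only shows the baseline update being analyzed.

```python
import numpy as np

def minibatch_pegasos(X, y, lam=0.01, batch=16, epochs=10, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, t = np.zeros(d), 1
    for _ in range(epochs):
        for _ in range(n // batch):
            idx = rng.choice(n, batch, replace=False)
            Xb, yb = X[idx], y[idx]
            viol = yb * (Xb @ w) < 1                      # margin violations
            grad = lam * w - (yb[viol][:, None] * Xb[viol]).sum(0) / batch
            w -= grad / (lam * t)                         # step size 1/(lambda*t)
            t += 1
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=500))
w = minibatch_pegasos(X, y)
print("train accuracy:", (np.sign(X @ w) == y).mean())
```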
40,668 | Revealing Cluster Structure of Graph by Path Following Replicator
Dynamic | cs.LG | In this paper, we propose a path following replicator dynamic, and
investigate its potentials in uncovering the underlying cluster structure of a
graph. The proposed dynamic is a generalization of the discrete replicator
dynamic. The replicator dynamic has been successfully used to extract dense
clusters of graphs; however, it is often sensitive to the degree distribution
of a graph, is usually biased by vertices with large degrees, and thus may fail to
detect the densest cluster. To overcome this problem, we introduce a dynamic
parameter, called path parameter, into the evolution process. The path
parameter can be interpreted as the maximal possible probability of a current
cluster containing a vertex, and it monotonically increases as evolution
process proceeds. By limiting the maximal probability, the phenomenon of some
vertices dominating the early stage of evolution process is suppressed, thus
making evolution process more robust. To solve the optimization problem with a
fixed path parameter, we propose an efficient fixed point algorithm. The time
complexity of the path following replicator dynamic is only linear in the
number of edges of a graph, thus it can analyze graphs with millions of
vertices and tens of millions of edges on a common PC in a few minutes.
Besides, it can be naturally generalized to hypergraphs and graphs with edges of
different orders. We apply it to four important problems: maximum clique
problem, densest k-subgraph problem, structure fitting, and discovery of
high-density regions. The extensive experimental results clearly demonstrate
its advantages, in terms of robustness, scalability and flexibility. | computer science |
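For orientation on the record above, a sketch of the baseline it generalizes: the discrete replicator dynamic on a graph's adjacency matrix, whose fixed points concentrate on dense clusters. The paper's path parameter, which caps each vertex's probability during evolution, is not reproduced here.

```python
import numpy as np

def replicator(A, iters=200, seed=0):
    x = np.random.default_rng(seed).uniform(size=A.shape[0])
    x /= x.sum()                                   # start on the simplex
    for _ in range(iters):
        Ax = A @ x
        x = x * Ax / (x @ Ax)                      # replicator update, stays on simplex
    return x

A = np.array([[0, 1, 1, 0],                        # toy graph: {0, 1, 2} is a triangle
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = replicator(A)
print("cluster support:", np.flatnonzero(x > 1e-3))   # expect the triangle vertices
```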
40,669 | Hybrid Q-Learning Applied to Ubiquitous recommender system | cs.LG | Ubiquitous information access is becoming more and more important nowadays,
and research aims to adapt it to users. Our work consists in applying machine
learning techniques in order to address some of the problems concerning user
acceptance of such systems. To achieve this, we propose a fundamental shift in
how we model the learning of a recommender system: inspired by models of human
reasoning developed in robotics, we combine reinforcement learning and
case-based reasoning to define a
recommendation process that uses these two approaches for generating
recommendations on different context dimensions (social, temporal, geographic).
We describe an implementation of the recommender system based on this
framework. We also present preliminary results from experiments with the system
and show how our approach increases the recommendation quality. | computer science |
40,670 | Machine Learning for Bioclimatic Modelling | cs.LG | Many machine learning (ML) approaches are widely used to generate bioclimatic
models for predicting the geographic range of organisms as a function of
climate. Applications such as predicting range shifts in organisms and the
range of invasive species influenced by climate change are important in
understanding the impact of climate change. However, the success of machine
learning-based
approaches depends on a number of factors. While it can be safely said that no
particular ML technique can be effective in all applications and success of a
technique is predominantly dependent on the application or the type of the
problem, it is useful to understand their behavior to ensure informed choice of
techniques. This paper presents a comprehensive review of machine
learning-based bioclimatic model generation and analyses the factors
influencing success of such models. Considering the wide use of statistical
techniques, in our discussion we also include conventional statistical
techniques used in bioclimatic modelling. | computer science |
40,671 | A Cooperative Q-learning Approach for Real-time Power Allocation in
Femtocell Networks | cs.MA | In this paper, we address the problem of distributed interference management
of cognitive femtocells that share the same frequency range with macrocells
(primary user) using distributed multi-agent Q-learning. We formulate and solve
three problems representing three different Q-learning algorithms: namely,
centralized, distributed and partially distributed power control using
Q-learning (CPC-Q, DPC-Q and PDPC-Q). CPC-Q, although not of practical interest,
characterizes the global optimum. Each of DPC-Q and PDPC-Q works in two
different learning paradigms: Independent (IL) and Cooperative (CL). The former
is considered the simplest form of applying Q-learning in multi-agent
scenarios, where all the femtocells learn independently. The latter is the
proposed scheme in which femtocells share partial information during the
learning process in order to strike a balance between practical relevance and
performance. In terms of performance, the simulation results showed that the CL
paradigm outperforms the IL paradigm and achieves an aggregate femtocells
capacity that is very close to the optimal one. For the practical relevance
issue, we evaluate the robustness and scalability of DPC-Q, in real time, by
deploying new femtocells in the system during the learning process, where we
showed that DPC-Q in the CL paradigm is scalable to a large number of
femtocells and more robust to network dynamics compared to the IL paradigm. | computer science |
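A deliberately hypothetical sketch of the tabular Q-learning update each femtocell runs in the distributed setting above: states might encode quantized interference levels, actions are discrete power levels, and the reward trades capacity against interference. The environment below is a placeholder, not the paper's system model.

```python
import numpy as np

n_states, n_powers = 4, 5
Q = np.zeros((n_states, n_powers))
alpha, gamma, eps = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

def env_step(state, action):
    # placeholder environment: reward favors a mid power level
    reward = -abs(action - 2) + rng.normal(0, 0.1)
    return reward, int(rng.integers(n_states))

s = 0
for _ in range(5000):
    a = int(rng.integers(n_powers)) if rng.random() < eps else int(Q[s].argmax())
    r, s2 = env_step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])   # Q-learning update
    s = s2
print("learned power level per state:", Q.argmax(axis=1))
```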
40,672 | Improving CUR Matrix Decomposition and the Nyström Approximation via
Adaptive Sampling | cs.LG | The CUR matrix decomposition and the Nystr\"{o}m approximation are two
important low-rank matrix approximation techniques. The Nystr\"{o}m method
approximates a symmetric positive semidefinite matrix in terms of a small
number of its columns, while CUR approximates an arbitrary data matrix by a
small number of its columns and rows. Thus, CUR decomposition can be regarded
as an extension of the Nystr\"{o}m approximation.
In this paper we establish a more general error bound for the adaptive
column/row sampling algorithm, based on which we propose more accurate CUR and
Nystr\"{o}m algorithms with expected relative-error bounds. The proposed CUR
and Nystr\"{o}m algorithms also have low time complexity and can avoid
maintaining the whole data matrix in RAM. In addition, we give theoretical
analysis for the lower error bounds of the standard Nystr\"{o}m method and the
ensemble Nystr\"{o}m method. The main theoretical results established in this
paper are novel, and our analysis makes no special assumption on the data
matrices. | computer science |
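For context on the record above, a sketch of the plain Nyström approximation the paper improves: sample c columns of a symmetric PSD matrix K and reconstruct K ~ C W^+ C^T. Uniform sampling is used here for brevity; the paper's adaptive column/row sampling is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
K = np.exp(-np.square(X[:, None] - X[None, :]).sum(-1))   # RBF kernel matrix (PSD)

c = 40
idx = rng.choice(K.shape[0], c, replace=False)            # uniform column sample
C = K[:, idx]                                             # sampled columns
W = K[np.ix_(idx, idx)]                                   # intersection block
K_hat = C @ np.linalg.pinv(W) @ C.T                       # Nystrom reconstruction

err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
print(f"relative Frobenius error: {err:.4f}")
```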
40,673 | On Improving Energy Efficiency within Green Femtocell Networks: A
Hierarchical Reinforcement Learning Approach | cs.LG | One of the efficient solutions of improving coverage and increasing capacity
in cellular networks is the deployment of femtocells. As the cellular networks
are becoming more complex, energy consumption of whole network infrastructure
is becoming important in terms of both operational costs and environmental
impacts. This paper investigates energy efficiency of two-tier femtocell
networks through combining game theory and stochastic learning. With the
Stackelberg game formulation, a hierarchical reinforcement learning framework
is applied for studying the joint expected utility maximization of macrocells
and femtocells subject to the minimum signal-to-interference-plus-noise-ratio
requirements. In the learning procedure, the macrocells act as leaders and the
femtocells are followers. At each time step, the leaders commit to dynamic
strategies based on the best responses of the followers, while the followers
compete against each other with no further information but the leaders'
transmission parameters. In this paper, we propose two reinforcement learning
based intelligent algorithms to schedule each cell's stochastic power levels.
Numerical experiments are presented to validate the investigations. The results
show that the two learning algorithms substantially improve the energy
efficiency of the femtocell networks. | computer science |
40,674 | Non-Asymptotic Convergence Analysis of Inexact Gradient Methods for
Machine Learning Without Strong Convexity | math.OC | Many recent applications in machine learning and data fitting call for the
algorithmic solution of structured smooth convex optimization problems.
Although the gradient descent method is a natural choice for this task, it
requires exact gradient computations and hence can be inefficient when the
problem size is large or the gradient is difficult to evaluate. Therefore,
there has been much interest in inexact gradient methods (IGMs), in which an
efficiently computable approximate gradient is used to perform the update in
each iteration. Currently, non-asymptotic linear convergence results for IGMs
are typically established under the assumption that the objective function is
strongly convex, which is not satisfied in many applications of interest; while
linear convergence results that do not require the strong convexity assumption
are usually asymptotic in nature. In this paper, we combine the best of these
two types of results and establish---under the standard assumption that the
gradient approximation errors decrease linearly to zero---the non-asymptotic
linear convergence of IGMs when applied to a class of structured convex
optimization problems. Such a class covers settings where the objective
function is not necessarily strongly convex and includes the least squares and
logistic regression problems. We believe that our techniques will find further
applications in the non-asymptotic convergence analysis of other first-order
methods. | computer science |
40,675 | API design for machine learning software: experiences from the
scikit-learn project | cs.LG | Scikit-learn is an increasingly popular machine learning library. Written
in Python, it is designed to be simple and efficient, accessible to
non-experts, and reusable in various contexts. In this paper, we present and
discuss our design choices for the application programming interface (API) of
the project. In particular, we describe the simple and elegant interface shared
by all learning and processing units in the library and then discuss its
advantages in terms of composition and reusability. The paper also comments on
implementation details specific to the Python ecosystem and analyzes obstacles
faced by users and developers of the library. | computer science |
40,676 | A Comparism of the Performance of Supervised and Unsupervised Machine
Learning Techniques in evolving Awale/Mancala/Ayo Game Player | cs.LG | Awale games have become widely recognized across the world for the innovative
strategies and techniques used in evolving the agents (players), which have
produced interesting results under various conditions. This paper compares the
two major machine learning techniques by reviewing their performance when
using minimax, an endgame database, a combination of both, or other
techniques, and determines which techniques are best. | computer science |
40,677 | Maximizing submodular functions using probabilistic graphical models | cs.LG | We consider the problem of maximizing submodular functions; while this
problem is known to be NP-hard, several numerically efficient local search
techniques with approximation guarantees are available. In this paper, we
propose a novel convex relaxation which is based on the relationship between
submodular functions, entropies and probabilistic graphical models. In a
graphical model, the entropy of the joint distribution decomposes as a sum of
marginal entropies of subsets of variables; moreover, for any distribution, the
entropy of the closest distribution factorizing in the graphical model provides
an bound on the entropy. For directed graphical models, this last property
turns out to be a direct consequence of the submodularity of the entropy
function, and allows the generalization of graphical-model-based upper bounds
to any submodular functions. These upper bounds may then be jointly maximized
with respect to a set, while minimized with respect to the graph, leading to a
convex variational inference scheme for maximizing submodular functions, based
on outer approximations of the marginal polytope and maximum likelihood bounded
treewidth structures. By considering graphs of increasing treewidths, we may
then explore the trade-off between computational complexity and tightness of
the relaxation. We also present extensions to constrained problems and
maximizing the difference of submodular functions, which include all possible
set functions. | computer science |
40,678 | Convex relaxations of structured matrix factorizations | cs.LG | We consider the factorization of a rectangular matrix $X $ into a positive
linear combination of rank-one factors of the form $u v^\top$, where $u$ and
$v$ belong to certain sets $\mathcal{U}$ and $\mathcal{V}$ that may encode
specific structures regarding the factors, such as positivity or sparsity. In
this paper, we show that computing the optimal decomposition is equivalent to
computing a certain gauge function of $X$ and we provide a detailed analysis of
these gauge functions and their polars. Since these gauge functions are
typically hard to compute, we present semi-definite relaxations and several
algorithms that may recover approximate decompositions with approximation
guarantees. We illustrate our results with simulations on finding
decompositions with elements in $\{0,1\}$. As side contributions, we present a
detailed analysis of variational quadratic representations of norms as well as
a new iterative basis pursuit algorithm that can deal with inexact first-order
oracles. | computer science |
40,679 | Attribute-Efficient Evolvability of Linear Functions | cs.LG | In a seminal paper, Valiant (2006) introduced a computational model for
evolution to address the question of complexity that can arise through
Darwinian mechanisms. Valiant views evolution as a restricted form of
computational learning, where the goal is to evolve a hypothesis that is close
to the ideal function. Feldman (2008) showed that (correlational) statistical
query learning algorithms could be framed as evolutionary mechanisms in
Valiant's model. P. Valiant (2012) considered evolvability of real-valued
functions and also showed that weak-optimization algorithms that use
weak-evaluation oracles could be converted to evolutionary mechanisms.
In this work, we focus on the complexity of representations of evolutionary
mechanisms. In general, the reductions of Feldman and P. Valiant may result in
intermediate representations that are arbitrarily complex (polynomial-sized
circuits). We argue that biological constraints often dictate that the
representations have low complexity, such as constant depth and fan-in
circuits. We give mechanisms for evolving sparse linear functions under a large
class of smooth distributions. These evolutionary algorithms are
attribute-efficient in the sense that the size of the representations and the
number of generations required depend only on the sparsity of the target
function and the accuracy parameter, but have no dependence on the total number
of attributes. | computer science |
40,680 | Bayesian rules and stochastic models for high accuracy prediction of
solar radiation | cs.LG | It is essential to find solar predictive methods in order to massively
integrate renewable energies into the electrical distribution grid. The goal
of this study is to find the methodology that predicts hourly global radiation
with the highest accuracy. Knowledge of this quantity is essential for the
grid manager or the private PV producer in order to anticipate fluctuations
related to clouds occurrences and to stabilize the injected PV power. In this
paper, we test both methodologies: single and hybrid predictors. In the first
class, we include the multi-layer perceptron (MLP), auto-regressive and moving
average (ARMA), and persistence models. In the second class, we mix these
predictors with Bayesian rules to obtain ad-hoc models selections, and Bayesian
averages of outputs related to single models. If MLP and ARMA are equivalent
(nRMSE close to 40.5% for both), this hybridization yields an nRMSE gain of
more than 14 percentage points compared to the persistence estimation
(nRMSE=37% versus 51%). | computer science |
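A small sketch of the persistence baseline and the nRMSE score quoted above: persistence predicts the next hour's global radiation as the current hour's value, and nRMSE is the RMSE normalized by the mean of the observations. The radiation series here is a synthetic stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(24 * 30)
radiation = np.clip(np.sin(2 * np.pi * hours / 24), 0, None) * 800 \
            + rng.normal(0, 40, hours.size)       # crude clear-sky-plus-noise series

pred = radiation[:-1]                              # persistence: y_hat(t+1) = y(t)
obs = radiation[1:]
nrmse = np.sqrt(np.mean((obs - pred) ** 2)) / obs.mean()
print(f"persistence nRMSE: {100 * nrmse:.1f}%")
```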
40,681 | Speedy Model Selection (SMS) for Copula Models | cs.LG | We tackle the challenge of efficiently learning the structure of expressive
multivariate real-valued densities of copula graphical models. We start by
theoretically substantiating the conjecture that for many copula families the
magnitude of Spearman's rank correlation coefficient is monotone in the
expected contribution of an edge in the network, namely the negative copula
entropy. We then build on this theory and suggest a novel Bayesian approach
that makes use of a prior over values of Spearman's rho for learning
copula-based models that involve a mix of copula families. We demonstrate the
generalization effectiveness of our highly efficient approach on sizable and
varied real-life datasets. | computer science |
40,682 | Detecting Fake Escrow Websites using Rich Fraud Cues and Kernel Based
Methods | cs.CY | The ability to automatically detect fraudulent escrow websites is important
in order to alleviate online auction fraud. Despite research on related topics,
fake escrow website categorization has received little attention. In this study
we evaluated the effectiveness of various features and techniques for detecting
fake escrow websites. Our analysis included a rich set of features extracted
from web page text, image, and link information. We also proposed a composite
kernel tailored to represent the properties of fake websites, including content
duplication and structural attributes. Experiments were conducted to assess the
proposed features, techniques, and kernels on a test bed encompassing nearly
90,000 web pages derived from 410 legitimate and fake escrow sites. The
combination of an extended feature set and the composite kernel attained over
98% accuracy when differentiating fake sites from real ones, using the support
vector machines algorithm. The results suggest that automated web-based
information systems for detecting fake escrow sites could be feasible and may
be utilized as authentication mechanisms. | computer science |
40,683 | Evaluating Link-Based Techniques for Detecting Fake Pharmacy Websites | cs.CY | Fake online pharmacies have become increasingly pervasive, constituting over
90% of online pharmacy websites. There is a need for fake website detection
techniques capable of identifying fake online pharmacy websites with a high
degree of accuracy. In this study, we compared several well-known link-based
detection techniques on a large-scale test bed with the hyperlink graph
encompassing over 80 million links between 15.5 million web pages, including
1.2 million known legitimate and fake pharmacy pages. We found that the QoC and
QoL class propagation algorithms achieved an accuracy of over 90% on our
dataset. The results revealed that algorithms that incorporate dual class
propagation as well as inlink and outlink information, on page-level or
site-level graphs, are better suited for detecting fake pharmacy websites. In
addition, site-level analysis yielded significantly better results than
page-level analysis for most algorithms evaluated. | computer science |
40,684 | Optimal Hybrid Channel Allocation: Based On Machine Learning Algorithms | cs.NI | Recent advances in cellular communication systems have resulted in a huge increase
in spectrum demand. To meet the requirements of the ever-growing need for
spectrum, efficient utilization of the existing resources is of utmost
importance. Channel Allocation, has thus become an inevitable research topic in
wireless communications. In this paper, we propose an optimal channel
allocation scheme, Optimal Hybrid Channel Allocation (OHCA) for an effective
allocation of channels. We improvise upon the existing Fixed Channel Allocation
(FCA) technique by imparting intelligence to the existing system by employing
the multilayer perceptron technique. | computer science |
40,685 | Context-aware recommendations from implicit data via scalable tensor
factorization | cs.LG | Although the implicit feedback based recommendation problem - when only the
user history is available but there are no ratings - is the most typical
setting in real-world applications, it is much less researched than the
explicit feedback case. State-of-the-art algorithms that are efficient on the
explicit case cannot be automatically transformed to the implicit case if
scalability is to be maintained. There are few implicit feedback benchmark
data sets, therefore new ideas are usually experimented on explicit benchmarks.
In this paper, we propose a generic context-aware implicit feedback recommender
algorithm, coined iTALS. iTALS applies a fast, ALS-based tensor factorization
learning method that scales linearly with the number of non-zero elements in
the tensor. We also present two approximate and faster variants of iTALS using
coordinate descent and conjugate gradient methods at learning. The method also
allows us to incorporate various contextual information into the model while
maintaining its computational efficiency. We present two context-aware variants
of iTALS incorporating seasonality and item purchase sequentiality into the
model to distinguish user behavior at different time intervals, and product
types with different repetitiveness. Experiments run on six data sets show
that iTALS clearly outperforms context-unaware models and context-aware
baselines, while it is on par with factorization machines (beats 7 times out of
12 cases) both in terms of recall and MAP. | computer science |
40,686 | A Statistical Learning Based System for Fake Website Detection | cs.CY | Existing fake website detection systems are unable to effectively detect fake
websites. In this study, we advocate the development of fake website detection
systems that employ classification methods grounded in statistical learning
theory (SLT). Experimental results reveal that a prototype system developed
using SLT-based methods outperforms seven existing fake website detection
systems on a test bed encompassing 900 real and fake websites. | computer science |
40,687 | Using PCA to Efficiently Represent State Spaces | cs.LG | Reinforcement learning algorithms need to deal with the exponential growth of
states and actions when exploring optimal control in high-dimensional spaces.
This is known as the curse of dimensionality. By projecting the agent's state
onto a low-dimensional manifold, we obtain a smaller and more efficient
representation of the state space. By using this representation during
learning, the agent can converge to a good policy much faster. We test this
approach in the Mario Benchmarking Domain. When using dimensionality reduction
in Mario, learning converges much faster to a good policy. But, there is a
critical convergence-performance trade-off. By projecting onto a
low-dimensional manifold, we are ignoring important data. In this paper, we
explore this trade-off between convergence and performance. We find that by
learning in as few as 4 dimensions (instead of 9), we can surpass the
performance of learning in the full-dimensional space, at a faster convergence
rate. | computer science |
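A minimal sketch of the compression step described above: project raw state vectors onto their leading principal components before handing them to the learner. The Mario benchmark is not reproduced; the 9-dimensional states here are synthetic and are compressed to 4 dimensions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
states = rng.normal(size=(1000, 9))       # raw agent observations

pca = PCA(n_components=4).fit(states)
compact = pca.transform(states)           # learn the policy over these instead
print("retained variance:", round(pca.explained_variance_ratio_.sum(), 3))
```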
40,688 | fastFM: A Library for Factorization Machines | cs.LG | Factorization Machines (FM) are only used in a narrow range of applications
and are not part of the standard toolbox of machine learning models. This is a
pity, because even though FMs are recognized as being very successful for
recommender system type applications, they are a general model for dealing with
sparse and high dimensional features. Our Factorization Machine implementation
provides easy access to many solvers and supports regression, classification
and ranking tasks. Such an implementation simplifies the use of FMs for a wide
field of applications. This implementation has the potential to improve our
understanding of the FM model and drive new development. | computer science |
40,689 | An $O(n\log(n))$ Algorithm for Projecting Onto the Ordered Weighted
$\ell_1$ Norm Ball | math.OC | The ordered weighted $\ell_1$ (OWL) norm is a newly developed generalization
of the Octagonal Shrinkage and Clustering Algorithm for Regression (OSCAR)
norm. This norm has desirable statistical properties and can be used to perform
simultaneous clustering and regression. In this paper, we show how to compute
the projection of an $n$-dimensional vector onto the OWL norm ball in
$O(n\log(n))$ operations. In addition, we illustrate the performance of our
algorithm on a synthetic regression test. | computer science |
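For orientation, a sketch of evaluating the OWL norm itself: sort |x| in decreasing order and take its inner product with a nonincreasing weight vector w. The paper's O(n log(n)) projection onto the OWL ball is more involved (it additionally needs an isotonic-regression-style step) and is not reproduced here.

```python
import numpy as np

def owl_norm(x, w):
    # assumes w[0] >= w[1] >= ... >= 0
    return np.sort(np.abs(x))[::-1] @ w

x = np.array([0.5, -2.0, 1.0])
w = np.array([1.0, 0.5, 0.25])            # OSCAR-style decaying weights
print(owl_norm(x, w))                     # 2.0*1.0 + 1.0*0.5 + 0.5*0.25 = 2.625
```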
40,690 | Blind Compressive Sensing Framework for Collaborative Filtering | cs.IR | Existing works based on latent factor models have focused on representing the
rating matrix as a product of user and item latent factor matrices, both being
dense. Latent (factor) vectors define the degree to which a trait is possessed
by an item or the affinity of user towards that trait. A dense user matrix is a
reasonable assumption as each user will like/dislike a trait to certain extent.
However, any item will possess only a few of the attributes and never all.
Hence, the item matrix should ideally have a sparse structure rather than a
dense one as formulated in earlier works. Therefore we propose to factor the
ratings matrix into a dense user matrix and a sparse item matrix which leads us
to the Blind Compressed Sensing (BCS) framework. We derive an efficient
algorithm for solving the BCS problem based on Majorization Minimization (MM)
technique. Our proposed approach is able to achieve significantly higher
accuracy and shorter run times as compared to existing approaches. | computer science |
40,691 | Context-Aware Mobility Management in HetNets: A Reinforcement Learning
Approach | cs.NI | The use of small cell deployments in heterogeneous network (HetNet)
environments is expected to be a key feature of 4G networks and beyond, and
essential for providing higher user throughput and cell-edge coverage. However,
due to different coverage sizes of macro and pico base stations (BSs), such a
paradigm shift introduces additional requirements and challenges in dense
networks. Among these challenges is the handover performance of user equipment
(UEs), which will be impacted especially when high velocity UEs traverse
picocells. In this paper, we propose a coordination-based and context-aware
mobility management (MM) procedure for small cell networks using tools from
reinforcement learning. Here, macro and pico BSs jointly learn their long-term
traffic loads and optimal cell range expansion, and schedule their UEs based on
their velocities and historical rates (exchanged among tiers). The proposed
approach is shown to not only outperform the classical MM in terms of UE
throughput, but also to enable better fairness. On average, a gain of up to
80\% is achieved for UE throughput, while the handover failure probability is
reduced by up to a factor of three by the proposed learning-based MM
approaches. | computer science |
40,692 | Human Social Interaction Modeling Using Temporal Deep Networks | cs.CY | We present a novel approach to computational modeling of social interactions
based on modeling of essential social interaction predicates (ESIPs) such as
joint attention and entrainment. Based on sound social psychological theory and
methodology, we collect a new "Tower Game" dataset consisting of audio-visual
capture of dyadic interactions labeled with the ESIPs. We expect this dataset
to provide a new avenue for research in computational social interaction
modeling. We propose a novel joint Discriminative Conditional Restricted
Boltzmann Machine (DCRBM) model that combines a discriminative component with
the generative power of CRBMs. Such a combination enables us to uncover
actionable constituents of the ESIPs in two steps. First, we train the DCRBM
model on the labeled data and get accurate (76\%-49\% across various ESIPs)
detection of the predicates. Second, we exploit the generative capability of
DCRBMs to activate the trained model so as to generate the lower-level data
corresponding to the specific ESIP that closely matches the actual training
data (with mean square error 0.01-0.1 for generating 100 frames). We are thus
able to decompose the ESIPs into their constituent actionable behaviors. Such a
purely computational determination of how to establish an ESIP such as
engagement is unprecedented. | computer science |
40,693 | $k$-center Clustering under Perturbation Resilience | cs.DS | The $k$-center problem is a canonical and long-studied facility location and
clustering problem with many applications in both its symmetric and asymmetric
forms. Both versions of the problem have tight approximation factors on worst
case instances: a $2$-approximation for symmetric $k$-center and an
$O(\log^*(k))$-approximation for the asymmetric version.
In this work, we go beyond the worst case and provide strong positive results
both for the asymmetric and symmetric $k$-center problems under a very natural
input stability (promise) condition called $\alpha$-perturbation resilience
(Bilu & Linial 2012) , which states that the optimal solution does not change
under any $\alpha$-factor perturbation to the input distances. We show that by
assuming 2-perturbation resilience, the exact solution for the asymmetric
$k$-center problem can be found in polynomial time. To our knowledge, this is
the first problem that is hard to approximate to any constant factor in the
worst case, yet can be optimally solved in polynomial time under perturbation
resilience for a constant value of $\alpha$. Furthermore, we prove our result
is tight by showing symmetric $k$-center under $(2-\epsilon)$-perturbation
resilience is hard unless $NP=RP$. This is the first tight result for any
problem under perturbation resilience, i.e., this is the first time the exact
value of $\alpha$ for which the problem switches from being NP-hard to
efficiently computable has been found.
Our results illustrate a surprising relationship between symmetric and
asymmetric $k$-center instances under perturbation resilience. Unlike
approximation ratio, for which symmetric $k$-center is easily solved to a
factor of $2$ but asymmetric $k$-center cannot be approximated to any constant
factor, both symmetric and asymmetric $k$-center can be solved optimally under
resilience to 2-perturbations. | computer science |
40,694 | Complexity Theoretic Limitations on Learning Halfspaces | cs.CC | We study the problem of agnostically learning halfspaces which is defined by
a fixed but unknown distribution $\mathcal{D}$ on $\mathbb{Q}^n\times \{\pm
1\}$. We define $\mathrm{Err}_{\mathrm{HALF}}(\mathcal{D})$ as the least error
of a halfspace classifier for $\mathcal{D}$. A learner who can access
$\mathcal{D}$ has to return a hypothesis whose error is small compared to
$\mathrm{Err}_{\mathrm{HALF}}(\mathcal{D})$.
Using the recently developed method of the author, Linial and Shalev-Shwartz
we prove hardness of learning results under a natural assumption on the
complexity of refuting random $K$-$\mathrm{XOR}$ formulas. We show that no
efficient learning algorithm has non-trivial worst-case performance even under
the guarantees that $\mathrm{Err}_{\mathrm{HALF}}(\mathcal{D}) \le \eta$ for
arbitrarily small constant $\eta>0$, and that $\mathcal{D}$ is supported in
$\{\pm 1\}^n\times \{\pm 1\}$. Namely, even under these favorable conditions
its error must be $\ge \frac{1}{2}-\frac{1}{n^c}$ for every $c>0$. In
particular, no efficient algorithm can achieve a constant approximation ratio.
Under a stronger version of the assumption (where $K$ can be poly-logarithmic
in $n$), we can take $\eta = 2^{-\log^{1-\nu}(n)}$ for arbitrarily small
$\nu>0$. Interestingly, this is even stronger than the best known lower bounds
(Arora et al. 1993, Feldman et al. 2006, Guruswami and Raghavendra 2006) for
the case that the learner is restricted to return a halfspace classifier (i.e.
proper learning). | computer science |
40,695 | Machine Learning for Indoor Localization Using Mobile Phone-Based
Sensors | cs.LG | In this paper we investigate the problem of localizing a mobile device based
on readings from its embedded sensors utilizing machine learning methodologies.
We consider a real-world environment, collect a large dataset of 3110
datapoints, and examine the performance of a substantial number of machine
learning algorithms in localizing a mobile device. We have found algorithms
that give a mean error as accurate as 0.76 meters, outperforming other indoor
localization systems reported in the literature. We also propose a hybrid
instance-based approach that results in a speed increase by a factor of ten
with no loss of accuracy in a live deployment over standard instance-based
methods, allowing for fast and accurate localization. Further, we determine how
smaller datasets collected with less density affect accuracy of localization,
important for use in real-world environments. Finally, we demonstrate that
these approaches are appropriate for real-world deployment by evaluating their
performance in an online, in-motion experiment. | computer science |
40,696 | Times series averaging from a probabilistic interpretation of
time-elastic kernel | cs.LG | In the light of regularized dynamic time warping kernels, this paper
reconsiders the concept of time elastic centroid (TEC) for a set of time series.
From this perspective, we show first how TEC can easily be addressed as a
preimage problem. Unfortunately, this preimage problem is ill-posed, may
suffer from over-fitting, especially for long time series, and obtaining even
a sub-optimal solution involves heavy computational costs. We then derive two
new algorithms
based on a probabilistic interpretation of kernel alignment matrices that
expresses them in terms of probability distributions over sets of alignment
paths.
The first algorithm is an iterative agglomerative heuristic inspired by the
state-of-the-art DTW barycenter averaging (DBA) algorithm proposed specifically
for the Dynamic Time Warping measure. The second proposed algorithm achieves a
classical averaging of the aligned samples but also implements an averaging of
the time of occurrences of the aligned samples. It exploits a straightforward
progressive agglomerative heuristic. An experiment comparing, on 45 time
series datasets, the classification error rates obtained by first nearest
neighbor classifiers that exploit a single medoid or centroid estimate to
represent each category shows that: i) centroid-based approaches
significantly outperform medoid-based approaches, ii) in the considered
experiments, the two proposed algorithms outperform the state-of-the-art DBA
algorithm, and iii) the second proposed algorithm, which averages jointly in
the sample space and along the time axis, emerges as the most significantly
robust time elastic averaging heuristic, with an interesting noise reduction
capability. Index Terms: time series averaging, time elastic kernel, Dynamic
Time Warping, time series clustering and classification. | computer science |
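As background for the averaging discussion above, the classical dynamic programming recursion for the DTW distance between two series. The paper's probabilistic averaging operates over distributions of such alignment paths (under regularized kernels) and is not reproduced here.

```python
import numpy as np

def dtw(a, b):
    """Squared-cost DTW distance via the standard O(nm) recursion."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return np.sqrt(D[n, m])

print(dtw(np.sin(np.linspace(0, 3, 30)), np.sin(np.linspace(0.2, 3.2, 40))))
```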
40,697 | A Practical Guide to Randomized Matrix Computations with MATLAB
Implementations | cs.MS | Matrix operations such as matrix inversion, eigenvalue decomposition,
and singular value decomposition are ubiquitous in real-world applications.
Unfortunately, many of these matrix operations are so time- and memory-expensive
that they are prohibitive when the scale of data is large. In real-world
applications, since the data themselves are noisy, machine-precision matrix
operations are not necessary at all, and one can sacrifice a reasonable amount
of accuracy for computational efficiency.
In recent years, a number of randomized algorithms have been devised to make
matrix computations more scalable. Mahoney (2011) and Woodruff (2014) have
written excellent but very technical reviews of the randomized algorithms.
Differently, the focus of this manuscript is on intuition, algorithm
derivation, and implementation. This manuscript should be accessible to people
with knowledge in elementary matrix algebra but unfamiliar with randomized
matrix computations. The algorithms introduced in this manuscript are all
summarized in a user-friendly way, and they can be implemented in a few lines of
MATLAB code. The readers can easily follow the implementations even if they do
not understand the maths and algorithms. | computer science |
40,698 | SAM: Support Vector Machine Based Active Queue Management | cs.NI | Recent years have seen an increasing interest in the design of AQM (Active
Queue Management) controllers. The purpose of these controllers is to manage
the network congestion under varying loads, link delays and bandwidth. In this
paper, a new AQM controller is proposed which is trained by using the SVM
(Support Vector Machine) with the RBF (Radial Basis Function) kernel. The
proposed controller is called the support vector based AQM (SAM) controller.
The performance of the proposed controller has been compared with three
conventional AQM controllers, namely the Random Early Detection, Blue and
Proportional Plus Integral Controller. The preliminary simulation studies show
that the performance of the proposed controller is comparable to the
conventional controllers. However, the proposed controller is more efficient in
controlling the queue size than the conventional controllers. | computer science |
40,699 | Heavy hitters via cluster-preserving clustering | cs.DS | In turnstile $\ell_p$ $\varepsilon$-heavy hitters, one maintains a
high-dimensional $x\in\mathbb{R}^n$ subject to $\texttt{update}(i,\Delta)$
causing $x_i\leftarrow x_i + \Delta$, where $i\in[n]$, $\Delta\in\mathbb{R}$.
Upon receiving a query, the goal is to report a small list $L\subset[n]$, $|L|
= O(1/\varepsilon^p)$, containing every "heavy hitter" $i\in[n]$ with $|x_i|
\ge \varepsilon \|x_{\overline{1/\varepsilon^p}}\|_p$, where $x_{\overline{k}}$
denotes the vector obtained by zeroing out the largest $k$ entries of $x$ in
magnitude.
For any $p\in(0,2]$ the CountSketch solves $\ell_p$ heavy hitters using
$O(\varepsilon^{-p}\log n)$ words of space with $O(\log n)$ update time,
$O(n\log n)$ query time to output $L$, and whose output after any query is
correct with high probability (whp) $1 - 1/poly(n)$. Unfortunately the query
time is very slow. To remedy this, the work [CM05] proposed for $p=1$ in the
strict turnstile model, a whp correct algorithm achieving suboptimal space
$O(\varepsilon^{-1}\log^2 n)$, worse update time $O(\log^2 n)$, but much better
query time $O(\varepsilon^{-1}poly(\log n))$.
We show this tradeoff between space and update time versus query time is
unnecessary. We provide a new algorithm, ExpanderSketch, which in the most
general turnstile model achieves optimal $O(\varepsilon^{-p}\log n)$ space,
$O(\log n)$ update time, and fast $O(\varepsilon^{-p}poly(\log n))$ query time,
and whp correctness. Our main innovation is an efficient reduction from the
heavy hitters to a clustering problem in which each heavy hitter is encoded as
some form of noisy spectral cluster in a much bigger graph, and the goal is to
identify every cluster. Since every heavy hitter must be found, correctness
requires that every cluster be found. We then develop a "cluster-preserving
clustering" algorithm, partitioning the graph into clusters without destroying
any original cluster. | computer science |
40,700 | Lipschitz Continuity of Mahalanobis Distances and Bilinear Forms | cs.NA | Many theoretical results in the machine learning domain hold only for
functions that are Lipschitz continuous. Lipschitz continuity is a strong form
of continuity that linearly bounds the variations of a function. In this paper,
we derive tight Lipschitz constants for two families of metrics: Mahalanobis
distances and bounded-space bilinear forms. To our knowledge, this is the first
time the Mahalanobis distance is formally proved to be Lipschitz continuous and
that such tight Lipschitz constants are derived. | computer science |
40,701 | Single-Molecule Protein Identification by Sub-Nanopore Sensors | cs.LG | Recent advances in top-down mass spectrometry enabled identification of
intact proteins, but this technology still faces challenges. For example,
top-down mass spectrometry suffers from a lack of sensitivity since the ion
counts for a single fragmentation event are often low. In contrast, nanopore
technology is exquisitely sensitive to single intact molecules, but it has only
been successfully applied to DNA sequencing, so far. Here, we explore the
potential of sub-nanopores for single-molecule protein identification (SMPI)
and describe an algorithm for identification of the electrical current blockade
signal (nanospectrum) resulting from the translocation of a denatured,
linearly charged protein through a sub-nanopore. The analysis of identification
p-values suggests that the current technology is already sufficient for
matching nanospectra against small protein databases, e.g., protein
identification in bacterial proteomes. | computer science |
40,702 | M3: Scaling Up Machine Learning via Memory Mapping | cs.LG | To process data that do not fit in RAM, conventional wisdom would suggest
using distributed approaches. However, recent research has demonstrated virtual
memory's strong potential in scaling up graph mining algorithms on a single
machine. We propose to use a similar approach for general machine learning. We
contribute: (1) our latest finding that memory mapping is also a feasible
technique for scaling up general machine learning algorithms like logistic
regression and k-means, when data fits in or exceeds RAM (we tested datasets up
to 190GB); (2) an approach, called M3, that enables existing machine learning
algorithms to work with out-of-core datasets through memory mapping, achieving
a speed that is significantly faster than a 4-instance Spark cluster, and
comparable to an 8-instance cluster. | computer science |
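A sketch of the M3 idea under stated assumptions: store the features in a binary file, memory-map it so the OS pages data in on demand, and train an out-of-core learner with partial_fit over RAM-sized slices. The file name and shapes are placeholders, not part of the paper's artifact.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

n, d = 100_000, 20
rng = np.random.default_rng(0)
rng.normal(size=(n, d)).astype(np.float32).tofile("features.bin")  # stand-in data
y = (rng.random(n) > 0.5).astype(int)

X = np.memmap("features.bin", dtype=np.float32, mode="r", shape=(n, d))
clf = SGDClassifier()                      # linear model trained out-of-core
for lo in range(0, n, 10_000):             # stream RAM-sized chunks
    sl = slice(lo, lo + 10_000)
    clf.partial_fit(X[sl], y[sl], classes=[0, 1])
```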
40,703 | Learning Simple Auctions | cs.LG | We present a general framework for proving polynomial sample complexity
bounds for the problem of learning from samples the best auction in a class of
"simple" auctions. Our framework captures all of the most prominent examples of
"simple" auctions, including anonymous and non-anonymous item and bundle
pricings, with either a single or multiple buyers. The technique we propose is
to break the analysis of auctions into two natural pieces. First, one shows
that the set of allocation rules have large amounts of structure; second,
fixing an allocation on a sample, one shows that the set of auctions agreeing
with this allocation on that sample have revenue functions with low
dimensionality. Our results effectively imply that whenever it's possible to
compute a near-optimal simple auction with a known prior, it is also possible
to compute such an auction with an unknown prior (given a polynomial number of
samples). | computer science |
40,704 | Leveraging Network Dynamics for Improved Link Prediction | cs.SI | The aim of link prediction is to forecast connections that are most likely to
occur in the future, based on examples of previously observed links. A key
insight is that when doing link prediction it is useful to explicitly model
network dynamics, i.e., how frequently links are created or destroyed. In this
paper, we introduce a new supervised link prediction framework, RPM (Rate
Prediction Model). In addition to network similarity measures, RPM uses the
predicted rate of link modifications, modeled using time series data; it is
implemented in Spark-ML and trained with the original link distribution, rather
than a small balanced subset. We compare the use of this network dynamics model
to directly creating time series of network similarity measures. Our
experiments show that RPM, which leverages predicted rates, outperforms the use
of network similarity measures, either individually or within a time series. | computer science |
40,705 | The Univariate Flagging Algorithm (UFA): a Fully-Automated Approach for
Identifying Optimal Thresholds in Data | cs.LG | In many data classification problems, there is no linear relationship between
an explanatory variable and the dependent variable. Instead, there may be
ranges of the input variable for which the observed outcome is significantly
more or less
likely. This paper describes an algorithm for automatic detection of such
thresholds, called the Univariate Flagging Algorithm (UFA). The algorithm
searches for a separation that optimizes the difference between separated areas
while providing the maximum support. We evaluate its performance using three
examples and demonstrate that thresholds identified by the algorithm align well
with visual inspection and subject matter expertise. We also introduce two
classification approaches that use UFA and show that the performance attained
on unseen test data is equal to or better than that of more traditional
classifiers. We demonstrate that the proposed algorithm is robust against
missing data and noise, is scalable, and is easy to interpret and visualize. It
is also well suited for problems where the incidence of the target is low. | computer science |
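A hedged sketch of the kind of threshold search UFA performs (the exact scoring rule below, separation times the support of the smaller side, is my assumption; the paper's objective may differ):

import numpy as np

def flag_threshold(x, y):
    """Return the cut on x that best separates the binary outcome y."""
    order = np.argsort(x)
    x, y = np.asarray(x)[order], np.asarray(y)[order]
    best, best_t = -np.inf, None
    for i in range(1, len(x)):
        left, right = y[:i], y[i:]
        sep = abs(left.mean() - right.mean())     # outcome-rate difference
        support = min(i, len(x) - i) / len(x)     # penalize tiny sides
        if sep * support > best:
            best, best_t = sep * support, (x[i - 1] + x[i]) / 2
    return best_t

rng = np.random.default_rng(3)
x = rng.normal(size=400)
y = (x > 0.7).astype(int) ^ (rng.random(400) < 0.05)  # noisy step relationship
print(flag_threshold(x, y))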
40,706 | Typical Stability | cs.LG | In this paper, we introduce a notion of algorithmic stability called typical
stability. When our goal is to release real-valued queries (statistics)
computed over a dataset, this notion does not require the queries to be of
bounded sensitivity -- a condition that is generally assumed under differential
privacy [DMNS06, Dwork06] when used as a notion of algorithmic stability
[DFHPRR15a, DFHPRR15b, BNSSSU16] -- nor does it require the samples in the
dataset to be independent -- a condition that is usually assumed when
generalization-error guarantees are sought. Instead, typical stability requires
the output of the query, when computed on a dataset drawn from the underlying
distribution, to be concentrated around its expected value with respect to that
distribution.
We discuss the implications of typical stability on the generalization error
(i.e., the difference between the value of the query computed on the dataset
and the expected value of the query with respect to the true data
distribution). We show that typical stability can control generalization error
in adaptive data analysis even when the samples in the dataset are not
necessarily independent and when the queries to be computed are not necessarily
of bounded sensitivity, as long as the results of the queries over the dataset
(i.e., the computed statistics) follow a distribution with a "light" tail.
Examples of such queries include, but are not limited to, subgaussian and
subexponential queries.
We also discuss the composition guarantees of typical stability and prove
composition theorems that characterize the degradation of the parameters of
typical stability under $k$-fold adaptive composition. We also give simple
noise-addition algorithms that achieve this notion. These algorithms are
similar to their differentially private counterparts; however, the added noise
is calibrated differently. | computer science |
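A minimal sketch of the noise-addition idea (the bootstrap-based spread estimate and the gamma calibration below are my assumptions, not the paper's construction): noise is scaled to the statistic's concentration rather than to a worst-case sensitivity bound, so a light-tailed query incurs little noise:

import numpy as np

def typically_stable_release(query, dataset, rng, gamma=0.1):
    """Release query(dataset) plus noise calibrated to the query's spread.

    The spread is estimated here by bootstrap resampling; a subgaussian or
    subexponential query keeps this estimate, and hence the noise, small.
    """
    boot = [query(rng.choice(dataset, size=len(dataset))) for _ in range(200)]
    scale = np.std(boot) / gamma       # concentration-based, not sensitivity-based
    return query(dataset) + rng.laplace(0.0, scale)

rng = np.random.default_rng(4)
data = rng.normal(loc=5.0, scale=2.0, size=10_000)
print(typically_stable_release(np.mean, data, rng))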
40,707 | An Unbiased Data Collection and Content Exploitation/Exploration
Strategy for Personalization | cs.IR | One of the missions of personalization and recommender systems is to
show content items according to users' personal interests. To achieve this
goal, these systems learn user interests over time and try to present content
items tailored to user profiles. Recommending items according to users'
preferences has been investigated extensively in the past few years, largely
thanks to the popularity of the Netflix competition. In a real setting, users
may be attracted by only a subset of those items and interact with them,
leaving only partial feedback for the system to learn from in the next cycle.
This introduces significant bias into the system and results in a situation
where user engagement metrics cannot be improved over time. The problem is not
confined to one component of the system: the data collected from users is
usually used in many different tasks, including learning ranking functions,
building user profiles, and constructing content classifiers. Once the data is
biased, all of these downstream use cases are impacted as well. It would
therefore be beneficial to gather unbiased data through user interactions.
Traditionally, unbiased data collection is done by showing items sampled
uniformly from the content pool. However, this simple scheme is not feasible,
as it risks degrading user engagement metrics and takes a long time to gather
user feedback. In this paper, we introduce a user-friendly unbiased data
collection framework that utilizes methods developed in the exploitation and
exploration literature. We discuss how the framework differs from standard
multi-armed bandit problems and why such a method is needed. We lay out a
novel Thompson sampling scheme for Bernoulli ranked lists to effectively
balance user experience and data collection. The proposed method is validated
in a real bucket test, and we show strong results compared to older
algorithms. | computer science |
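A simplified sketch (assumed, not the paper's exact scheme) of Thompson sampling for a Bernoulli ranked list: a Beta posterior per item, one posterior draw per item per request, and a top-k ranking by the sampled click rates; the simulated click model is illustrative only:

import numpy as np

class RankedThompson:
    def __init__(self, n_items, rng):
        self.alpha = np.ones(n_items)   # Beta(1, 1) priors over click rates
        self.beta = np.ones(n_items)
        self.rng = rng

    def rank(self, k):
        theta = self.rng.beta(self.alpha, self.beta)   # one draw per item
        return np.argsort(-theta)[:k]                  # top-k by sampled rate

    def update(self, item, clicked):
        self.alpha[item] += clicked
        self.beta[item] += 1 - clicked

rng = np.random.default_rng(5)
true_ctr = rng.random(20) * 0.1
policy = RankedThompson(20, rng)
for _ in range(5_000):
    for item in policy.rank(k=5):                      # show a 5-slot list
        policy.update(item, int(rng.random() < true_ctr[item]))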
40,708 | Asynchronous Stochastic Gradient Descent with Variance Reduction for
Non-Convex Optimization | cs.LG | We provide the first theoretical analysis of the convergence rate of the
asynchronous stochastic variance reduced gradient (SVRG) descent algorithm on
non-convex optimization. Recent studies have shown that asynchronous
stochastic gradient descent (SGD) based algorithms with variance reduction
converge at a linear rate on convex problems. However, no prior work analyzes
asynchronous SGD with the variance reduction technique on non-convex problems.
In this paper, we study two asynchronous parallel implementations of SVRG:
one on a distributed memory system and the other on a shared memory system. We
prove that both algorithms obtain a convergence rate of $O(1/T)$ and that
linear speedup is achievable if the number of workers is bounded. (Versions
v1-v3 have been withdrawn due to a reference issue; please refer to the newest
version, v4.) | computer science |
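For reference, a serial SVRG sketch on a toy non-convex objective (the paper's contribution is the asynchronous analysis; this shows only the variance-reduced update both implementations build on, with step size and epoch lengths chosen arbitrarily):

import numpy as np

def svrg(grad_i, w, n, epochs=10, m=100, lr=0.05, rng=None):
    rng = rng or np.random.default_rng(0)
    for _ in range(epochs):
        w_snap = w.copy()
        mu = np.mean([grad_i(w_snap, i) for i in range(n)], axis=0)  # full gradient
        for _ in range(m):
            i = rng.integers(n)
            # variance-reduced stochastic gradient
            g = grad_i(w, i) - grad_i(w_snap, i) + mu
            w = w - lr * g
    return w

# toy data and per-sample gradient of f_i(w) = (sigmoid(x_i @ w) - y_i)^2,
# a non-convex least-squares objective
rng = np.random.default_rng(6)
X, yv = rng.normal(size=(200, 5)), rng.integers(0, 2, 200)
sig = lambda z: 1 / (1 + np.exp(-z))
def grad_i(w, i):
    p = sig(X[i] @ w)
    return 2 * (p - yv[i]) * p * (1 - p) * X[i]

print(svrg(grad_i, np.zeros(5), n=200))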
40,709 | ModelWizard: Toward Interactive Model Construction | cs.PL | Data scientists engage in model construction to discover machine learning
models that well explain a dataset, in terms of predictiveness,
understandability and generalization across domains. Questions such as "what if
we model common cause Z" and "what if Y's dependence on X reverses" inspire
many candidate models to consider and compare, yet current tools emphasize
constructing a final model all at once.
To more naturally reflect exploration when debating numerous models, we
propose an interactive model construction framework grounded in composable
operations. Primitive operations capture core steps refining data and model
that, when verified, form an inductive basis to prove model validity. Derived,
composite operations enable advanced model families, both generic and
specialized, abstracted away from low-level details.
We prototype our envisioned framework in ModelWizard, a domain-specific
language embedded in F# to construct Tabular models. We describe the language's
design and demonstrate its use through several applications, emphasizing how
the language may facilitate creation of complex models. To future engineers
designing data science languages and tools, we offer ModelWizard's design as a
new model construction paradigm, speeding discovery of our universe's
structure. | computer science |
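ModelWizard itself is a DSL embedded in F#; as a language-agnostic sketch of the composable-operations idea (a Python analogue of my own, not the paper's API), primitive refinements are functions from model state to model state, and derived operations are their composition:

from functools import reduce

def add_variable(name):
    return lambda model: {**model, "vars": model["vars"] + [name]}

def add_edge(cause, effect):
    return lambda model: {**model, "edges": model["edges"] + [(cause, effect)]}

def compose(*ops):
    """Derived operation: apply primitive refinements left to right."""
    return lambda model: reduce(lambda m, op: op(m), ops, model)

# "what if we model common cause Z" as a reusable, composite operation
common_cause = compose(add_variable("Z"), add_edge("Z", "X"), add_edge("Z", "Y"))
model = common_cause({"vars": ["X", "Y"], "edges": []})
print(model)   # {'vars': ['X', 'Y', 'Z'], 'edges': [('Z', 'X'), ('Z', 'Y')]}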