title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
Learning from compressed observations | cs.IT cs.LG math.IT | The problem of statistical learning is to construct a predictor of a random
variable $Y$ as a function of a related random variable $X$ on the basis of an
i.i.d. training sample from the joint distribution of $(X,Y)$. Allowable
predictors are drawn from some specified class, and the goal is to approach
asymptotically the performance (expected loss) of the best predictor in the
class. We consider the setting in which one has perfect observation of the
$X$-part of the sample, while the $Y$-part has to be communicated at some
finite bit rate. The encoding of the $Y$-values is allowed to depend on the
$X$-values. Under suitable regularity conditions on the admissible predictors,
the underlying family of probability distributions and the loss function, we
give an information-theoretic characterization of achievable predictor
performance in terms of conditional distortion-rate functions. The ideas are
illustrated on the example of nonparametric regression in Gaussian noise.
| Maxim Raginsky | 10.1109/ITW.2007.4313111 | 0704.0671 | null | null |
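As a rough guide to the central quantity in the abstract above, a schematic form of the conditional distortion-rate function (the paper's precise definition and regularity conditions may differ in detail) is

$$ D_{Y|X}(R) \;=\; \inf_{P_{\hat{Y}\mid X,Y}\,:\; I(Y;\hat{Y}\mid X)\,\le\, R} \ \mathbb{E}\big[\ell(Y,\hat{Y})\big], $$

the best expected loss achievable when the $Y$-part must be described at rate $R$ bits with the $X$-part available as side information.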
Sensor Networks with Random Links: Topology Design for Distributed
Consensus | cs.IT cs.LG math.IT | In a sensor network, in practice, the communication among sensors is subject
to: (1) errors or failures at random times; (2) costs; and (3) constraints since
sensors and networks operate under scarce resources, such as power, data rate,
or communication. The signal-to-noise ratio (SNR) is usually a main factor in
determining the probability of error (or of communication failure) in a link.
These probabilities are then a proxy for the SNR under which the links operate.
The paper studies the problem of designing the topology, i.e., assigning the
probabilities of reliable communication among sensors (or of link failures) to
maximize the rate of convergence of average consensus, when the link
communication costs are taken into account, and there is an overall
communication budget constraint. To consider this problem, we address a number
of preliminary issues: (1) model the network as a random topology; (2)
establish necessary and sufficient conditions for mean square sense (mss) and
almost sure (a.s.) convergence of average consensus when network links fail;
and, in particular, (3) show that a necessary and sufficient condition for both
mss and a.s. convergence is for the algebraic connectivity of the mean graph
describing the network topology to be strictly positive. With these results, we
formulate topology design, subject to random link failures and to a
communication cost constraint, as a constrained convex optimization problem to
which we apply semidefinite programming techniques. We show by an extensive
numerical study that the optimal design improves significantly the convergence
speed of the consensus algorithm and can achieve the asymptotic performance of
a non-random network at a fraction of the communication cost.
| Soummya Kar and Jose M. F. Moura | 10.1109/TSP.2008.920143 | 0704.0954 | null | null |
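As a toy illustration of the convergence criterion stated in the abstract above (strictly positive algebraic connectivity of the mean graph), the following sketch simulates average consensus over a hypothetical 5-node ring with i.i.d. link failures. The topology, link reliability, and step size are invented for illustration; the paper's SDP-based topology design is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]  # hypothetical ring topology
p = 0.7      # assumed probability that a link is up at any iteration
alpha = 0.3  # assumed consensus step size

# Mean Laplacian: each edge contributes with its success probability.
L_mean = np.zeros((n, n))
for i, j in edges:
    L_mean[[i, j], [i, j]] += p
    L_mean[i, j] -= p
    L_mean[j, i] -= p
lambda2 = np.linalg.eigvalsh(L_mean)[1]  # algebraic connectivity of mean graph
print(f"lambda_2(mean graph) = {lambda2:.3f} (>0 implies mss/a.s. convergence)")

x = rng.normal(size=n)  # initial node values; consensus target is their mean
target = x.mean()
for _ in range(200):
    L = np.zeros((n, n))
    for i, j in edges:
        if rng.random() < p:          # link is up this round
            L[[i, j], [i, j]] += 1.0
            L[i, j] -= 1.0
            L[j, i] -= 1.0
    x = x - alpha * (L @ x)           # x(k+1) = (I - alpha * L(k)) x(k)
print("max deviation from the average:", np.abs(x - target).max())
```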
The on-line shortest path problem under partial monitoring | cs.LG cs.SC | The on-line shortest path problem is considered under various models of
partial monitoring. Given a weighted directed acyclic graph whose edge weights
can change in an arbitrary (adversarial) way, a decision maker has to choose in
each round of a game a path between two distinguished vertices such that the
loss of the chosen path (defined as the sum of the weights of its composing
edges) be as small as possible. In a setting generalizing the multi-armed
bandit problem, after choosing a path, the decision maker learns only the
weights of those edges that belong to the chosen path. For this problem, an
algorithm is given whose average cumulative loss in n rounds exceeds that of
the best path, matched off-line to the entire sequence of the edge weights, by
a quantity that is proportional to 1/\sqrt{n} and depends only polynomially on
the number of edges of the graph. The algorithm can be implemented with linear
complexity in the number of rounds n and in the number of edges. An extension
to the so-called label efficient setting is also given, in which the decision
maker is informed about the weights of the edges corresponding to the chosen
path at a total of m << n time instances. Another extension is shown where the
decision maker competes against a time-varying path, a generalization of the
problem of tracking the best expert. A version of the multi-armed bandit
setting for shortest path is also discussed where the decision maker learns
only the total weight of the chosen path but not the weights of the individual
edges on the path. Applications to routing in packet switched networks along
with simulation results are also presented.
| Andras Gyorgy, Tamas Linder, Gabor Lugosi, Gyorgy Ottucsak | null | 0704.1020 | null | null |
A neural network approach to ordinal regression | cs.LG cs.AI cs.NE | Ordinal regression is an important type of learning, which has properties of
both classification and regression. Here we describe a simple and effective
approach to adapt a traditional neural network to learn ordinal categories. Our
approach is a generalization of the perceptron method for ordinal regression.
On several benchmark datasets, our method (NNRank) outperforms a neural network
classification method. Compared with the ordinal regression methods using
Gaussian processes and support vector machines, NNRank achieves comparable
performance. Moreover, NNRank has the advantages of traditional neural
networks: learning in both online and batch modes, handling very large training
datasets, and making rapid predictions. These features make NNRank a useful and
complementary tool for large-scale data processing tasks such as information
retrieval, web page ranking, collaborative filtering, and protein ranking in
Bioinformatics.
| Jianlin Cheng | null | 0704.1028 | null | null |
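The abstract does not spell out NNRank's exact construction, so the following is only a sketch of the cumulative-target encoding commonly used to make a neural network (or perceptron) learn ordinal categories: a label k out of K ordered levels becomes a binary vector whose first k entries are 1, and a prediction is decoded by counting outputs above a threshold, which respects the ordering of the classes.

```python
import numpy as np

def encode_ordinal(y, K):
    """Map integer labels 1..K to cumulative binary targets of length K."""
    y = np.asarray(y)
    return (np.arange(1, K + 1)[None, :] <= y[:, None]).astype(float)

def decode_ordinal(outputs, threshold=0.5):
    """Predicted level = number of network outputs exceeding the threshold."""
    return (np.asarray(outputs) > threshold).sum(axis=1)

print(encode_ordinal([1, 3], K=4))             # [[1. 0. 0. 0.] [1. 1. 1. 0.]]
print(decode_ordinal([[0.9, 0.8, 0.2, 0.1]]))  # [2]
```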
Parametric Learning and Monte Carlo Optimization | cs.LG | This paper uncovers and explores the close relationship between Monte Carlo
Optimization of a parametrized integral (MCO), Parametric machine-Learning
(PL), and "blackbox" or "oracle"-based optimization (BO). We make four
contributions. First, we prove that MCO is mathematically identical to a broad
class of PL problems. This identity potentially provides a new application
domain for all broadly applicable PL techniques: MCO. Second, we introduce
immediate sampling, a new version of the Probability Collectives (PC) algorithm
for blackbox optimization. Immediate sampling transforms the original BO
problem into an MCO problem. Accordingly, by combining these first two
contributions, we can apply all PL techniques to BO. In our third contribution
we validate this way of improving BO by demonstrating that cross-validation and
bagging improve immediate sampling. Finally, conventional MC and MCO procedures
ignore the relationship between the sample point locations and the associated
values of the integrand; only the values of the integrand at those locations
are considered. We demonstrate that one can exploit the sample location
information using PL techniques, for example by forming a fit of the sample
locations to the associated values of the integrand. This provides an
additional way to apply PL techniques to improve MCO.
| David H. Wolpert and Dev G. Rajnarayan | null | 0704.1274 | null | null |
Preconditioned Temporal Difference Learning | cs.LG cs.AI | This paper has been withdrawn by the author. This draft is withdrawn for its
poor quality in English, unfortunately produced by the author when he was just
starting his science route. Look at the ICML version instead:
http://icml2008.cs.helsinki.fi/papers/111.pdf
| Yao HengShuai | null | 0704.1409 | null | null |
A Note on the Inapproximability of Correlation Clustering | cs.LG cs.DS | We consider inapproximability of the correlation clustering problem defined
as follows: Given a graph $G = (V,E)$ where each edge is labeled either "+"
(similar) or "-" (dissimilar), correlation clustering seeks to partition the
vertices into clusters so that the number of pairs correctly (resp.
incorrectly) classified with respect to the labels is maximized (resp.
minimized). The two complementary problems are called MaxAgree and MinDisagree,
respectively, and have been studied on complete graphs, where every edge is
labeled, and general graphs, where some edge might not have been labeled.
Natural edge-weighted versions of both problems have been studied as well. Let
S-MaxAgree denote the weighted problem where all weights are taken from a set S.
We show that S-MaxAgree with weights bounded by $O(|V|^{1/2-\delta})$
essentially belongs to the same hardness class in the following sense: if there
is a polynomial time algorithm that approximates S-MaxAgree within a factor of
$\lambda = O(\log{|V|})$ with high probability, then for any choice of S',
S'-MaxAgree can be approximated in polynomial time within a factor of $(\lambda
+ \epsilon)$, where $\epsilon > 0$ can be arbitrarily small, with high
probability. A similar statement also holds for $S$-MinDisagree. This result
implies it is hard (assuming $NP \neq RP$) to approximate unweighted MaxAgree
within a factor of $80/79-\epsilon$, improving upon a previously known factor of
$116/115-\epsilon$ by Charikar et al. \cite{Chari05}.
| Jinsong Tan | null | 0704.2092 | null | null |
Joint universal lossy coding and identification of stationary mixing
sources | cs.IT cs.LG math.IT | The problem of joint universal source coding and modeling, treated in the
context of lossless codes by Rissanen, was recently generalized to fixed-rate
lossy coding of finitely parametrized continuous-alphabet i.i.d. sources. We
extend these results to variable-rate lossy block coding of stationary ergodic
sources and show that, for bounded metric distortion measures, any finitely
parametrized family of stationary sources satisfying suitable mixing,
smoothness and Vapnik-Chervonenkis learnability conditions admits universal
schemes for joint lossy source coding and identification. We also give several
explicit examples of parametric sources satisfying the regularity conditions.
| Maxim Raginsky | null | 0704.2644 | null | null |
Supervised Feature Selection via Dependence Estimation | cs.LG | We introduce a framework for filtering features that employs the
Hilbert-Schmidt Independence Criterion (HSIC) as a measure of dependence
between the features and the labels. The key idea is that good features should
maximise such dependence. Feature selection for various supervised learning
problems (including classification and regression) is unified under this
framework, and the solutions can be approximated using a backward-elimination
algorithm. We demonstrate the usefulness of our method on both artificial and
real world datasets.
| Le Song, Alex Smola, Arthur Gretton, Karsten Borgwardt, Justin Bedo | null | 0704.2668 | null | null |
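A minimal sketch of the idea in the abstract above: the (biased) empirical HSIC between features and labels, plus a backward-elimination pass that greedily drops the feature whose removal hurts the dependence least. The kernel choices (Gaussian on features, linear on labels) and the bandwidth are assumptions, not necessarily the paper's.

```python
import numpy as np

def hsic(X, y, sigma=1.0):
    """Biased empirical HSIC estimate: tr(K H L H) / (n-1)^2."""
    n = len(y)
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / (2 * sigma ** 2))
    L = np.outer(y, y).astype(float)     # linear kernel on the labels
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def backward_eliminate(X, y, n_keep):
    """Greedily remove the feature whose removal keeps HSIC highest."""
    keep = list(range(X.shape[1]))
    while len(keep) > n_keep:
        scores = [hsic(X[:, [f for f in keep if f != j]], y) for j in keep]
        keep.pop(int(np.argmax(scores)))
    return keep

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=60))  # labels depend on feature 0
print(backward_eliminate(X, y, n_keep=2))          # feature 0 should survive
```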
Equivalence of LP Relaxation and Max-Product for Weighted Matching in
General Graphs | cs.IT cs.AI cs.LG cs.NI math.IT | Max-product belief propagation is a local, iterative algorithm to find the
mode/MAP estimate of a probability distribution. While it has been successfully
employed in a wide variety of applications, there are relatively few
theoretical guarantees of convergence and correctness for general loopy graphs
that may have many short cycles. Of these, even fewer provide exact "necessary
and sufficient" characterizations.
In this paper we investigate the problem of using max-product to find the
maximum weight matching in an arbitrary graph with edge weights. This is done
by first constructing a probability distribution whose mode corresponds to the
optimal matching, and then running max-product. Weighted matching can also be
posed as an integer program, for which there is an LP relaxation. This
relaxation is not always tight. In this paper we show that (1) if the LP
relaxation is tight, then max-product always converges, and moreover to the
correct answer; and (2) if the LP relaxation is loose, then max-product does
not converge. This provides an exact,
data-dependent characterization of max-product performance, and a precise
connection to LP relaxation, which is a well-studied optimization technique.
Also, since LP relaxation is known to be tight for bipartite graphs, our
results generalize other recent results on using max-product to find weighted
matchings in bipartite graphs.
| Sujay Sanghavi | null | 0705.0760 | null | null |
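For reference, the LP relaxation the abstract refers to is the standard one for maximum weight matching: the integer constraint $x_e \in \{0,1\}$ on the edge indicators is relaxed to the interval $[0,1]$. Tightness means the relaxed optimum is attained at an integral point.

$$\max_{x}\ \sum_{e \in E} w_e x_e \qquad \text{s.t.}\quad \sum_{e \ni v} x_e \le 1 \ \ \forall v \in V, \qquad 0 \le x_e \le 1 \ \ \forall e \in E.$$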
HMM Speaker Identification Using Linear and Non-linear Merging
Techniques | cs.LG | Speaker identification is a powerful, non-invasive and inexpensive biometric
technique. The recognition accuracy, however, deteriorates when noise levels
affect a specific band of frequency. In this paper, we present a sub-band based
speaker identification that intends to improve the live testing performance.
Each frequency sub-band is processed and classified independently. We also
compare the linear and non-linear merging techniques for the sub-bands
recognizer. Support vector machines and Gaussian Mixture models are the
non-linear merging techniques that are investigated. Results showed that the
sub-band based method used with linear merging techniques enormously improved
the performance of speaker identification over that of wide-band
recognizers when tested live. A live testing improvement of 9.78% was achieved.
| Unathi Mahola, Fulufhelo V. Nelwamondo, Tshilidzi Marwala | null | 0705.1585 | null | null |
Statistical Mechanics of Nonlinear On-line Learning for Ensemble
Teachers | cs.LG cond-mat.dis-nn | We analyze the generalization performance of a student in a model composed of
nonlinear perceptrons: a true teacher, ensemble teachers, and the student. We
calculate the generalization error of the student analytically or numerically
using statistical mechanics in the framework of on-line learning. We treat two
well-known learning rules: Hebbian learning and perceptron learning. As a
result, it is proven that the nonlinear model shows qualitatively different
behaviors from the linear model. Moreover, it is clarified that Hebbian
learning and perceptron learning show qualitatively different behaviors from
each other. In Hebbian learning, we can analytically obtain the solutions. In
this case, the generalization error monotonically decreases. The steady value
of the generalization error is independent of the learning rate. The larger the
number of teachers is and the more variety the ensemble teachers have, the
smaller the generalization error is. In perceptron learning, we have to
numerically obtain the solutions. In this case, the dynamical behaviors of the
generalization error are non-monotonic. The smaller the learning rate is, the
larger the number of teachers is, and the more variety the ensemble teachers
have, the smaller the minimum value of the generalization error is.
| Hideto Utsumi, Seiji Miyoshi, Masato Okada | 10.1143/JPSJ.76.114001 | 0705.2318 | null | null |
On the monotonization of the training set | cs.LG cs.AI | We consider the problem of minimal correction of the training set to make it
consistent with monotonic constraints. This problem arises during analysis of
data sets via techniques that require monotone data. We show that this problem
is NP-hard in general and is equivalent to finding a maximal independent set in
special directed graphs. Practically important cases of this problem are
considered in detail: those in which the partial order given on the set of
replies is a total order or has dimension 2. We show that the second case can be reduced
to maximization of a quadratic convex function on a convex set. For this case
we construct an approximate polynomial algorithm based on convex optimization.
| Rustem Takhanov | null | 0705.2765 | null | null |
Mixed membership stochastic blockmodels | stat.ME cs.LG math.ST physics.soc-ph stat.ML stat.TH | Observations consisting of measurements on relationships for pairs of objects
arise in many settings, such as protein interaction and gene regulatory
networks, collections of author-recipient email, and social networks. Analyzing
such data with probabilistic models can be delicate because the simple
exchangeability assumptions underlying many boilerplate models no longer hold.
In this paper, we describe a latent variable model of such data called the
mixed membership stochastic blockmodel. This model extends blockmodels for
relational data to ones which capture mixed membership latent relational
structure, thus providing an object-specific low-dimensional representation. We
develop a general variational inference algorithm for fast approximate
posterior inference. We explore applications to social and protein interaction
networks.
| Edoardo M Airoldi, David M Blei, Stephen E Fienberg, Eric P Xing | null | 0705.4485 | null | null |
Loop corrections for message passing algorithms in continuous variable
models | cs.AI cs.LG | In this paper we derive the equations for Loop Corrected Belief Propagation
on a continuous variable Gaussian model. Using the exactness of the averages
for belief propagation for Gaussian models, a different way of obtaining the
covariances is found, based on Belief Propagation on cavity graphs. We discuss
the relation of this loop correction algorithm to Expectation Propagation
algorithms for the case in which the model is no longer Gaussian, but slightly
perturbed by nonlinear terms.
| Bastian Wemmenhove and Bert Kappen | null | 0705.4566 | null | null |
A Novel Model of Working Set Selection for SMO Decomposition Methods | cs.LG cs.AI | In the process of training Support Vector Machines (SVMs) by decomposition
methods, working set selection is an important technique, and several effective
schemes have been applied in this field. To improve working set selection, we
propose a new model for working set selection in sequential minimal
optimization (SMO) decomposition methods. In this model, B is selected as the
working set without reselection. Some properties are given by simple proofs, and
experiments demonstrate that the proposed method is in general faster than
existing methods.
| Zhendong Zhao, Lei Yuan, Yuxuan Wang, Forrest Sheng Bao, Shunyi Zhang,
Yanfei Sun | 10.1109/ICTAI.2007.99 | 0706.0585 | null | null |
Getting started in probabilistic graphical models | q-bio.QM cs.LG physics.soc-ph stat.ME stat.ML | Probabilistic graphical models (PGMs) have become a popular tool for
computational analysis of biological data in a variety of domains. But, what
exactly are they and how do they work? How can we use PGMs to discover patterns
that are biologically relevant? And to what extent can PGMs help us formulate
new hypotheses that are testable at the bench? This note sketches out some
answers and illustrates the main ideas behind the statistical approach to
biological pattern discovery.
| Edoardo M Airoldi | 10.1371/journal.pcbi.0030252 | 0706.2040 | null | null |
A tutorial on conformal prediction | cs.LG stat.ML | Conformal prediction uses past experience to determine precise levels of
confidence in new predictions. Given an error probability $\epsilon$, together
with a method that makes a prediction $\hat{y}$ of a label $y$, it produces a
set of labels, typically containing $\hat{y}$, that also contains $y$ with
probability $1-\epsilon$. Conformal prediction can be applied to any method for
producing $\hat{y}$: a nearest-neighbor method, a support-vector machine, ridge
regression, etc.
Conformal prediction is designed for an on-line setting in which labels are
predicted successively, each one being revealed before the next is predicted.
The most novel and valuable feature of conformal prediction is that if the
successive examples are sampled independently from the same distribution, then
the successive predictions will be right $1-\epsilon$ of the time, even though
they are based on an accumulating dataset rather than on independent datasets.
In addition to the model under which successive examples are sampled
independently, other on-line compression models can also use conformal
prediction. The widely used Gaussian linear model is one of these.
This tutorial presents a self-contained account of the theory of conformal
prediction and works through several numerical examples. A more comprehensive
treatment of the topic is provided in "Algorithmic Learning in a Random World",
by Vladimir Vovk, Alex Gammerman, and Glenn Shafer (Springer, 2005).
| Glenn Shafer and Vladimir Vovk | null | 0706.3188 | null | null |
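A minimal batch ("split") conformal sketch in the spirit of the tutorial above; the paper itself treats the on-line setting, and the 1-NN regressor, the nonconformity score $|y-\hat{y}|$, and the synthetic data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=200)
y = np.sin(x) + 0.1 * rng.normal(size=200)
x_tr, y_tr = x[:100], y[:100]    # proper training set
x_cal, y_cal = x[100:], y[100:]  # calibration set

def predict_1nn(x0):
    """Toy underlying predictor: 1-nearest-neighbour regression."""
    return y_tr[np.argmin(np.abs(x_tr - x0))]

eps = 0.1  # target error probability epsilon
scores = np.sort([abs(yc - predict_1nn(xc)) for xc, yc in zip(x_cal, y_cal)])
k = int(np.ceil((1 - eps) * (len(scores) + 1))) - 1  # conformal quantile index
q = scores[min(k, len(scores) - 1)]

x_new = 1.0
yhat = predict_1nn(x_new)
# The interval contains the true label with probability ~ 1 - eps.
print(f"prediction set for x={x_new}: [{yhat - q:.3f}, {yhat + q:.3f}]")
```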
The Role of Time in the Creation of Knowledge | cs.LG cs.AI cs.IT math.IT | In this paper I assume that in humans the creation of knowledge depends on a
discrete time, or stage, sequential decision-making process subjected to a
stochastic, information transmitting environment. For each time-stage, this
environment randomly transmits Shannon type information-packets to the
decision-maker, who examines each of them for relevancy and then determines his
optimal choices. Using this set of relevant information-packets, the
decision-maker adapts, over time, to the stochastic nature of his environment,
and optimizes the subjective expected rate-of-growth of knowledge. The
decision-maker's optimal actions lead to a decision function that involves,
over time, his view of the subjective entropy of the environmental process and
other important parameters at each time-stage of the process. Using this model
of human behavior, one could create psychometric experiments using computer
simulation and real decision-makers playing programmed games to measure the
resulting human performance.
| Roy E. Murphy | null | 0707.0498 | null | null |
Clustering and Feature Selection using Sparse Principal Component
Analysis | cs.AI cs.LG cs.MS | In this paper, we study the application of sparse principal component
analysis (PCA) to clustering and feature selection problems. Sparse PCA seeks
sparse factors, or linear combinations of the data variables, explaining a
maximum amount of variance in the data while having only a limited number of
nonzero coefficients. PCA is often used as a simple clustering technique and
sparse factors allow us here to interpret the clusters in terms of a reduced
set of variables. We begin with a brief introduction and motivation on sparse
PCA and detail our implementation of the algorithm in d'Aspremont et al.
(2005). We then apply these results to some classic clustering and feature
selection problems arising in biology.
| Ronny Luss, Alexandre d'Aspremont | null | 0707.0701 | null | null |
Model Selection Through Sparse Maximum Likelihood Estimation | cs.AI cs.LG | We consider the problem of estimating the parameters of a Gaussian or binary
distribution in such a way that the resulting undirected graphical model is
sparse. Our approach is to solve a maximum likelihood problem with an added
l_1-norm penalty term. The problem as formulated is convex but the memory
requirements and complexity of existing interior point methods are prohibitive
for problems with more than tens of nodes. We present two new algorithms for
solving problems with at least a thousand nodes in the Gaussian case. Our first
algorithm uses block coordinate descent, and can be interpreted as recursive
l_1-norm penalized regression. Our second algorithm, based on Nesterov's first
order method, yields a complexity estimate with a better dependence on problem
size than existing interior point methods. Using a log determinant relaxation
of the log partition function (Wainwright & Jordan (2006)), we show that these
same algorithms can be used to solve an approximate sparse maximum likelihood
problem for the binary case. We test our algorithms on synthetic data, as well
as on gene expression and senate voting records data.
| Onureena Banerjee, Laurent El Ghaoui, Alexandre d'Aspremont | null | 0707.0704 | null | null |
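In its standard form, the penalized maximum likelihood problem described above is, for a sample covariance $S$ and regularization parameter $\lambda > 0$,

$$\hat{\Theta} \;=\; \arg\max_{\Theta \succ 0}\ \log\det\Theta \;-\; \operatorname{tr}(S\Theta) \;-\; \lambda\,\|\Theta\|_1,$$

where $\|\Theta\|_1$ is the elementwise $l_1$ norm; zeros in $\hat{\Theta}$ correspond to missing edges in the estimated graphical model.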
Optimal Solutions for Sparse Principal Component Analysis | cs.AI cs.LG | Given a sample covariance matrix, we examine the problem of maximizing the
variance explained by a linear combination of the input variables while
constraining the number of nonzero coefficients in this combination. This is
known as sparse principal component analysis and has a wide array of
applications in machine learning and engineering. We formulate a new
semidefinite relaxation to this problem and derive a greedy algorithm that
computes a full set of good solutions for all target numbers of non zero
coefficients, with total complexity O(n^3), where n is the number of variables.
We then use the same relaxation to derive sufficient conditions for global
optimality of a solution, which can be tested in O(n^3) per pattern. We discuss
applications in subset selection and sparse recovery and show on artificial
examples and biological data that our algorithm does provide globally optimal
solutions in many cases.
| Alexandre d'Aspremont, Francis Bach, Laurent El Ghaoui | null | 0707.0705 | null | null |
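The problem the abstract describes can be written, for a covariance matrix $\Sigma$ and target cardinality $k$, as

$$\max_{x \in \mathbb{R}^n}\ x^\top \Sigma x \qquad \text{s.t.}\quad \|x\|_2 = 1,\ \ \mathbf{card}(x) \le k,$$

a non-convex problem, which motivates the semidefinite relaxation and the $O(n^3)$ greedy algorithm mentioned above.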
A New Generalization of Chebyshev Inequality for Random Vectors | math.ST cs.LG math.PR stat.AP stat.TH | In this article, we derive a new generalization of the Chebyshev inequality for
random vectors. We demonstrate that the new generalization is much less
conservative than the classical generalization.
| Xinjia Chen | null | 0707.0805 | null | null |
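For context (the abstract states neither bound explicitly), one classical generalization of the Chebyshev inequality to a random vector $X$ with mean $\mu$ and covariance $\Sigma$ is

$$\Pr\big(\|X-\mu\|_2 \ge \epsilon\big) \;\le\; \frac{\operatorname{tr}(\Sigma)}{\epsilon^2},$$

which follows from Markov's inequality applied to $\|X-\mu\|_2^2$; the paper claims a much less conservative bound of this kind.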
Clusters, Graphs, and Networks for Analysing Internet-Web-Supported
Communication within a Virtual Community | cs.AI cs.LG | The proposal is to use clusters, graphs and networks as models in order to
analyse the Web structure. Clusters, graphs and networks provide knowledge
representation and organization. Clusters were generated by co-site analysis.
The sample is a set of academic Web sites from the countries belonging to the
European Union. These clusters are here revisited from the point of view of
graph theory and social network analysis. This is a quantitative and structural
analysis. In fact, the Internet is a computer network that connects people and
organizations. Thus we may consider it to be a social network. The set of Web
academic sites represents an empirical social network, and is viewed as a
virtual community. The network structural properties are here analysed applying
together cluster analysis, graph theory and social network analysis.
| Xavier Polanco (INIST) | null | 0707.1452 | null | null |
Universal Reinforcement Learning | cs.IT cs.LG math.IT | We consider an agent interacting with an unmodeled environment. At each time,
the agent makes an observation, takes an action, and incurs a cost. Its actions
can influence future observations and costs. The goal is to minimize the
long-term average cost. We propose a novel algorithm, known as the active LZ
algorithm, for optimal control based on ideas from the Lempel-Ziv scheme for
universal data compression and prediction. We establish that, under the active
LZ algorithm, if there exists an integer $K$ such that the future is
conditionally independent of the past given a window of $K$ consecutive actions
and observations, then the average cost converges to the optimum. Experimental
results involving the game of Rock-Paper-Scissors illustrate merits of the
algorithm.
| Vivek F. Farias, Ciamac C. Moallemi, Tsachy Weissman, Benjamin Van Roy | null | 0707.3087 | null | null |
Consistency of the group Lasso and multiple kernel learning | cs.LG | We consider the least-square regression problem with regularization by a
block 1-norm, i.e., a sum of Euclidean norms over spaces of dimensions larger
than one. This problem, referred to as the group Lasso, extends the usual
regularization by the 1-norm where all spaces have dimension one, where it is
commonly referred to as the Lasso. In this paper, we study the asymptotic model
consistency of the group Lasso. We derive necessary and sufficient conditions
for the consistency of group Lasso under practical assumptions, such as model
misspecification. When the linear predictors and Euclidean norms are replaced
by functions and reproducing kernel Hilbert norms, the problem is usually
referred to as multiple kernel learning and is commonly used for learning from
heterogeneous data sources and for nonlinear variable selection. Using tools
from functional analysis, and in particular covariance operators, we extend the
consistency results to this infinite dimensional case and also propose an
adaptive scheme to obtain a consistent model estimate, even when the necessary
condition required for the non-adaptive scheme is not satisfied.
| Francis Bach (WILLOW Project - Inria/Ens) | null | 0707.3390 | null | null |
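For concreteness, the group Lasso objective discussed above is, for disjoint groups $w_1, \dots, w_G$ of regression coefficients,

$$\min_{w}\ \frac{1}{2n}\sum_{i=1}^{n}\big(y_i - w^\top x_i\big)^2 \;+\; \lambda \sum_{g=1}^{G}\|w_g\|_2,$$

which reduces to the usual Lasso when every group has dimension one, since $\|w_g\|_2 = |w_g|$ for scalars.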
Quantum Algorithms for Learning and Testing Juntas | quant-ph cs.LG | In this article we develop quantum algorithms for learning and testing
juntas, i.e. Boolean functions which depend only on an unknown set of k out of
n input variables. Our aim is to develop efficient algorithms:
- whose sample complexity has no dependence on n, the dimension of the domain
the Boolean functions are defined over;
- with no access to any classical or quantum membership ("black-box")
queries. Instead, our algorithms use only classical examples generated
uniformly at random and fixed quantum superpositions of such classical
examples;
- which require only a few quantum examples but possibly many classical
random examples (which are considered quite "cheap" relative to quantum
examples).
Our quantum algorithms are based on a subroutine FS which enables sampling
according to the Fourier spectrum of f; the FS subroutine was used in earlier
work of Bshouty and Jackson on quantum learning. Our results are as follows:
- We give an algorithm for testing k-juntas to accuracy $\epsilon$ that uses
$O(k/\epsilon)$ quantum examples. This improves on the number of examples used
by the best known classical algorithm.
- We establish the following lower bound: any FS-based k-junta testing
algorithm requires $\Omega(\sqrt{k})$ queries.
- We give an algorithm for learning $k$-juntas to accuracy $\epsilon$ that
uses $O(\epsilon^{-1} k\log k)$ quantum examples and $O(2^k \log(1/\epsilon))$
random examples. We show that this learning algorithm is close to optimal by
giving a related lower bound.
| Alp Atici, Rocco A. Servedio | 10.1007/s11128-007-0061-6 | 0707.3479 | null | null |
Virtual screening with support vector machines and structure kernels | q-bio.QM cs.LG | Support vector machines and kernel methods have recently gained considerable
attention in chemoinformatics. They offer generally good performance for
problems of supervised classification or regression, and provide a flexible and
computationally efficient framework to include relevant information and prior
knowledge about the data and problems to be handled. In particular, with kernel
methods molecules do not need to be represented and stored explicitly as
vectors or fingerprints, but only to be compared to each other through a
comparison function technically called a kernel. While classical kernels can be
used to compare vector or fingerprint representations of molecules, completely
new kernels were developed in the recent years to directly compare the 2D or 3D
structures of molecules, without the need for an explicit vectorization step
through the extraction of molecular descriptors. While still in their infancy,
these approaches have already demonstrated their relevance on several toxicity
prediction and structure-activity relationship problems.
| Pierre Mahé (XRCE), Jean-Philippe Vert (CB) | null | 0708.0171 | null | null |
Structure or Noise? | physics.data-an cond-mat.stat-mech cs.IT cs.LG math-ph math.IT math.MP math.ST nlin.CD stat.TH | We show how rate-distortion theory provides a mechanism for automated theory
building by naturally distinguishing between regularity and randomness. We
start from the simple principle that model variables should, as much as
possible, render the future and past conditionally independent. From this, we
construct an objective function for model making whose extrema embody the
trade-off between a model's structural complexity and its predictive power. The
solutions correspond to a hierarchy of models that, at each level of
complexity, achieve optimal predictive power at minimal cost. In the limit of
maximal prediction the resulting optimal model identifies a process's intrinsic
organization by extracting the underlying causal states. In this limit, the
model's complexity is given by the statistical complexity, which is known to be
minimal for achieving maximum prediction. Examples show how theory building can
profit from analyzing a process's causal compressibility, which is reflected in
the optimal models' rate-distortion curve--the process's characteristic for
optimally balancing structure and noise at different levels of representation.
| Susanne Still, James P. Crutchfield | null | 0708.0654 | null | null |
Cost-minimising strategies for data labelling: optimal stopping and
active learning | cs.LG | Supervised learning deals with the inference of a distribution over an output
or label space $\mathcal{Y}$ conditioned on points in an observation space $\mathcal{X}$, given
a training dataset $D$ of pairs in $\mathcal{X} \times \mathcal{Y}$. However, in many
applications of interest, acquisition of large amounts of observations is easy,
while the process of generating labels is time-consuming or costly. One way to
deal with this problem is {\em active} learning, where points to be labelled
are selected with the aim of creating a model with better performance than that
of a model trained on an equal number of randomly sampled points. In this
paper, we instead propose to deal with the labelling cost directly: The
learning goal is defined as the minimisation of a cost which is a function of
the expected model performance and the total cost of the labels used. This
allows the development of general strategies and specific algorithms for (a)
optimal stopping, where the expected cost dictates whether label acquisition
should continue (b) empirical evaluation, where the cost is used as a
performance metric for a given combination of inference, stopping and sampling
methods. Though the main focus of the paper is optimal stopping, we also aim to
provide the background for further developments and discussion in the related
field of active learning.
| Christos Dimitrakakis and Christian Savu-Krohn | null | 0708.1242 | null | null |
Defensive forecasting for optimal prediction with expert advice | cs.LG | The method of defensive forecasting is applied to the problem of prediction
with expert advice for binary outcomes. It turns out that defensive forecasting
is not only competitive with the Aggregating Algorithm but also handles the
case of "second-guessing" experts, whose advice depends on the learner's
prediction; this paper assumes that the dependence on the learner's prediction
is continuous.
| Vladimir Vovk | null | 0708.1503 | null | null |
Optimal Causal Inference: Estimating Stored Information and
Approximating Causal Architecture | cs.IT cond-mat.stat-mech cs.LG math.IT math.ST stat.TH | We introduce an approach to inferring the causal architecture of stochastic
dynamical systems that extends rate distortion theory to use causal
shielding---a natural principle of learning. We study two distinct cases of
causal inference: optimal causal filtering and optimal causal estimation.
Filtering corresponds to the ideal case in which the probability distribution
of measurement sequences is known, giving a principled method to approximate a
system's causal structure at a desired level of representation. We show that,
in the limit in which a model complexity constraint is relaxed, filtering finds
the exact causal architecture of a stochastic dynamical system, known as the
causal-state partition. From this, one can estimate the amount of historical
information the process stores. More generally, causal filtering finds a graded
model-complexity hierarchy of approximations to the causal architecture. Abrupt
changes in the hierarchy, as a function of approximation, capture distinct
scales of structural organization.
For nonideal cases with finite data, we show how the correct number of
underlying causal states can be found by optimal causal estimation. A
previously derived model complexity control term allows us to correct for the
effect of statistical fluctuations in probability estimates and thereby avoid
over-fitting.
| Susanne Still, James P. Crutchfield, Christopher J. Ellison | null | 0708.1580 | null | null |
On Semimeasures Predicting Martin-Loef Random Sequences | cs.IT cs.LG math.IT math.PR | Solomonoff's central result on induction is that the posterior of a universal
semimeasure M converges rapidly and with probability 1 to the true sequence
generating posterior mu, if the latter is computable. Hence, M is eligible as a
universal sequence predictor in case of unknown mu. Despite some nearby results
and proofs in the literature, the stronger result of convergence for all
(Martin-Loef) random sequences remained open. Such a convergence result would
be particularly interesting and natural, since randomness can be defined in
terms of M itself. We show that there are universal semimeasures M which do not
converge for all random sequences, i.e. we give a partial negative answer to
the open problem. We also provide a positive answer for some non-universal
semimeasures. We define the incomputable measure D as a mixture over all
computable measures and the enumerable semimeasure W as a mixture over all
enumerable nearly-measures. We show that W converges to D and D to mu on all
random sequences. The Hellinger distance measuring closeness of two
distributions plays a central role.
| Marcus Hutter and Andrej Muchnik | null | 0708.2319 | null | null |
Continuous and randomized defensive forecasting: unified view | cs.LG | Defensive forecasting is a method of transforming laws of probability (stated
in game-theoretic terms as strategies for Sceptic) into forecasting algorithms.
There are two known varieties of defensive forecasting: "continuous", in which
Sceptic's moves are assumed to depend on the forecasts in a (semi)continuous
manner and which produces deterministic forecasts, and "randomized", in which
the dependence of Sceptic's moves on the forecasts is arbitrary and
Forecaster's moves are allowed to be randomized. This note shows that the
randomized variety can be obtained from the continuous variety by smearing
Sceptic's moves to make them continuous.
| Vladimir Vovk | null | 0708.2353 | null | null |
A Dichotomy Theorem for General Minimum Cost Homomorphism Problem | cs.LG cs.CC | In the constraint satisfaction problem ($CSP$), the aim is to find an
assignment of values to a set of variables subject to specified constraints. In
the minimum cost homomorphism problem ($MinHom$), one is additionally given
weights $c_{va}$ for every variable $v$ and value $a$, and the aim is to find
an assignment $f$ to the variables that minimizes $\sum_{v} c_{vf(v)}$. Let
$MinHom(\Gamma)$ denote the $MinHom$ problem parameterized by the set of
predicates allowed for constraints. $MinHom(\Gamma)$ is related to many
well-studied combinatorial optimization problems, and concrete applications can
be found in, for instance, defence logistics and machine learning. We show that
$MinHom(\Gamma)$ can be studied by using algebraic methods similar to those
used for CSPs. With the aid of algebraic techniques, we classify the
computational complexity of $MinHom(\Gamma)$ for all choices of $\Gamma$. Our
result settles a general dichotomy conjecture previously resolved only for
certain classes of directed graphs, [Gutin, Hell, Rafiey, Yeo, European J. of
Combinatorics, 2008].
| Rustem Takhanov | null | 0708.3226 | null | null |
Filtering Additive Measurement Noise with Maximum Entropy in the Mean | cs.LG | The purpose of this note is to show how the method of maximum entropy in the
mean (MEM) may be used to improve parametric estimation when the measurements
are corrupted by a large level of noise. The method is developed in the context
of a concrete example: that of estimating the parameter of an exponential
distribution. We compare the performance of our method with the Bayesian and
maximum likelihood approaches.
| Henryk Gzyl and Enrique ter Horst | null | 0709.0509 | null | null |
On Universal Prediction and Bayesian Confirmation | math.ST cs.IT cs.LG math.IT stat.ML stat.TH | The Bayesian framework is a well-studied and successful framework for
inductive reasoning, which includes hypothesis testing and confirmation,
parameter estimation, sequence prediction, classification, and regression. But
standard statistical guidelines for choosing the model class and prior are not
always available or fail, in particular in complex situations. Solomonoff
completed the Bayesian framework by providing a rigorous, unique, formal, and
universal choice for the model class and the prior. We discuss in breadth how
and in which sense universal (non-i.i.d.) sequence prediction solves various
(philosophical) problems of traditional Bayesian sequence prediction. We show
that Solomonoff's model possesses many desirable properties: Strong total and
weak instantaneous bounds, and in contrast to most classical continuous prior
densities has no zero p(oste)rior problem, i.e. can confirm universal
hypotheses, is reparametrization and regrouping invariant, and avoids the
old-evidence and updating problem. It even performs well (actually better) in
non-computable environments.
| Marcus Hutter | null | 0709.1516 | null | null |
Learning for Dynamic Bidding in Cognitive Radio Resources | cs.LG cs.GT | In this paper, we model the various wireless users in a cognitive radio
network as a collection of selfish, autonomous agents that strategically
interact in order to acquire the dynamically available spectrum opportunities.
Our main focus is on developing solutions for wireless users to successfully
compete with each other for the limited and time-varying spectrum
opportunities, given the experienced dynamics in the wireless network. We
categorize these dynamics into two types: one is the disturbance due to the
environment (e.g. wireless channel conditions, source traffic characteristics,
etc.) and the other is the impact caused by competing users. To analyze the
interactions among users given the environment disturbance, we propose a
general stochastic framework for modeling how the competition among users for
spectrum opportunities evolves over time. At each stage of the dynamic resource
allocation, a central spectrum moderator auctions the available resources and
the users strategically bid for the required resources. The joint bid actions
affect the resource allocation and hence, the rewards and future strategies of
all users. Based on the observed resource allocation and corresponding rewards
from previous allocations, we propose a best response learning algorithm that
can be deployed by wireless users to improve their bidding policy at each
stage. The simulation results show that by deploying the proposed best response
learning algorithm, the wireless users can significantly improve their own
performance in terms of both the packet loss rate and the incurred cost for the
used resources.
| Fangwen Fu, Mihaela van der Schaar | null | 0709.2446 | null | null |
Mutual information for the selection of relevant variables in
spectrometric nonlinear modelling | cs.LG cs.NE stat.AP | Data from spectrophotometers form vectors of a large number of exploitable
variables. Building quantitative models using these variables most often
requires using a smaller set of variables than the initial one. Indeed, a too
large number of input variables to a model results in a too large number of
parameters, leading to overfitting and poor generalization abilities. In this
paper, we suggest the use of the mutual information measure to select variables
from the initial set. The mutual information measures the information content
in input variables with respect to the model output, without making any
assumption on the model that will be used; it is thus suitable for nonlinear
modelling. In addition, it leads to the selection of variables among the
initial set, and not to linear or nonlinear combinations of them. Without
decreasing model performance compared to other variable projection
methods, it therefore allows greater interpretability of the results.
| Fabrice Rossi (INRIA Rocquencourt / INRIA Sophia Antipolis), Amaury
Lendasse (CIS), Damien François (CESAME), Vincent Wertz (CESAME), Michel
Verleysen (DICE - MLG) | 10.1016/j.chemolab.2005.06.010 | 0709.3427 | null | null |
Fast Algorithm and Implementation of Dissimilarity Self-Organizing Maps | cs.NE cs.LG | In many real world applications, data cannot be accurately represented by
vectors. In those situations, one possible solution is to rely on dissimilarity
measures that enable sensible comparison between observations. Kohonen's
Self-Organizing Map (SOM) has been adapted to data described only through their
dissimilarity matrix. This algorithm provides both nonlinear projection and
clustering of non-vector data. Unfortunately, the algorithm suffers from a high
cost that makes it quite difficult to use with voluminous data sets. In this
paper, we propose a new algorithm that provides an important reduction of the
theoretical cost of the dissimilarity SOM without changing its outcome (the
results are exactly the same as the ones obtained with the original algorithm).
Moreover, we introduce implementation methods that result in very short running
times. Improvements deduced from the theoretical cost model are validated on
simulated and real world data (a word list clustering problem). We also
demonstrate that the proposed implementation methods reduce by a factor up to 3
the running time of the fast algorithm over a standard implementation.
| Brieuc Conan-Guez (LITA), Fabrice Rossi (INRIA Rocquencourt / INRIA
Sophia Antipolis), A\"icha El Golli (INRIA Rocquencourt / INRIA Sophia
Antipolis) | 10.1016/j.neunet.2006.05.002 | 0709.3461 | null | null |
An adaptation of self-organizing maps for data described by a
dissimilarity table | cs.NE cs.LG | Many data analysis methods cannot be applied to data that are not represented
by a fixed number of real values, whereas most real-world observations are
not readily available in such a format. Vector based data analysis methods have
therefore to be adapted in order to be used with non standard complex data. A
flexible and general solution for this adaptation is to use a (dis)similarity
measure. Indeed, thanks to expert knowledge on the studied data, it is
generally possible to define a measure that can be used to make pairwise
comparison between observations. General data analysis methods are then
obtained by adapting existing methods to (dis)similarity matrices. In this
article, we propose an adaptation of Kohonen's Self Organizing Map (SOM) to
(dis)similarity data. The proposed algorithm is an adapted version of the
vector based batch SOM. The method is validated on real world data: we provide
an analysis of the usage patterns of the web site of the Institut National de
Recherche en Informatique et Automatique, constructed thanks to web log mining
method.
| A\"icha El Golli (INRIA Rocquencourt / INRIA Sophia Antipolis),
Fabrice Rossi (INRIA Rocquencourt / INRIA Sophia Antipolis), Brieuc
Conan-Guez (LITA), Yves Lechevallier (INRIA Rocquencourt / INRIA Sophia
Antipolis) | null | 0709.3586 | null | null |
Self-organizing maps and symbolic data | cs.NE cs.LG | In data analysis, new forms of complex data have to be considered, such as
symbolic data, functional data, web data, trees, SQL queries and
multimedia data. In this context, classical data analysis for knowledge
discovery based on calculating the center of gravity cannot be used because
the inputs are not $\mathbb{R}^p$ vectors. In this paper, we present an application
on real world symbolic data using the self-organizing map. To this end, we
propose an extension of the self-organizing map that can handle symbolic data.
| A\"icha El Golli (INRIA Rocquencourt / INRIA Sophia Antipolis), Brieuc
Conan-Guez (INRIA Rocquencourt / INRIA Sophia Antipolis), Fabrice Rossi
(INRIA Rocquencourt / INRIA Sophia Antipolis) | null | 0709.3587 | null | null |
Fast Selection of Spectral Variables with B-Spline Compression | cs.LG stat.AP | The large number of spectral variables in most data sets encountered in
spectral chemometrics often renders the prediction of a dependent variable
uneasy. The number of variables hopefully can be reduced, by using either
projection techniques or selection methods; the latter allow for the
interpretation of the selected variables. Since the optimal approach of testing
all possible subsets of variables with the prediction model is intractable, an
incremental selection approach using nonparametric statistics is a good
option, as it avoids the computationally intensive use of the model itself. It
has two drawbacks however: the number of groups of variables to test is still
huge, and colinearities can make the results unstable. To overcome these
limitations, this paper presents a method to select groups of spectral
variables. It consists in a forward-backward procedure applied to the
coefficients of a B-Spline representation of the spectra. The criterion used in
the forward-backward procedure is the mutual information, which makes it possible
to find nonlinear dependencies between variables, in contrast to the commonly
used correlation. The spline representation is used to get interpretability of the
results, as groups of consecutive spectral variables will be selected. The
experiments conducted on NIR spectra from fescue grass and diesel fuels show
that the method provides clearly identified groups of selected variables,
making interpretation easy, while keeping a low computational load. The
prediction performances obtained using the selected coefficients are higher
than those obtained by the same method applied directly to the original
variables and similar to those obtained using traditional models, although
using significantly fewer spectral variables.
| Fabrice Rossi (INRIA Rocquencourt / INRIA Sophia Antipolis), Damien
François (CESAME), Vincent Wertz (CESAME), Marc Meurens (BNUT), Michel
Verleysen (DICE - MLG) | 10.1016/j.chemolab.2006.06.007 | 0709.3639 | null | null |
Resampling methods for parameter-free and robust feature selection with
mutual information | cs.LG stat.AP | Combining the mutual information criterion with a forward feature selection
strategy offers a good trade-off between optimality of the selected feature
subset and computation time. However, it requires to set the parameter(s) of
the mutual information estimator and to determine when to halt the forward
procedure. These two choices are difficult to make because, as the
dimensionality of the subset increases, the estimation of the mutual
information becomes less and less reliable. This paper proposes to use
resampling methods, a K-fold cross-validation and the permutation test, to
address both issues. The resampling methods bring information about the
variance of the estimator, information which can then be used to automatically
set the parameter and to calculate a threshold to stop the forward procedure.
The procedure is illustrated on a synthetic dataset as well as on real-world
examples.
| Damien François (CESAME), Fabrice Rossi (INRIA Rocquencourt /
INRIA Sophia Antipolis), Vincent Wertz (CESAME), Michel Verleysen (DICE -
MLG) | 10.1016/j.neucom.2006.11.019 | 0709.3640 | null | null |
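A minimal sketch of the permutation-test idea described above: the mutual information of a candidate feature is compared against a null distribution obtained by permuting the labels, and the forward procedure continues only while the estimate exceeds that threshold. The MI estimator used here (scikit-learn's k-NN based one) is an assumption, not the paper's.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
x = rng.normal(size=(300, 1))
y = x[:, 0] ** 2 + 0.5 * rng.normal(size=300)  # nonlinear dependence

mi = mutual_info_regression(x, y, random_state=0)[0]
null = [mutual_info_regression(x, rng.permutation(y), random_state=0)[0]
        for _ in range(50)]                    # MI under the no-dependence null
threshold = np.quantile(null, 0.95)            # 95th percentile of the null
print(f"MI = {mi:.3f}, threshold = {threshold:.3f}, keep feature: {mi > threshold}")
```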
Evolving Classifiers: Methods for Incremental Learning | cs.LG cs.AI cs.NE | The ability of a classifier to take on new information and classes by
evolving the classifier without it having to be fully retrained is known as
incremental learning. Incremental learning has been successfully applied to
many classification problems, where the data is changing and is not all
available at once. In this paper there is a comparison between Learn++, which
is one of the most recent incremental learning algorithms, and the new proposed
method of Incremental Learning Using Genetic Algorithm (ILUGA). Learn++ has
shown good incremental learning capabilities on benchmark datasets on which the
new ILUGA method has been tested. ILUGA has also shown good incremental
learning ability using only a few classifiers and does not suffer from
catastrophic forgetting. The results obtained for ILUGA on the Optical
Character Recognition (OCR) and Wine datasets are good, with an overall
accuracy of 93% and 94% respectively, showing a 4% improvement over Learn++.MT
for the difficult multi-class OCR dataset.
| Greg Hulley and Tshilidzi Marwala | null | 0709.3965 | null | null |
Classification of Images Using Support Vector Machines | cs.LG cs.AI | Support Vector Machines (SVMs) are a relatively new supervised classification
technique to the land cover mapping community. They have their roots in
Statistical Learning Theory and have gained prominence because they are robust,
accurate and are effective even when using a small training sample. By their
nature SVMs are essentially binary classifiers; however, they can be adapted to
handle the multiple classification tasks common in remote sensing studies. The
two approaches commonly used are the One-Against-One (1A1) and One-Against-All
(1AA) techniques. In this paper, these approaches are evaluated in as far as
their impact and implication for land cover mapping. The main finding from this
research is that whereas the 1AA technique is more predisposed to yielding
unclassified and mixed pixels, the resulting classification accuracy is not
significantly different from the 1A1 approach. It is the authors' conclusion that
ultimately the choice of technique adopted boils down to personal preference
and the uniqueness of the dataset at hand.
| Gidudu Anthony, Hulley Greg and Marwala Tshilidzi | null | 0709.3967 | null | null |
Prediction with expert advice for the Brier game | cs.LG | We show that the Brier game of prediction is mixable and find the optimal
learning rate and substitution function for it. The resulting prediction
algorithm is applied to predict results of football and tennis matches. The
theoretical performance guarantee turns out to be rather tight on these data
sets, especially in the case of the more extensive tennis data.
| Vladimir Vovk and Fedor Zhdanov | null | 0710.0485 | null | null |
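For reference, the Brier loss of a probability forecast $\gamma$ on an outcome space $\Omega$ against the realized outcome $y$ is

$$\lambda(y,\gamma) \;=\; \sum_{o \in \Omega}\big(\gamma\{o\} - \delta_y\{o\}\big)^2,$$

where $\delta_y$ is the point mass on $y$; for a binary outcome with forecast probability $p$ on the realized outcome, this equals $2(1-p)^2$.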
Association Rules in the Relational Calculus | cs.DB cs.LG cs.LO | One of the most utilized data mining tasks is the search for association
rules. Association rules represent significant relationships between items in
transactions. We extend the concept of association rule to represent a much
broader class of associations, which we refer to as \emph{entity-relationship
rules.} Semantically, entity-relationship rules express associations between
properties of related objects. Syntactically, these rules are based on a broad
subclass of safe domain relational calculus queries. We propose a new
definition of support and confidence for entity-relationship rules and for the
frequency of entity-relationship queries. We prove that the definition of
frequency satisfies standard probability axioms and the Apriori property.
| Oliver Schulte, Flavia Moser, Martin Ester and Zhiyong Lu | null | 0710.2083 | null | null |
The structure of verbal sequences analyzed with unsupervised learning
techniques | cs.CL cs.AI cs.LG | Data mining allows the exploration of sequences of phenomena, whereas one
usually tends to focus on isolated phenomena or on the relation between two
phenomena. It offers invaluable tools for theoretical analyses and exploration
of the structure of sentences, texts, dialogues, and speech. We report here the
results of an attempt at using it for inspecting sequences of verbs from French
accounts of road accidents. This analysis comes from an original approach of
unsupervised training allowing the discovery of the structure of sequential
data. The entries of the analyzer were only made of the verbs appearing in the
sentences. It provided a classification of the links between two successive
verbs into four distinct clusters, allowing thus text segmentation. We give
here an interpretation of these clusters by applying a statistical analysis to
independent semantic annotations.
| Catherine Recanati (LIPN), Nicoleta Rogovschi (LIPN), Younès Bennani
(LIPN) | null | 0710.2446 | null | null |
Consistency of trace norm minimization | cs.LG | Regularization by the sum of singular values, also referred to as the trace
norm, is a popular technique for estimating low rank rectangular matrices. In
this paper, we extend some of the consistency results of the Lasso to provide
necessary and sufficient conditions for rank consistency of trace norm
minimization with the square loss. We also provide an adaptive version that is
rank consistent even when the necessary condition for the non-adaptive version
is not fulfilled.
| Francis Bach (WILLOW Project - Inria/Ens) | null | 0710.2848 | null | null |
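For readers who want to experiment with the estimator above: trace norm
regularization with the square loss is commonly attacked with proximal methods
whose key step is singular value soft-thresholding. A minimal numpy sketch of
that proximal operator (lam is a hypothetical regularization weight, not a
value from the paper):

    import numpy as np

    def trace_norm_prox(X, lam):
        # Proximal operator of lam * ||.||_*: soft-threshold the singular
        # values of X and reassemble the matrix.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt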
An efficient reduction of ranking to classification | cs.LG cs.IR | This paper describes an efficient reduction of the learning problem of
ranking to binary classification. The reduction guarantees an average pairwise
misranking regret of at most that of the binary classifier regret, improving a
recent result of Balcan et al., which only guarantees a factor of 2. Moreover,
our reduction applies to a broader class of ranking loss functions, admits a
simpler proof, and the expected running time complexity of our algorithm in
terms of number of calls to a classifier or preference function is improved
from $\Omega(n^2)$ to $O(n \log n)$. In addition, when the top $k$ ranked
elements only are required ($k \ll n$), as in many applications in information
extraction or search engines, the time complexity of our algorithm can be
further reduced to $O(k \log k + n)$. Our reduction and algorithm are thus
practical for realistic applications where the number of points to rank exceeds
several thousand. Many of our results also extend beyond the bipartite case
previously studied.
Our reduction is a randomized one. To complement our result, we also derive
lower bounds on any deterministic reduction from binary (preference)
classification to ranking, implying that our use of a randomized reduction is
essentially necessary for the guarantees we provide.
| Nir Ailon and Mehryar Mohri | null | 0710.2889 | null | null |
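To convey the flavor of the reduction above (a sketch in its spirit, not a
verbatim transcription of the paper's algorithm): a randomized QuickSort-style
procedure ranks n items using O(n log n) expected calls to a pairwise
preference function. Here prefer is a hypothetical black box derived from the
binary classifier:

    import random

    def rank_by_preference(items, prefer):
        # prefer(a, b) -> True if a should be ranked above b.
        if len(items) <= 1:
            return list(items)
        pivot = random.choice(items)
        rest = [x for x in items if x is not pivot]
        above = [x for x in rest if prefer(x, pivot)]
        below = [x for x in rest if not prefer(x, pivot)]
        return (rank_by_preference(above, prefer) + [pivot]
                + rank_by_preference(below, prefer))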
Combining haplotypers | cs.LG cs.CE q-bio.QM | Statistically resolving the underlying haplotype pair for a genotype
measurement is an important intermediate step in gene mapping studies, and has
received much attention recently. Consequently, a variety of methods for this
problem have been developed. Different methods employ different statistical
models, and thus implicitly encode different assumptions about the nature of
the underlying haplotype structure. Depending on the population sample in
question, their relative performance can vary greatly, and it is unclear which
method to choose for a particular sample. Instead of choosing a single method,
we explore combining predictions returned by different methods in a principled
way, and thereby circumvent the problem of method selection.
We propose several techniques for combining haplotype reconstructions and
analyze their computational properties. In an experimental study on real-world
haplotype data we show that such techniques can provide more accurate and
robust reconstructions, and are useful for outlier detection. Typically, the
combined prediction is at least as accurate as or even more accurate than the
best individual method, effectively circumventing the method selection problem.
| Matti K\"a\"ari\"ainen, Niels Landwehr, Sampsa Lappalainen and Taneli
Mielik\"ainen | null | 0710.5116 | null | null |
A Tutorial on Spectral Clustering | cs.DS cs.LG | In recent years, spectral clustering has become one of the most popular
modern clustering algorithms. It is simple to implement, can be solved
efficiently by standard linear algebra software, and very often outperforms
traditional clustering algorithms such as the k-means algorithm. At first
glance, spectral clustering appears slightly mysterious, and it is not obvious
to see why it works at all and what it really does. The goal of this tutorial
is to give some intuition on those questions. We describe different graph
Laplacians and their basic properties, present the most common spectral
clustering algorithms, and derive those algorithms from scratch by several
different approaches. Advantages and disadvantages of the different spectral
clustering algorithms are discussed.
| Ulrike von Luxburg | null | 0711.0189 | null | null |
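A minimal sketch of the unnormalized variant covered by the tutorial above,
assuming a precomputed symmetric affinity matrix W and using scikit-learn only
for the final k-means step:

    import numpy as np
    from sklearn.cluster import KMeans

    def spectral_clustering(W, k):
        # Unnormalized graph Laplacian L = D - W.
        L = np.diag(W.sum(axis=1)) - W
        # np.linalg.eigh returns eigenvalues in ascending order, so the
        # first k eigenvectors span the desired spectral embedding.
        _, vecs = np.linalg.eigh(L)
        embedding = vecs[:, :k]
        return KMeans(n_clusters=k, n_init=10).fit_predict(embedding)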
Building Rules on Top of Ontologies for the Semantic Web with Inductive
Logic Programming | cs.AI cs.LG | Building rules on top of ontologies is the ultimate goal of the logical layer
of the Semantic Web. To this aim an ad-hoc mark-up language for this layer is
currently under discussion. It is intended to follow the tradition of hybrid
knowledge representation and reasoning systems such as $\mathcal{AL}$-log that
integrates the description logic $\mathcal{ALC}$ and the function-free Horn
clausal language \textsc{Datalog}. In this paper we consider the problem of
automating the acquisition of these rules for the Semantic Web. We propose a
general framework for rule induction that adopts the methodological apparatus
of Inductive Logic Programming and relies on the expressive and deductive power
of $\mathcal{AL}$-log. The framework is valid whatever the scope of induction
(description vs. prediction) is. Yet, for illustrative purposes, we also
discuss an instantiation of the framework which aims at description and turns
out to be useful in Ontology Refinement.
Keywords: Inductive Logic Programming, Hybrid Knowledge Representation and
Reasoning Systems, Ontologies, Semantic Web.
Note: To appear in Theory and Practice of Logic Programming (TPLP)
| Francesca A. Lisi | null | 0711.1814 | null | null |
Empirical Evaluation of Four Tensor Decomposition Algorithms | cs.LG cs.CL cs.IR | Higher-order tensor decompositions are analogous to the familiar Singular
Value Decomposition (SVD), but they transcend the limitations of matrices
(second-order tensors). SVD is a powerful tool that has achieved impressive
results in information retrieval, collaborative filtering, computational
linguistics, computational vision, and other fields. However, SVD is limited to
two-dimensional arrays of data (two modes), and many potential applications
have three or more modes, which require higher-order tensor decompositions.
This paper evaluates four algorithms for higher-order tensor decomposition:
Higher-Order Singular Value Decomposition (HO-SVD), Higher-Order Orthogonal
Iteration (HOOI), Slice Projection (SP), and Multislice Projection (MP). We
measure the time (elapsed run time), space (RAM and disk space requirements),
and fit (tensor reconstruction accuracy) of the four algorithms, under a
variety of conditions. We find that standard implementations of HO-SVD and HOOI
do not scale up to larger tensors, due to increasing RAM requirements. We
recommend HOOI for tensors that are small enough for the available RAM and MP
for larger tensors.
| Peter D. Turney (National Research Council of Canada) | null | 0711.2023 | null | null |
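A compact numpy sketch of the simplest of the four algorithms compared above,
truncated HO-SVD: one SVD per mode unfolding, then a projection of the tensor
onto the leading left singular vectors to form the core (ranks is a
hypothetical tuple of target mode ranks):

    import numpy as np

    def unfold(T, mode):
        # Matricize T along the given mode: shape (T.shape[mode], -1).
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def hosvd(T, ranks):
        # One truncated SVD per mode gives the factor matrices.
        factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
                   for m, r in enumerate(ranks)]
        core = T
        for mode, U in enumerate(factors):
            # Contract the mode with U^T, then move the axis back into place.
            core = np.moveaxis(
                np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
        return core, factors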
Inverse Sampling for Nonasymptotic Sequential Estimation of Bounded
Variable Means | math.ST cs.LG math.PR stat.TH | In this paper, we consider the nonasymptotic sequential estimation of means
of random variables bounded in between zero and one. We have rigorously
demonstrated that, in order to guarantee prescribed relative precision and
confidence level, it suffices to continue sampling until the sample sum is no
less than a certain bound and then take the average of samples as an estimate
for the mean of the bounded random variable. We have developed an explicit
formula and a bisection search method for the determination of such a bound on
the sample sum, without any knowledge of the bounded variable. Moreover, we have
derived bounds for the distribution of sample size. In the special case of
Bernoulli random variables, we have established analytical and numerical
methods to further reduce the bound on the sample sum and thus improve the
efficiency of sampling. Furthermore, fallacies in existing results are
detected and analyzed.
| Xinjia Chen | null | 0711.2801 | null | null |
Image Classification Using SVMs: One-against-One Vs One-against-All | cs.LG cs.AI cs.CV | Support Vector Machines (SVMs) are a relatively new supervised classification
technique to the land cover mapping community. They have their roots in
Statistical Learning Theory and have gained prominence because they are robust,
accurate, and effective even when using a small training sample. By their
nature SVMs are essentially binary classifiers; however, they can be adapted to
handle the multiple classification tasks common in remote sensing studies. The
two approaches commonly used are the One-Against-One (1A1) and One-Against-All
(1AA) techniques. In this paper, these approaches are evaluated with respect to
their impact and implications for land cover mapping. The main finding from this
research is that whereas the 1AA technique is more predisposed to yielding
unclassified and mixed pixels, the resulting classification accuracy is not
significantly different from the 1A1 approach. It is the authors' conclusion,
therefore, that ultimately the choice of technique adopted boils down to
personal preference and the uniqueness of the dataset at hand.
| Gidudu Anthony, Hulley Gregg and Marwala Tshilidzi | null | 0711.2914 | null | null |
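Both strategies compared above are one-liners in scikit-learn, which makes this
kind of evaluation easy to reproduce on any labeled dataset; the synthetic data
below is only a stand-in for real land cover imagery:

    from sklearn.datasets import make_classification
    from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, n_classes=4, n_informative=6)
    ovo = OneVsOneClassifier(SVC(kernel="rbf")).fit(X, y)   # 1A1
    ova = OneVsRestClassifier(SVC(kernel="rbf")).fit(X, y)  # 1AA
    print(ovo.score(X, y), ova.score(X, y))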
Clustering with Transitive Distance and K-Means Duality | cs.LG | Spectral clustering methods are a popular and powerful technique for
data clustering. These methods need to solve the eigenproblem whose
computational complexity is $O(n^3)$, where $n$ is the number of data samples.
In this paper, a non-eigenproblem based clustering method is proposed to deal
with the clustering problem. Its performance is comparable to the spectral
clustering algorithms but it is more efficient with computational complexity
$O(n^2)$. We show that with a transitive distance and an observed property,
called K-means duality, our algorithm can be used to handle data sets with
complex cluster shapes, multi-scale clusters, and noise. Moreover, no
parameters except the number of clusters need to be set in our algorithm.
| Chunjing Xu, Jianzhuang Liu, Xiaoou Tang | null | 0711.3594 | null | null |
Derivations of Normalized Mutual Information in Binary Classifications | cs.LG cs.IT math.IT | This correspondence studies the basic problem of classifications - how to
evaluate different classifiers. Although the conventional performance indexes,
such as accuracy, are commonly used in classifier selection or evaluation,
information-based criteria, such as mutual information, are becoming popular in
feature/model selections. In this work, we propose to assess classifiers in
terms of normalized mutual information (NI), which is novel and well defined in
a compact range for classifier evaluation. We derive closed-form relations of
normalized mutual information with respect to accuracy, precision, and recall
in binary classifications. By exploring the relations among them, we reveal
that NI is actually a set of nonlinear functions, with a concordant
power-exponent form, to each performance index. The relations can also be
expressed with respect to precision and recall, or to false alarm and hitting
rate (recall).
| Yong Wang, Bao-Gang Hu | null | 0711.3675 | null | null |
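A small sketch of the quantity studied above, computed from a binary confusion
matrix; here NI is taken to be the mutual information between true and
predicted labels normalized by the entropy of the true labels, one common
normalization consistent with the abstract:

    import numpy as np

    def normalized_mutual_information(conf):
        # conf: 2x2 array of counts, rows = true class, cols = prediction.
        P = conf / conf.sum()
        pt, py = P.sum(axis=1), P.sum(axis=0)
        mi = sum(P[i, j] * np.log(P[i, j] / (pt[i] * py[j]))
                 for i in range(2) for j in range(2) if P[i, j] > 0)
        h_true = -sum(p * np.log(p) for p in pt if p > 0)
        return mi / h_true

    # 45 true negatives, 5 false alarms, 10 misses, 40 hits
    print(normalized_mutual_information(np.array([[45., 5.], [10., 40.]])))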
Covariance and PCA for Categorical Variables | cs.LG | Covariances from categorical variables are defined using a regular simplex
expression for categories. The method follows the variance definition by Gini,
and it gives the covariance as a solution of simultaneous equations. The
calculated results give reasonable values for test data. A method of principal
component analysis (RS-PCA) is also proposed using regular simplex expressions,
which allows easy interpretation of the principal components. The proposed
methods are applied to the variable selection problem of the categorical
USCensus1990 data, and they give an appropriate criterion for this variable
selection problem.
| Hirotaka Niitsuma and Takashi Okada | null | 0711.4452 | null | null |
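The spirit of the construction above can be imitated with one-hot coordinates,
whose category vertices form a regular simplex up to scaling; this is a
simplified sketch, not the paper's exact RS-PCA formulation:

    import numpy as np

    def categorical_pca(columns, n_components=2):
        # columns: list of 1-D integer-coded categorical variables.
        blocks = []
        for c in columns:
            onehot = np.eye(int(c.max()) + 1)[c]
            blocks.append(onehot - onehot.mean(axis=0))  # center each block
        X = np.hstack(blocks)
        cov = X.T @ X / (len(X) - 1)
        vals, vecs = np.linalg.eigh(cov)                 # ascending order
        top = vecs[:, ::-1][:, :n_components]            # leading eigenvectors
        return X @ top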
On the Relationship between the Posterior and Optimal Similarity | cs.LG | For a classification problem described by the joint density $P(\omega,x)$,
models of $P(\omega=\omega'|x,x')$ (the ``Bayesian similarity measure'') have
been shown to be an optimal similarity measure for nearest neighbor
classification. This paper demonstrates several additional properties
of that conditional distribution. The paper first shows that we can
reconstruct, up to class labels, the class posterior distribution $P(\omega|x)$
given $P(\omega=\omega'|x,x')$, gives a procedure for recovering the class
labels, and gives an asymptotically Bayes-optimal classification procedure. It
also shows, given such an optimal similarity measure, how to construct a
classifier that outperforms the nearest neighbor classifier and achieves
Bayes-optimal classification rates. The paper then analyzes Bayesian similarity
in a framework where a classifier faces a number of related classification
tasks (multitask learning) and illustrates that reconstruction of the class
posterior distribution is not possible in general. Finally, the paper
identifies a distinct class of classification problems using
$P(\omega=\omega'|x,x')$ and shows that using $P(\omega=\omega'|x,x')$ to
solve those problems is the Bayes-optimal solution.
| Thomas M. Breuel | null | 0712.0130 | null | null |
A Reactive Tabu Search Algorithm for Stimuli Generation in
Psycholinguistics | cs.AI cs.CC cs.DM cs.LG | The generation of meaningless "words" matching certain statistical and/or
linguistic criteria is frequently needed for experimental purposes in
Psycholinguistics. Such stimuli receive the name of pseudowords or nonwords in
the Cognitive Neuroscience literature. The process for building nonwords
sometimes has to be based on linguistic units such as syllables or morphemes,
resulting in a numerical explosion of combinations when the size of the
nonwords is increased. In this paper, a reactive tabu search scheme is proposed
to generate nonwords of variable size. The approach builds pseudowords by
using a modified metaheuristic algorithm based on a local search procedure
enhanced by a feedback-based scheme. Experimental results show that the new
algorithm is a practical and effective tool for nonword generation.
| Alejandro Chinea Manrique De Lara | null | 0712.0451 | null | null |
Equations of States in Singular Statistical Estimation | cs.LG | Learning machines which have hierarchical structures or hidden variables are
singular statistical models because they are nonidentifiable and their Fisher
information matrices are singular. In singular statistical models, neither the
Bayes a posteriori distribution converges to the normal distribution nor the
maximum likelihood estimator satisfies asymptotic normality. This is the main
reason why it has been difficult to predict their generalization performances
from trained states. In this paper, we study four errors, (1) Bayes
generalization error, (2) Bayes training error, (3) Gibbs generalization error,
and (4) Gibbs training error, and prove that there are mathematical relations
among these errors. The formulas proved in this paper are equations of states
in statistical estimation because they hold for any true distribution, any
parametric model, and any a priori distribution. Also we show that Bayes and
Gibbs generalization errors are estimated by Bayes and Gibbs training errors,
and propose widely applicable information criteria which can be applied to both
regular and singular statistical models.
| Sumio Watanabe | null | 0712.0653 | null | null |
A Universal Kernel for Learning Regular Languages | cs.LG cs.DM | We give a universal kernel that renders all the regular languages linearly
separable. We are not able to compute this kernel efficiently and conjecture
that it is intractable, but we do have an efficient $\epsilon$-approximation.
| Leonid (Aryeh) Kontorovich | null | 0712.0840 | null | null |
Automatic Pattern Classification by Unsupervised Learning Using
Dimensionality Reduction of Data with Mirroring Neural Networks | cs.LG cs.AI cs.NE | This paper proposes an unsupervised learning technique by using Multi-layer
Mirroring Neural Network and Forgy's clustering algorithm. Multi-layer
Mirroring Neural Network is a neural network that can be trained with
generalized data inputs (different categories of image patterns) to perform
non-linear dimensionality reduction and the resultant low-dimensional code is
used for unsupervised pattern classification using Forgy's algorithm. By
adapting the non-linear activation function (modified sigmoidal function) and
initializing the weights and bias terms to small random values, mirroring of
the input pattern is initiated. In training, the weights and bias terms are
changed in such a way that the input presented is reproduced at the output by
back propagating the error. The mirroring neural network is capable of reducing
the input vector to a great degree (approximately 1/30th the original size) and
also able to reconstruct the input pattern at the output layer from this
reduced code units. The feature set (output of central hidden layer) extracted
from this network is fed to Forgy's algorithm, which classifies input data
patterns into distinguishable classes. In the implementation of Forgy's
algorithm, initial seed points are selected in such a way that they are distant
enough to be perfectly grouped into different categories. Thus a new method of
unsupervised learning is formulated and demonstrated in this paper. This method
gave impressive results when applied to classification of different image
patterns.
| Dasika Ratna Deepthi, G.R.Aditya Krishna and K. Eswaran | null | 0712.0938 | null | null |
Reconstruction of Markov Random Fields from Samples: Some Easy
Observations and Algorithms | cs.CC cs.LG | Markov random fields are used to model high dimensional distributions in a
number of applied areas. Much recent interest has been devoted to the
reconstruction of the dependency structure from independent samples from the
Markov random fields. We analyze a simple algorithm for reconstructing the
underlying graph defining a Markov random field on $n$ nodes and maximum degree
$d$ given observations. We show that under mild non-degeneracy conditions it
reconstructs the generating graph with high probability using $\Theta(d
\epsilon^{-2}\delta^{-4} \log n)$ samples where $\epsilon,\delta$ depend on the
local interactions. For most local interactions, $\epsilon,\delta$ are of order
$\exp(-O(d))$.
Our results are optimal as a function of $n$ up to a multiplicative constant
depending on $d$ and the strength of the local interactions. Our results seem
to be the first results for general models that guarantee that {\em the}
generating model is reconstructed. Furthermore, we provide explicit $O(n^{d+2}
\epsilon^{-2}\delta^{-4} \log n)$ running time bound. In cases where the
measure on the graph has correlation decay, the running time is $O(n^2 \log n)$
for all fixed $d$. We also discuss the effect of observing noisy samples and
show that as long as the noise level is low, our algorithm is effective. On the
other hand, we construct an example where large noise implies
non-identifiability even for generic noise and interactions. Finally, we
briefly show that in some simple cases, models with hidden nodes can also be
recovered.
| Guy Bresler, Elchanan Mossel, Allan Sly | null | 0712.1402 | null | null |
A New Theoretic Foundation for Cross-Layer Optimization | cs.NI cs.LG | Cross-layer optimization solutions have been proposed in recent years to
improve the performance of network users operating in a time-varying,
error-prone wireless environment. However, these solutions often rely on ad-hoc
optimization approaches, which ignore the different environmental dynamics
experienced at various layers by a user and violate the layered network
architecture of the protocol stack by requiring layers to provide access to
their internal protocol parameters to other layers. This paper presents a new
theoretic foundation for cross-layer optimization, which allows each layer to
make autonomous decisions individually, while maximizing the utility of the
wireless user by optimally determining what information needs to be exchanged
among layers. Hence, this cross-layer framework does not change the current
layered architecture. Specifically, because the wireless user interacts with
the environment at various layers of the protocol stack, the cross-layer
optimization problem is formulated as a layered Markov decision process (MDP)
in which each layer adapts its own protocol parameters and exchanges
information (messages) with other layers in order to cooperatively maximize the
performance of the wireless user. The message exchange mechanism for
determining the optimal cross-layer transmission strategies has been designed
for both off-line optimization and on-line dynamic adaptation. We also show
that many existing cross-layer optimization algorithms can be formulated as
simplified, sub-optimal, versions of our layered MDP framework.
| Fangwen Fu and Mihaela van der Schaar | null | 0712.2497 | null | null |
Density estimation in linear time | cs.LG | We consider the problem of choosing a density estimate from a set of
distributions F, minimizing the L1-distance to an unknown distribution
(Devroye, Lugosi 2001). Devroye and Lugosi analyze two algorithms for the
problem: Scheffe tournament winner and minimum distance estimate. The Scheffe
tournament estimate requires fewer computations than the minimum distance
estimate, but has strictly weaker guarantees than the latter.
We focus on the computational aspect of density estimation. We present two
algorithms, both with the same guarantee as the minimum distance estimate. The
first one, a modification of the minimum distance estimate, uses the same
number (quadratic in |F|) of computations as the Scheffe tournament. The second
one, called ``efficient minimum loss-weight estimate,'' uses only a linear
number of computations, assuming that F is preprocessed.
We also give examples showing that the guarantees of the algorithms cannot be
improved and explore randomized algorithms for density estimation.
| Satyaki Mahalanabis, Daniel Stefankovic | null | 0712.2869 | null | null |
Graph kernels between point clouds | cs.LG | Point clouds are sets of points in two or three dimensions. Most kernel
methods for learning on sets of points have not yet dealt with the specific
geometrical invariances and practical constraints associated with point clouds
in computer vision and graphics. In this paper, we present extensions of graph
kernels for point clouds, which make it possible to use kernel methods for such objects
as shapes, line drawings, or any three-dimensional point clouds. In order to
design rich and numerically efficient kernels with as few free parameters as
possible, we use kernels between covariance matrices and their factorizations
on graphical models. We derive polynomial time dynamic programming recursions
and present applications to recognition of handwritten digits and Chinese
characters from few training examples.
| Francis Bach (WILLOW Project - Inria/Ens) | null | 0712.3402 | null | null |
Improving the Performance of PieceWise Linear Separation Incremental
Algorithms for Practical Hardware Implementations | cs.NE cs.AI cs.LG | In this paper we shall review the common problems associated with Piecewise
Linear Separation incremental algorithms. This kind of neural model yields poor
performance when dealing with some classification problems, due to the
evolving schemes used to construct the resulting networks. So as to avoid this
undesirable behavior we shall propose a modification criterion. It is based
upon the definition of a function which will provide information about the
quality of the network growth process during the learning phase. This function
is evaluated periodically as the network structure evolves, and will permit, as
we shall show through exhaustive benchmarks, to considerably improve the
performance (measured in terms of network complexity and generalization
capabilities) offered by the networks generated by these incremental models.
| Alejandro Chinea Manrique De Lara, Juan Manuel Moreno, Arostegui Jordi
Madrenas, Joan Cabestany | null | 0712.3654 | null | null |
Improved Collaborative Filtering Algorithm via Information
Transformation | cs.LG cs.CY | In this paper, we propose a spreading activation approach for collaborative
filtering (SA-CF). By using the opinion spreading process, the similarity
between any users can be obtained. The algorithm has remarkably higher accuracy
than the standard collaborative filtering (CF) using Pearson correlation.
Furthermore, we introduce a free parameter $\beta$ to regulate the
contributions of objects to user-user correlations. The numerical results
indicate that decreasing the influence of popular objects can further improve
the algorithmic accuracy and personality. We argue that a better algorithm
should simultaneously require less computation and generate higher accuracy.
Accordingly, we further propose an algorithm involving only the top-$N$ similar
neighbors for each target user, which has both less computational complexity
and higher algorithmic accuracy.
| Jian-Guo Liu, Bing-Hong Wang, Qiang Guo | 10.1142/S0129183109013613 | 0712.3807 | null | null |
Online EM Algorithm for Latent Data Models | stat.CO cs.LG | In this contribution, we propose a generic online (also sometimes called
adaptive or recursive) version of the Expectation-Maximisation (EM) algorithm
applicable to latent variable models of independent observations. Compared to
the algorithm of Titterington (1984), this approach is more directly connected
to the usual EM algorithm and does not rely on integration with respect to the
complete data distribution. The resulting algorithm is usually simpler and is
shown to achieve convergence to the stationary points of the Kullback-Leibler
divergence between the marginal distribution of the observation and the model
distribution at the optimal rate, i.e., that of the maximum likelihood
estimator. In addition, the proposed approach is also suitable for conditional
(or regression) models, as illustrated in the case of the mixture of linear
regressions model.
| Olivier Capp\'e (LTCI), Eric Moulines (LTCI) | 10.1111/j.1467-9868.2009.00698.x | 0712.4273 | null | null |
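A sketch of the recursion above for a one-dimensional Gaussian mixture: each
new observation contributes its expected sufficient statistics, which are
blended into a running average with step sizes gamma_t, after which the
parameters are re-read from the statistics. The step-size exponent and the
initialization are illustrative choices, not prescriptions from the paper:

    import numpy as np

    def online_em_gmm(stream, K=2, seed=0):
        rng = np.random.default_rng(seed)
        w, mu, var = np.ones(K) / K, rng.normal(size=K), np.ones(K)
        s0, s1, s2 = w.copy(), w * mu, w * (var + mu ** 2)
        for t, y in enumerate(stream, start=1):
            gamma = (t + 1) ** -0.6        # step size, exponent in (1/2, 1]
            # E-step on one point: posterior responsibilities.
            logp = np.log(w) - 0.5 * (np.log(var) + (y - mu) ** 2 / var)
            r = np.exp(logp - logp.max())
            r /= r.sum()
            # Stochastic approximation of the sufficient statistics.
            s0 = (1 - gamma) * s0 + gamma * r
            s1 = (1 - gamma) * s1 + gamma * r * y
            s2 = (1 - gamma) * s2 + gamma * r * y * y
            # M-step: map the statistics back to parameters.
            w, mu = s0 / s0.sum(), s1 / s0
            var = np.maximum(s2 / s0 - mu ** 2, 1e-6)
        return w, mu, var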
Staring at Economic Aggregators through Information Lenses | cs.IT cs.LG math.IT math.OC | It is hard to exaggerate the role of economic aggregators -- functions that
summarize numerous and / or heterogeneous data -- in economic models since the
early XX$^{th}$ century. In many cases, as witnessed by the pioneering works of
Cobb and Douglas, these functions were information quantities tailored to
economic theories, i.e. they were built to fit economic phenomena. In this
paper, we look at these functions from the complementary side: information. We
use a recent toolbox built on top of a vast class of distortions coined by
Bregman, whose application field rivals metrics' in various subfields of
mathematics. This toolbox makes it possible to find the quality of an
aggregator (for consumptions, prices, labor, capital, wages, etc.), from the
standpoint of the information it carries. We prove a rather striking result.
From the informational standpoint, well-known economic aggregators do belong
to the \textit{optimal} set. As common economic assumptions enter the analysis,
this large set shrinks, and it essentially ends up \textit{exactly fitting}
either CES, or Cobb-Douglas, or both. To summarize, in the relevant economic
contexts, one could not have crafted a better aggregator from the
information standpoint. We also discuss global economic behaviors of optimal
information aggregators in general, and present a brief panorama of the links
between economic and information aggregators.
Keywords: Economic Aggregators, CES, Cobb-Douglas, Bregman divergences
| Richard Nock, Nicolas Sanz, Fred Celimene, Frank Nielsen | null | 0801.0390 | null | null |
Online variants of the cross-entropy method | cs.LG | The cross-entropy method is a simple but efficient method for global
optimization. In this paper we provide two online variants of the basic CEM,
together with a proof of convergence.
| Istvan Szita and Andras Lorincz | null | 0801.1988 | null | null |
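For context, the basic batch CEM that the paper turns into an online procedure
looks roughly as follows: sample candidates from a Gaussian, keep an elite
fraction, and refit the sampling distribution to the elites (all
hyperparameters below are illustrative):

    import numpy as np

    def cem_maximize(f, dim, iters=100, pop=100, n_elite=10, seed=0):
        rng = np.random.default_rng(seed)
        mu, sigma = np.zeros(dim), np.ones(dim)
        for _ in range(iters):
            X = mu + sigma * rng.standard_normal((pop, dim))
            scores = np.array([f(x) for x in X])
            elites = X[np.argsort(scores)[-n_elite:]]
            mu = elites.mean(axis=0)
            sigma = elites.std(axis=0) + 1e-8  # guard against collapse
        return mu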
Factored Value Iteration Converges | cs.AI cs.LG | In this paper we propose a novel algorithm, factored value iteration (FVI),
for the approximate solution of factored Markov decision processes (fMDPs). The
traditional approximate value iteration algorithm is modified in two ways. For
one, the least-squares projection operator is modified so that it does not
increase max-norm, and thus preserves convergence. The other modification is
that we uniformly sample polynomially many samples from the (exponentially
large) state space. This way, the complexity of our algorithm becomes
polynomial in the size of the fMDP description length. We prove that the
algorithm is convergent. We also derive an upper bound on the difference
between our approximate solution and the optimal one, and also on the error
introduced by sampling. We analyze various projection operators with respect to
their computation complexity and their convergence when combined with
approximate value iteration.
| Istvan Szita and Andras Lorincz | null | 0801.2069 | null | null |
The optimal assignment kernel is not positive definite | cs.LG | We prove that the optimal assignment kernel, proposed recently as an attempt
to embed labeled graphs and more generally tuples of basic data to a Hilbert
space, is in fact not always positive definite.
| Jean-Philippe Vert (CB) | null | 0801.4061 | null | null |
Information Width | cs.DM cs.IT cs.LG math.IT | Kolmogorov argued that the concept of information exists also in problems
with no underlying stochastic model (unlike Shannon's representation of
information); for instance, the information contained in an algorithm or in the
genome. He introduced a combinatorial notion of entropy and information $I(x:y)$
conveyed by a binary string $x$ about the unknown value of a variable $y$.
The current paper poses the following questions: what is the relationship
between the information conveyed by $x$ about $y$ and the description
complexity of $x$? Is there a notion of cost of information? Are there limits
on how efficiently $x$ conveys information?
To answer these questions, Kolmogorov's definition is extended and a new
concept termed {\em information width}, which is similar to $n$-widths in
approximation theory, is introduced. The information of any input source, e.g.,
sample-based, general side-information or a hybrid of both can be evaluated by
a single common formula. An application to the space of binary functions is
considered.
| Joel Ratsaby | null | 0801.4790 | null | null |
On the Complexity of Binary Samples | cs.DM cs.AI cs.LG | Consider a class $\mathcal{H}$ of binary functions $h: X\to\{-1, +1\}$ on a finite
interval $X=[0, B]\subset \mathbb{R}$. Define the {\em sample width} of $h$ on a
finite subset (a sample) $S\subset X$ as $w_S(h) \equiv \min_{x\in S}
|w_h(x)|$, where $w_h(x) = h(x) \max\{a\geq 0: h(z)=h(x), x-a\leq z\leq
x+a\}$. Let $\mathbb{S}_\ell$ be the space of all samples in $X$ of cardinality
$\ell$ and consider sets of wide samples, i.e., {\em hypersets} which are
defined as $A_{\beta, h} = \{S\in \mathbb{S}_\ell: w_{S}(h) \geq \beta\}$.
Through an application of the Sauer-Shelah result on the density of sets, an
upper estimate is obtained on the growth function (or trace) of the class
$\{A_{\beta, h}: h\in\mathcal{H}\}$, $\beta>0$, i.e., on the number of possible
dichotomies obtained by intersecting all hypersets with a fixed collection of
samples $S\in\mathbb{S}_\ell$ of cardinality $m$. The estimate is
$2\sum_{i=0}^{2\lfloor B/(2\beta)\rfloor}{m-\ell\choose i}$.
| Joel Ratsaby | null | 0801.4794 | null | null |
New Estimation Procedures for PLS Path Modelling | cs.LG | Given R groups of numerical variables X1, ... XR, we assume that each group
is the result of one underlying latent variable, and that all latent variables
are bound together through a linear equation system. Moreover, we assume that
some explanatory latent variables may interact pairwise in one or more
equations. We basically consider PLS Path Modelling's algorithm to estimate
both latent variables and the model's coefficients. New "external" estimation
schemes are proposed that draw latent variables towards strong group structures
in a more flexible way. New "internal" estimation schemes are proposed to
enable PLSPM to make good use of variable group complementarity and to deal
with interactions. Application examples are given.
| Xavier Bry (I3M) | null | 0802.1002 | null | null |
Learning Balanced Mixtures of Discrete Distributions with Small Sample | cs.LG stat.ML | We study the problem of partitioning a small sample of $n$ individuals from a
mixture of $k$ product distributions over a Boolean cube $\{0, 1\}^K$ according
to their distributions. Each distribution is described by a vector of allele
frequencies in $\mathbb{R}^K$. Given two distributions, we use $\gamma$ to denote the
average $\ell_2^2$ distance in frequencies across $K$ dimensions, which
measures the statistical divergence between them. We study the case assuming
that bits are independently distributed across $K$ dimensions. This work
demonstrates that, for a balanced input instance for $k = 2$, a certain
graph-based optimization function returns the correct partition with high
probability, where a weighted graph $G$ is formed over $n$ individuals, whose
pairwise Hamming distances between their corresponding bit vectors define the
edge weights, so long as $K = \Omega(\ln n/\gamma)$ and $Kn = \tilde\Omega(\ln
n/\gamma^2)$. The function computes a maximum-weight balanced cut of $G$, where
the weight of a cut is the sum of the weights across all edges in the cut. This
result demonstrates a nice property in the high-dimensional feature space: one
can trade off the number of features that are required with the size of the
sample to accomplish certain tasks like clustering.
| Shuheng Zhou | null | 0802.1244 | null | null |
Bayesian Nonlinear Principal Component Analysis Using Random Fields | cs.CV cs.LG | We propose a novel model for nonlinear dimension reduction motivated by the
probabilistic formulation of principal component analysis. Nonlinearity is
achieved by specifying different transformation matrices at different locations
of the latent space and smoothing the transformation using a Markov random
field type prior. The computation is made feasible by the recent advances in
sampling from von Mises-Fisher distributions.
| Heng Lian | null | 0802.1258 | null | null |
A New Approach to Collaborative Filtering: Operator Estimation with
Spectral Regularization | cs.LG | We present a general approach for collaborative filtering (CF) using spectral
regularization to learn linear operators from "users" to the "objects" they
rate. Recent low-rank type matrix completion approaches to CF are shown to be
special cases. However, unlike existing regularization based CF methods, our
approach can be used to also incorporate information such as attributes of the
users or the objects -- a limitation of existing regularization based CF
methods. We then provide novel representer theorems that we use to develop new
estimation methods. We provide learning algorithms based on low-rank
decompositions, and test them on a standard CF dataset. The experiments
indicate the advantages of generalizing the existing regularization based CF
methods to incorporate related information about users and objects. Finally, we
show that certain multi-task learning methods can be also seen as special cases
of our proposed approach.
| Jacob Abernethy, Francis Bach (INRIA Rocquencourt), Theodoros
Evgeniou, Jean-Philippe Vert (CB) | null | 0802.1430 | null | null |
Combining Expert Advice Efficiently | cs.LG cs.DS cs.IT math.IT | We show how models for prediction with expert advice can be defined concisely
and clearly using hidden Markov models (HMMs); standard HMM algorithms can then
be used to efficiently calculate, among other things, how the expert
predictions should be weighted according to the model. We cast many existing
models as HMMs and recover the best known running times in each case. We also
describe two new models: the switch distribution, which was recently developed
to improve Bayesian/Minimum Description Length model selection, and a new
generalisation of the fixed share algorithm based on run-length coding. We give
loss bounds for all models and shed new light on their relationships.
| Wouter Koolen and Steven de Rooij | null | 0802.2015 | null | null |
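The simplest member of the family above, the exponentially weighted average
forecaster with static experts, fits in a few lines; the paper's HMM view
generalizes exactly this weight update to switching experts. Square loss and
the learning rate eta are illustrative choices:

    import numpy as np

    def exponential_weights(expert_preds, outcomes, eta=0.5):
        # expert_preds: (T, N) matrix of expert predictions;
        # outcomes: length-T array of realized values.
        T, N = expert_preds.shape
        w = np.ones(N) / N
        forecasts = np.empty(T)
        for t in range(T):
            forecasts[t] = w @ expert_preds[t]   # weighted-average prediction
            losses = (expert_preds[t] - outcomes[t]) ** 2
            w *= np.exp(-eta * losses)           # multiplicative update
            w /= w.sum()
        return forecasts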
A Radar-Shaped Statistic for Testing and Visualizing Uniformity
Properties in Computer Experiments | cs.LG math.ST stat.TH | In the study of computer codes, filling space as uniformly as possible is
important to describe the complexity of the investigated phenomenon. However,
this property is not conserved by reducing the dimension. Some numeric
experiment designs are conceived in this sense as Latin hypercubes or
orthogonal arrays, but they consider only the projections onto the axes or the
coordinate planes. In this article we introduce a statistic which allows
studying the good distribution of points according to all 1-dimensional
projections. By angularly scanning the domain, we obtain a radar type
representation, allowing the uniformity defects of a design to be identified
with respect to its projections onto straight lines. The advantages of this new
tool are demonstrated on usual examples of space-filling designs (SFD) and a
global statistic independent of the angle of rotation is studied.
| Jessica Franco, Laurent Carraro, Olivier Roustant, Astrid Jourdan
(LMA-PAU) | null | 0802.2158 | null | null |
Compressed Counting | cs.IT cs.CC cs.DM cs.DS cs.LG math.IT | Counting is among the most fundamental operations in computing. For example,
counting the pth frequency moment has been a very active area of research in
theoretical computer science, databases, and data mining. When p=1, the task
(i.e., counting the sum) can be accomplished using a simple counter.
Compressed Counting (CC) is proposed for efficiently computing the pth
frequency moment of a data stream signal A_t, where 0<p<=2. CC is applicable if
the streaming data follow the Turnstile model, with the restriction that at the
time t for the evaluation, A_t[i]>= 0, which includes the strict Turnstile
model as a special case. For natural data streams encountered in practice, this
restriction is minor.
The underlying technique for CC is what we call skewed stable random
projections, which captures the intuition that, when p=1, a simple counter
suffices, and when p = 1\pm\Delta with small \Delta, the sample complexity of a
counter system should be low (continuously as a function of \Delta). We show
that at small \Delta the sample complexity (number of projections) is
k = O(1/\epsilon) instead of O(1/\epsilon^2).
Compressed Counting can serve as a basic building block for other tasks in
statistics and computing, for example, estimating entropies of data streams,
or parameter estimation using the method of moments and maximum likelihood.
Finally, another contribution is an algorithm for approximating the
logarithmic norm, \sum_{i=1}^D\log A_t[i], and logarithmic distance. The
logarithmic distance is useful in machine learning practice with heavy-tailed
data.
| Ping Li | null | 0802.2305 | null | null |
Sign Language Tutoring Tool | cs.LG cs.HC | In this project, we have developed a sign language tutor that lets users
learn isolated signs by watching recorded videos and by trying the same signs.
The system records the user's video and analyses it. If the sign is recognized,
both verbal and animated feedback is given to the user. The system is able to
recognize complex signs that involve both hand gestures and head movements and
expressions. Our performance tests yield a 99% recognition rate on signs
involving only manual gestures and 85% recognition rate on signs that involve
both manual and non-manual components, such as head movement and facial
expressions.
| Oya Aran, Ismail Ari, Alexandre Benoit (GIPSA-lab), Ana Huerta
Carrillo, Fran\c{c}ois-Xavier Fanard (TELE), Pavel Campr, Lale Akarun, Alice
Caplier (GIPSA-lab), Michele Rombaut (GIPSA-lab), Bulent Sankur | null | 0802.2428 | null | null |
Pure Exploration for Multi-Armed Bandit Problems | math.ST cs.LG stat.TH | We consider the framework of stochastic multi-armed bandit problems and study
the possibilities and limitations of forecasters that perform an on-line
exploration of the arms. These forecasters are assessed in terms of their
simple regret, a regret notion that captures the fact that exploration is only
constrained by the number of available rounds (not necessarily known in
advance), in contrast to the case when the cumulative regret is considered and
when exploitation needs to be performed at the same time. We believe that this
performance criterion is suited to situations when the cost of pulling an arm
is expressed in terms of resources rather than rewards. We discuss the links
between the simple and the cumulative regret. One of the main results in the
case of a finite number of arms is a general lower bound on the simple regret
of a forecaster in terms of its cumulative regret: the smaller the latter, the
larger the former. Keeping this result in mind, we then exhibit upper bounds on
the simple regret of some forecasters. The paper ends with a study devoted to
continuous-armed bandit problems; we show that the simple regret can be
minimized with respect to a family of probability distributions if and only if
the cumulative regret can be minimized for it. Based on this equivalence, we
are able to prove that the separable metric spaces are exactly the metric
spaces on which these regrets can be minimized with respect to the family of
all probability distributions with continuous mean-payoff functions.
| S\'ebastien Bubeck (INRIA Futurs), R\'emi Munos (INRIA Futurs), Gilles
Stoltz (DMA, GREGH) | null | 0802.2655 | null | null |
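A tiny simulation of the setting above: a pure-exploration forecaster that
allocates pulls uniformly and then recommends the empirically best arm; its
simple regret is the gap between the best mean and the mean of the recommended
arm. Gaussian rewards are an illustrative choice:

    import numpy as np

    def simple_regret_uniform(means, rounds, seed=0):
        rng = np.random.default_rng(seed)
        K = len(means)
        sums, counts = np.zeros(K), np.zeros(K)
        for t in range(rounds):
            arm = t % K                          # round-robin exploration
            sums[arm] += rng.normal(means[arm], 1.0)
            counts[arm] += 1
        recommended = int(np.argmax(sums / counts))
        return max(means) - means[recommended]   # simple regret

    print(simple_regret_uniform([0.2, 0.5, 0.45], rounds=3000))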
Knowledge Technologies | cs.CY cs.AI cs.LG cs.SE | Several technologies are emerging that provide new ways to capture, store,
present and use knowledge. This book is the first to provide a comprehensive
introduction to five of the most important of these technologies: Knowledge
Engineering, Knowledge Based Engineering, Knowledge Webs, Ontologies and
Semantic Webs. For each of these, answers are given to a number of key
questions (What is it? How does it operate? How is a system developed? What can
it be used for? What tools are available? What are the main issues?). The book
is aimed at students, researchers and practitioners interested in Knowledge
Management, Artificial Intelligence, Design Engineering and Web Technologies.
During the 1990s, Nick worked at the University of Nottingham on the
application of AI techniques to knowledge management and on various knowledge
acquisition projects to develop expert systems for military applications. In
1999, he joined Epistemics where he worked on numerous knowledge projects and
helped establish knowledge management programmes at large organisations in the
engineering, technology and legal sectors. He is author of the book "Knowledge
Acquisition in Practice", which describes a step-by-step procedure for
acquiring and implementing expertise. He maintains strong links with leading
research organisations working on knowledge technologies, such as
knowledge-based engineering, ontologies and semantic technologies.
| Nick Milton | null | 0802.3789 | null | null |
What Can We Learn Privately? | cs.LG cs.CC cs.CR cs.DB | Learning problems form an important category of computational tasks that
generalizes many of the computations researchers apply to large real-life data
sets. We ask: what concept classes can be learned privately, namely, by an
algorithm whose output does not depend too heavily on any one input or specific
training example? More precisely, we investigate learning algorithms that
satisfy differential privacy, a notion that provides strong confidentiality
guarantees in contexts where aggregate information is released about a database
containing sensitive information about individuals. We demonstrate that,
ignoring computational constraints, it is possible to privately agnostically
learn any concept class using a sample size approximately logarithmic in the
cardinality of the concept class. Therefore, almost anything learnable is
learnable privately: specifically, if a concept class is learnable by a
(non-private) algorithm with polynomial sample complexity and output size, then
it can be learned privately using a polynomial number of samples. We also
present a computationally efficient private PAC learner for the class of parity
functions. Local (or randomized response) algorithms are a practical class of
private algorithms that have received extensive investigation. We provide a
precise characterization of local private learning algorithms. We show that a
concept class is learnable by a local algorithm if and only if it is learnable
in the statistical query (SQ) model. Finally, we present a separation between
the power of interactive and noninteractive local learning algorithms.
| Shiva Prasad Kasiviswanathan, Homin K. Lee, Kobbi Nissim, Sofya
Raskhodnikova, and Adam Smith | null | 0803.0924 | null | null |
Privacy Preserving ID3 over Horizontally, Vertically and Grid
Partitioned Data | cs.DB cs.LG | We consider privacy preserving decision tree induction via ID3 in the case
where the training data is horizontally or vertically distributed. Furthermore,
we consider the same problem in the case where the data is both horizontally
and vertically distributed, a situation we refer to as grid partitioned data.
We give an algorithm for privacy preserving ID3 over horizontally partitioned
data involving more than two parties. For grid partitioned data, we discuss two
different evaluation methods for preserving privacy ID3, namely, first merging
horizontally and developing vertically or first merging vertically and next
developing horizontally. Next to introducing privacy preserving data mining
over grid-partitioned data, the main contribution of this paper is that we
show, by means of a complexity analysis, that the former evaluation method is
the more efficient.
| Bart Kuijpers, Vanessa Lemmens, Bart Moelans and Karl Tuyls | null | 0803.1555 | null | null |
Figuring out Actors in Text Streams: Using Collocations to establish
Incremental Mind-maps | cs.CL cs.LG | The recognition, involvement, and description of main actors influence the
story line of the whole text. This is all the more important because the text
per se represents a flow of words and expressions that, once read, is lost.
Understanding a text, and in particular how an actor behaves, is therefore a
major concern: just as human beings store a given input in short-term memory
while associating diverse aspects and actors with incidents, the approach
presented here is a virtual architecture in which collocations are taken as
the associative completion of the actors' acting. Once collocations are
discovered, they are managed in separate memory blocks broken down by actor.
As with human beings, these memory blocks correspond to associative mind-maps.
We then present several priority functions that represent the current temporal
situation inside a mind-map, enabling the user to reconstruct recent events
from the discovered temporal results.
| T. Rothenberger, S. Oez, E. Tahirovic, C. Schommer | null | 0803.2856 | null | null |
Robustness and Regularization of Support Vector Machines | cs.LG cs.AI | We consider regularized support vector machines (SVMs) and show that they are
precisely equivalent to a new robust optimization formulation. We show that
this equivalence of robust optimization and regularization has implications for
both algorithms, and analysis. In terms of algorithms, the equivalence suggests
more general SVM-like algorithms for classification that explicitly build in
protection to noise, and at the same time control overfitting. On the analysis
front, the equivalence of robustness and regularization provides a robust
optimization interpretation for the success of regularized SVMs. We use
this new robustness interpretation of SVMs to give a new proof of consistency
of (kernelized) SVMs, thus establishing robustness as the reason regularized
SVMs generalize well.
| Huan Xu, Constantine Caramanis and Shie Mannor | null | 0803.3490 | null | null |
Recorded Step Directional Mutation for Faster Convergence | cs.NE cs.LG | Two meta-evolutionary optimization strategies described in this paper
accelerate the convergence of evolutionary programming algorithms while still
retaining much of their ability to deal with multi-modal problems. The
strategies, called directional mutation and recorded step in this paper, can
operate independently but together they greatly enhance the ability of
evolutionary programming algorithms to deal with fitness landscapes
characterized by long narrow valleys. The directional mutation aspect of this
combined method uses correlated meta-mutation but does not introduce a full
covariance matrix. These new methods are thus much more economical in terms of
storage for problems with high dimensionality. Additionally, directional
mutation is rotationally invariant which is a substantial advantage over
self-adaptive methods which use a single variance per coordinate for problems
where the natural orientation of the problem is not oriented along the axes.
| Ted Dunning | null | 0803.3838 | null | null |
Support Vector Machine Classification with Indefinite Kernels | cs.LG cs.AI | We propose a method for support vector machine classification using
indefinite kernels. Instead of directly minimizing or stabilizing a nonconvex
loss function, our algorithm simultaneously computes support vectors and a
proxy kernel matrix used in forming the loss. This can be interpreted as a
penalized kernel learning problem where indefinite kernel matrices are treated
as noisy observations of a true Mercer kernel. Our formulation keeps the
problem convex and relatively large problems can be solved efficiently using
the projected gradient or analytic center cutting plane methods. We compare the
performance of our technique with other methods on several classic data sets.
| Ronny Luss, Alexandre d'Aspremont | null | 0804.0188 | null | null |
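A common baseline against which proxy-kernel methods like the one above are
compared is spectrum clip: project the indefinite kernel matrix onto the PSD
cone and train a standard SVM on the result. This is the simple alternative,
not the paper's algorithm; K_train and y_train are hypothetical:

    import numpy as np
    from sklearn.svm import SVC

    def clip_spectrum(K):
        # Nearest PSD matrix in Frobenius norm: zero negative eigenvalues.
        vals, vecs = np.linalg.eigh(K)
        return (vecs * np.maximum(vals, 0.0)) @ vecs.T

    # svc = SVC(kernel="precomputed").fit(clip_spectrum(K_train), y_train)
    # (test kernels must be transformed consistently before prediction)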
A Unified Semi-Supervised Dimensionality Reduction Framework for
Manifold Learning | cs.LG cs.AI | We present a general framework of semi-supervised dimensionality reduction
for manifold learning which naturally generalizes existing supervised and
unsupervised learning frameworks which apply the spectral decomposition.
Algorithms derived under our framework are able to employ both labeled and
unlabeled examples and are able to handle complex problems where data form
separate clusters of manifolds. Our framework offers simple views, explains
relationships among existing frameworks and provides further extensions which
can improve existing algorithms. Furthermore, a new semi-supervised
kernelization framework called ``KPCA trick'' is proposed to handle non-linear
problems.
| Ratthachat Chatpatanasiri and Boonserm Kijsirikul | null | 0804.0924 | null | null |
Bolasso: model consistent Lasso estimation through the bootstrap | cs.LG math.ST stat.ML stat.TH | We consider the least-square linear regression problem with regularization by
the l1-norm, a problem usually referred to as the Lasso. In this paper, we
present a detailed asymptotic analysis of model consistency of the Lasso. For
various decays of the regularization parameter, we compute asymptotic
equivalents of the probability of correct model selection (i.e., variable
selection). For a specific rate decay, we show that the Lasso selects all the
variables that should enter the model with probability tending to one
exponentially fast, while it selects all other variables with strictly positive
probability. We show that this property implies that if we run the Lasso for
several bootstrapped replications of a given sample, then intersecting the
supports of the Lasso bootstrap estimates leads to consistent model selection.
This novel variable selection algorithm, referred to as the Bolasso, is
compared favorably to other linear regression methods on synthetic data and
datasets from the UCI machine learning repository.
| Francis Bach (INRIA Rocquencourt) | null | 0804.1302 | null | null |
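The bootstrap-and-intersect recipe above is straightforward to prototype with
scikit-learn; alpha and the number of bootstrap replications are illustrative
knobs, not values from the paper:

    import numpy as np
    from sklearn.linear_model import Lasso

    def bolasso_support(X, y, alpha=0.1, n_boot=32, seed=0):
        rng = np.random.default_rng(seed)
        n, p = X.shape
        support = np.ones(p, dtype=bool)
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)              # bootstrap resample
            coef = Lasso(alpha=alpha).fit(X[idx], y[idx]).coef_
            support &= np.abs(coef) > 1e-10          # intersect supports
        return np.flatnonzero(support)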
On Kernelization of Supervised Mahalanobis Distance Learners | cs.LG cs.AI | This paper focuses on the problem of kernelizing an existing supervised
Mahalanobis distance learner. The following features are included in the paper.
Firstly, three popular learners, namely, "neighborhood component analysis",
"large margin nearest neighbors" and "discriminant neighborhood embedding",
which do not have kernel versions are kernelized in order to improve their
classification performances. Secondly, an alternative kernelization framework
called "KPCA trick" is presented. Implementing a learner in the new framework
gains several advantages over the standard framework, e.g. no mathematical
formulas and no reprogramming are required for a kernel implementation, the
framework avoids troublesome problems such as singularity, etc. Thirdly, while
representer theorems were merely assumed to hold in previous papers related to
ours, here they are formally proven. The proofs
validate both the kernel trick and the KPCA trick in the context of Mahalanobis
distance learning. Fourthly, unlike previous works which always apply brute
force methods to select a kernel, we investigate two approaches which can be
efficiently adopted to construct an appropriate kernel for a given dataset.
Finally, numerical results on various real-world datasets are presented.
| Ratthachat Chatpatanasiri, Teesid Korsrilabutr, Pasakorn
Tangchanachaianan and Boonserm Kijsirikul | null | 0804.1441 | null | null |
Isotropic PCA and Affine-Invariant Clustering | cs.LG cs.CG | We present a new algorithm for clustering points in R^n. The key property of
the algorithm is that it is affine-invariant, i.e., it produces the same
partition for any affine transformation of the input. It has strong guarantees
when the input is drawn from a mixture model. For a mixture of two arbitrary
Gaussians, the algorithm correctly classifies the sample assuming only that the
two components are separable by a hyperplane, i.e., there exists a halfspace
that contains most of one Gaussian and almost none of the other in probability
mass. This is nearly the best possible, improving known results substantially.
For k > 2 components, the algorithm requires only that there be some
(k-1)-dimensional subspace in which the overlap in every direction is small.
Here we define overlap to be the ratio of the following two quantities: 1) the
average squared distance between a point and the mean of its component, and 2)
the average squared distance between a point and the mean of the mixture. The
main result may also be stated in the language of linear discriminant analysis:
if the standard Fisher discriminant is small enough, labels are not needed to
estimate the optimal subspace for projection. Our main tools are isotropic
transformation, spectral projection and a simple reweighting technique. We call
this combination isotropic PCA.
| S. Charles Brubaker and Santosh S. Vempala | null | 0804.3575 | null | null |
Multiple Random Oracles Are Better Than One | cs.LG | We study the problem of learning k-juntas given access to examples drawn from
a number of different product distributions. Thus we wish to learn a function f
: {-1,1}^n -> {-1,1} that depends on k (unknown) coordinates. While the best
known algorithms for the general problem of learning a k-junta require running
time of n^k * poly(n,2^k), we show that given access to k different product
distributions with biases separated by \gamma>0, the functions may be learned
in time poly(n,2^k,\gamma^{-k}). More generally, given access to t <= k
different product distributions, the functions may be learned in time n^{k/t} *
poly(n,2^k,\gamma^{-k}). Our techniques involve novel results in Fourier
analysis relating Fourier expansions with respect to different biases and a
generalization of Russo's formula.
| Jan Arpe and Elchanan Mossel | null | 0804.3817 | null | null |
Dependence Structure Estimation via Copula | cs.LG cs.IR stat.ME | Dependence structure estimation is one of the important problems in machine
learning domain and has many applications in different scientific areas. In
this paper, a theoretical framework for such estimation based on copula and
copula entropy -- the probabilistic theory of representation and measurement of
statistical dependence -- is proposed. Graphical models are considered as a
special case of the copula framework. A method within the framework for
estimating the maximum spanning copula is proposed. Owing to the copula, the
method is independent of the properties of individual variables, insensitive
to outliers, and able to deal with non-Gaussianity. Experiments on both
simulated data and a real dataset
demonstrated the effectiveness of the proposed method.
| Jian Ma and Zengqi Sun | null | 0804.4451 | null | null |