categories | doi | id | year | venue | link | updated | published | title | abstract | authors
string | string | string | float64 | string | string | string | string | string | string | sequence
---|---|---|---|---|---|---|---|---|---|---|
cs.LG cs.AI cs.IT math.IT | null | 0810.5636 | null | null | http://arxiv.org/pdf/0810.5636v1 | 2008-10-31T07:58:31Z | 2008-10-31T07:58:31Z | On the Possibility of Learning in Reactive Environments with Arbitrary
Dependence | We address the problem of reinforcement learning in which observations may
exhibit an arbitrary form of stochastic dependence on past observations and
actions, i.e. environments more general than (PO)MDPs. The task for an agent is
to attain the best possible asymptotic reward where the true generating
environment is unknown but belongs to a known countable family of environments.
We find some sufficient conditions on the class of environments under which an
agent exists which attains the best asymptotic reward for any environment in
the class. We analyze how tight these conditions are and how they relate to
different probabilistic assumptions known in reinforcement learning and related
fields, such as Markov Decision Processes and mixing conditions.
| [
"['Daniil Ryabko' 'Marcus Hutter']",
"Daniil Ryabko and Marcus Hutter"
] |
cs.LG | null | 0811.0139 | null | null | http://arxiv.org/pdf/0811.0139v1 | 2008-11-02T08:02:43Z | 2008-11-02T08:02:43Z | Entropy, Perception, and Relativity | In this paper, I expand Shannon's definition of entropy into a new form of
entropy that allows integration of information from different random events.
Shannon's notion of entropy is a special case of my more general definition of
entropy. I define probability using a so-called performance function, which is
de facto an exponential distribution. Assuming that my general notion of
entropy reflects the true uncertainty about a probabilistic event, I argue
that our perceived uncertainty differs. I claim that our perception is the
result of two opposing forces similar to the two famous antagonists in Chinese
philosophy: Yin and Yang. Based on this idea, I show that our perceived
uncertainty matches the true uncertainty at points determined by the golden
ratio. I demonstrate that the well-known sigmoid function, which we typically
employ in artificial neural networks as a non-linear threshold function,
describes the actual performance. Furthermore, I provide a motivation for the
time dilation in Einstein's Special Relativity, basically claiming that
although time dilation conforms with our perception, it does not correspond to
reality. At the end of the paper, I show how to apply this theoretical
framework to practical applications. I present recognition rates for a pattern
recognition problem, and also propose a network architecture that can take
advantage of general entropy to solve complex decision problems.
| [
"Stefan Jaeger",
"['Stefan Jaeger']"
] |
cs.LG cs.AI stat.ML | 10.3758/BRM.41.4.1201 | 0811.0146 | null | null | http://arxiv.org/abs/0811.0146v3 | 2009-05-14T12:51:44Z | 2008-11-02T09:21:40Z | Effect of Tuned Parameters on a LSA MCQ Answering Model | This paper presents the current state of a work in progress, whose objective
is to better understand the effects of factors that significantly influence the
performance of Latent Semantic Analysis (LSA). A difficult task, which consists
in answering (French) biology Multiple Choice Questions, is used to test the
semantic properties of the truncated singular space and to study the relative
influence of main parameters. Dedicated software has been designed to
fine-tune the LSA semantic space for the Multiple Choice Questions task. With
optimal parameters, the performance of our simple model is, quite
surprisingly, equal or superior to that of 7th and 8th grade students. This
indicates that the semantic spaces were quite good despite their low
dimensions and the small sizes of the training data sets. In addition, we
present an original global entropy weighting of the answers' terms for each of
the Multiple Choice Questions, which was necessary to achieve the model's
success.
| [
"['Alain Lifchitz' 'Sandra Jhean-Larose' 'Guy Denhière']",
"Alain Lifchitz (LIP6), Sandra Jhean-Larose (LPC), Guy Denhi\\`ere (LPC)"
] |
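The LSA pipeline in the abstract above is straightforward to sketch. Below is a
minimal, illustrative numpy version: a term-document matrix is reduced by
truncated SVD, and each candidate answer is scored by cosine similarity to the
question in the reduced space. The toy corpus and the dimension k are
placeholders, not the authors' tuned configuration or weighting scheme.

```python
import numpy as np

def truncated_svd_space(X, k):
    # X: term-document matrix (terms x documents); keep the top-k triplets
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k], s[:k]

def fold_in(counts, U, s):
    # project a bag-of-words count vector into the k-dim semantic space
    return (counts @ U) / s

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# toy corpus: rows = terms, columns = documents (raw counts)
X = np.array([[2., 0., 1.],
              [0., 3., 0.],
              [1., 1., 0.],
              [0., 0., 2.]])
U, s = truncated_svd_space(X, k=2)

question = np.array([2., 0., 1., 0.])      # bag-of-words of the question stem
answers = [np.array([1., 0., 1., 0.]),     # candidate answers
           np.array([0., 2., 0., 0.])]

q = fold_in(question, U, s)
scores = [cosine(q, fold_in(a, U, s)) for a in answers]
print("chosen answer:", int(np.argmax(scores)))
```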
cs.LG cs.IR | null | 0811.1250 | null | null | http://arxiv.org/pdf/0811.1250v1 | 2008-11-08T23:23:08Z | 2008-11-08T23:23:08Z | Adaptive Base Class Boost for Multi-class Classification | We develop the concept of ABC-Boost (Adaptive Base Class Boost) for
multi-class classification and present ABC-MART, a concrete implementation of
ABC-Boost. The original MART (Multiple Additive Regression Trees) algorithm has
been very successful in large-scale applications. For binary classification,
ABC-MART recovers MART. For multi-class classification, ABC-MART considerably
improves MART, as evaluated on several public data sets.
| [
"Ping Li",
"['Ping Li']"
] |
cs.LG | null | 0811.1629 | null | null | http://arxiv.org/pdf/0811.1629v1 | 2008-11-11T05:09:08Z | 2008-11-11T05:09:08Z | Stability Bound for Stationary Phi-mixing and Beta-mixing Processes | Most generalization bounds in learning theory are based on some measure of
the complexity of the hypothesis class used, independently of any algorithm. In
contrast, the notion of algorithmic stability can be used to derive tight
generalization bounds that are tailored to specific learning algorithms by
exploiting their particular properties. However, as in much of learning theory,
existing stability analyses and bounds apply only in the scenario where the
samples are independently and identically distributed. In many machine learning
applications, however, this assumption does not hold. The observations received
by the learning algorithm often have some inherent temporal dependence.
This paper studies the scenario where the observations are drawn from a
stationary phi-mixing or beta-mixing sequence, a widely adopted assumption in
the study of non-i.i.d. processes that implies a dependence between
observations weakening over time. We prove novel and distinct stability-based
generalization bounds for stationary phi-mixing and beta-mixing sequences.
These bounds strictly generalize the bounds given in the i.i.d. case and apply
to all stable learning algorithms, thereby extending the use of stability
bounds to non-i.i.d. scenarios.
We also illustrate the application of our phi-mixing generalization bounds to
general classes of learning algorithms, including Support Vector Regression,
Kernel Ridge Regression, and Support Vector Machines, as well as many other
kernel-regularization-based and relative-entropy-based regularization
algorithms.
These novel bounds can thus be viewed as the first theoretical basis for the
use of these algorithms in non-i.i.d. scenarios.
| [
"Mehryar Mohri and Afshin Rostamizadeh",
"['Mehryar Mohri' 'Afshin Rostamizadeh']"
] |
cs.IT cs.LG math.IT | null | 0811.1790 | null | null | http://arxiv.org/pdf/0811.1790v1 | 2008-11-11T22:46:10Z | 2008-11-11T22:46:10Z | Robust Regression and Lasso | Lasso, or $\ell^1$ regularized least squares, has been explored extensively
for its remarkable sparsity properties. It is shown in this paper that the
solution to Lasso, in addition to its sparsity, has robustness properties: it
is the solution to a robust optimization problem. This has two important
consequences. First, robustness provides a connection of the regularizer to a
physical property, namely, protection from noise. This allows a principled
selection of the regularizer, and in particular, generalizations of Lasso that
also yield convex optimization problems are obtained by considering different
uncertainty sets.
Second, robustness can itself be used as an avenue for exploring different
properties of the solution. In particular, it is shown that robustness of the
solution explains why the solution is sparse. The analysis as well as the
specific results obtained differ from standard sparsity results, providing
different geometric intuition. Furthermore, it is shown that the robust
optimization formulation is related to kernel density estimation, and based on
this approach, a proof that Lasso is consistent is given using robustness
directly. Finally, we present a theorem showing that sparsity and algorithmic
stability contradict each other, and hence that Lasso is not stable.
| [
"Huan Xu, Constantine Caramanis and Shie Mannor",
"['Huan Xu' 'Constantine Caramanis' 'Shie Mannor']"
] |
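For context on the Lasso formulation discussed above, here is a minimal
proximal-gradient (ISTA) solver for min_w ||y - Xw||^2/(2n) + lambda*||w||_1 in
numpy. It is a generic sketch of the optimization problem, not the paper's
robust-optimization machinery; the step size and lambda are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=500):
    n, d = X.shape
    w = np.zeros(d)
    L = np.linalg.norm(X, 2) ** 2 / n   # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n
        w = soft_threshold(w - grad / L, lam / L)
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
w_true = np.zeros(20); w_true[:3] = [2.0, -1.5, 1.0]   # sparse ground truth
y = X @ w_true + 0.1 * rng.standard_normal(100)
w_hat = lasso_ista(X, y, lam=0.1)
print("recovered support:", np.flatnonzero(np.abs(w_hat) > 1e-6))
```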
cs.LG | null | 0811.2016 | null | null | http://arxiv.org/pdf/0811.2016v1 | 2008-11-13T01:23:47Z | 2008-11-13T01:23:47Z | Land Cover Mapping Using Ensemble Feature Selection Methods | Ensemble classification is an emerging approach to land cover mapping whereby
the final classification output is a result of a consensus of classifiers.
Intuitively, an ensemble system should consist of base classifiers which are
diverse, i.e., classifiers whose decision boundaries err differently. In this
paper, ensemble feature selection is used to impose diversity in ensembles. The
features of the constituent base classifiers for each ensemble were created
through an exhaustive search algorithm using different separability indices.
For each ensemble, the classification accuracy was derived, as well as a
diversity measure intended to quantify the in-ensemble diversity. The
correlation between ensemble classification accuracy and diversity measure was
determined to establish the interplay between the two variables. The findings
of this paper indicate that diversity measures, as currently formulated, do
not provide an adequate basis for constituting ensembles for land cover
mapping.
| [
"A. Gidudu, B. Abe and T. Marwala",
"['A. Gidudu' 'B. Abe' 'T. Marwala']"
] |
cs.LG cs.AI | null | 0811.4413 | null | null | http://arxiv.org/pdf/0811.4413v6 | 2012-07-06T23:29:02Z | 2008-11-26T20:22:51Z | A Spectral Algorithm for Learning Hidden Markov Models | Hidden Markov Models (HMMs) are one of the most fundamental and widely used
statistical tools for modeling discrete time series. In general, learning HMMs
from data is computationally hard (under cryptographic assumptions), and
practitioners typically resort to search heuristics which suffer from the usual
local optima issues. We prove that under a natural separation condition (bounds
on the smallest singular value of the HMM parameters), there is an efficient
and provably correct algorithm for learning HMMs. The sample complexity of the
algorithm does not explicitly depend on the number of distinct (discrete)
observations---it implicitly depends on this quantity through spectral
properties of the underlying HMM. This makes the algorithm particularly
applicable to settings with a large number of observations, such as those in
natural language processing, where the observation space is sometimes the set
of words in a language. The algorithm is also simple, employing only a singular
value decomposition and matrix multiplications.
| [
"['Daniel Hsu' 'Sham M. Kakade' 'Tong Zhang']",
"Daniel Hsu, Sham M. Kakade, Tong Zhang"
] |
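The observable-operator form of the spectral algorithm sketched in the abstract
above can be rendered schematically in numpy as follows. Estimation from a
finite sample of observation triples, the choice of m, and the omission of the
paper's concentration analysis are all simplifications of this sketch.

```python
import numpy as np

def spectral_hmm(triples, n_obs, m):
    # Empirical estimates from i.i.d. triples (x1, x2, x3) of observations.
    P1 = np.zeros(n_obs)
    P21 = np.zeros((n_obs, n_obs))          # P21[i, j] = Pr[x2 = i, x1 = j]
    P3x1 = np.zeros((n_obs, n_obs, n_obs))  # P3x1[x][i, j] = Pr[x3=i, x2=x, x1=j]
    for x1, x2, x3 in triples:
        P1[x1] += 1
        P21[x2, x1] += 1
        P3x1[x2][x3, x1] += 1
    n = len(triples)
    P1 /= n; P21 /= n; P3x1 /= n

    U = np.linalg.svd(P21)[0][:, :m]        # top-m left singular vectors
    pinv = np.linalg.pinv(U.T @ P21)
    B = [U.T @ P3x1[x] @ pinv for x in range(n_obs)]  # observable operators B_x
    b1 = U.T @ P1
    binf = np.linalg.pinv(P21.T @ U) @ P1
    return B, b1, binf

def sequence_prob(seq, B, b1, binf):
    # Pr[x_1, ..., x_t] = binf^T B_{x_t} ... B_{x_1} b_1
    b = b1
    for x in seq:
        b = B[x] @ b
    return float(binf @ b)
```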
cs.LG cs.AI | null | 0811.4458 | null | null | http://arxiv.org/pdf/0811.4458v2 | 2009-10-20T18:58:20Z | 2008-11-27T01:02:33Z | Learning Class-Level Bayes Nets for Relational Data | Many databases store data in relational format, with different types of
entities and information about links between the entities. The field of
statistical-relational learning (SRL) has developed a number of new statistical
models for such data. In this paper we focus on learning class-level or
first-order dependencies, which model the general database statistics over
attributes of linked objects and links (e.g., the percentage of A grades given
in computer science classes). Class-level statistical relationships are
important in themselves, and they support applications like policy making,
strategic planning, and query optimization. Most current SRL methods find
class-level dependencies, but their main task is to support instance-level
predictions about the attributes or links of specific entities. We focus only
on class-level prediction, and describe algorithms for learning class-level
models that are orders of magnitude faster for this task. Our algorithms learn
Bayes nets with relational structure, leveraging the efficiency of single-table
nonrelational Bayes net learners. An evaluation of our methods on three data
sets shows that they are computationally feasible for realistic table sizes,
and that the learned structures represent the statistical information in the
databases well. After learning compiles the database statistics into a Bayes
net, querying these statistics via Bayes net inference is faster than with SQL
queries, and does not depend on the size of the database.
| [
"Oliver Schulte, Hassan Khosravi, Flavia Moser, Martin Ester",
"['Oliver Schulte' 'Hassan Khosravi' 'Flavia Moser' 'Martin Ester']"
] |
cs.CG cs.DS cs.LG | null | 0812.0382 | null | null | http://arxiv.org/pdf/0812.0382v1 | 2008-12-01T22:55:39Z | 2008-12-01T22:55:39Z | k-means requires exponentially many iterations even in the plane | The k-means algorithm is a well-known method for partitioning n points that
lie in the d-dimensional space into k clusters. Its main features are
simplicity and speed in practice. Theoretically, however, the best known upper
bound on its running time (i.e. O(n^{kd})) can be exponential in the number of
points. Recently, Arthur and Vassilvitskii [3] showed a super-polynomial
worst-case analysis, improving the best known lower bound from \Omega(n) to
2^{\Omega(\sqrt{n})} with a construction in d=\Omega(\sqrt{n}) dimensions. In
[3] they also conjectured the existence of superpolynomial lower bounds for any
d >= 2.
Our contribution is twofold: we prove this conjecture and we improve the
lower bound, by presenting a simple construction in the plane that leads to the
exponential lower bound 2^{\Omega(n)}.
| [
"['Andrea Vattani']",
"Andrea Vattani"
] |
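For reference, the k-means (Lloyd's) iteration whose worst-case behavior the
abstract above analyzes looks like the following minimal numpy sketch. In
practice it typically terminates after few iterations, which is exactly the
gap with the exponential lower bound constructed in the paper. The data and
initialization below are illustrative.

```python
import numpy as np

def kmeans(points, k, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for it in range(max_iter):
        # assignment step: nearest center for every point
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        # update step: each center moves to the mean of its cluster
        new_centers = np.array([points[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
        if np.allclose(new_centers, centers):   # no center moved: converged
            return labels, centers, it
        centers = new_centers
    return labels, centers, max_iter

rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(c, 0.3, (50, 2)) for c in [(0, 0), (3, 0), (0, 3)]])
labels, centers, iters = kmeans(pts, k=3)
print("iterations until convergence:", iters)
```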
cs.DS cs.LG | null | 0812.0389 | null | null | http://arxiv.org/pdf/0812.0389v4 | 2009-11-09T15:50:32Z | 2008-12-01T23:17:35Z | Approximation Algorithms for Bregman Co-clustering and Tensor Clustering | In the past few years powerful generalizations to the Euclidean k-means
problem have been made, such as Bregman clustering [7], co-clustering (i.e.,
simultaneous clustering of rows and columns of an input matrix) [9,18], and
tensor clustering [8,34]. Like k-means, these more general problems also suffer
from the NP-hardness of the associated optimization. Researchers have developed
approximation algorithms of varying degrees of sophistication for k-means,
k-medians, and more recently also for Bregman clustering [2]. However, there
seem to be no approximation algorithms for Bregman co- and tensor clustering.
In this paper we derive the first (to our knowledge) guaranteed methods for
these increasingly important clustering settings. Going beyond Bregman
divergences, we also prove an approximation factor for tensor clustering with
arbitrary separable metrics. Through extensive experiments we evaluate the
characteristics of our method, and show that it also has practical impact.
| [
"Stefanie Jegelka, Suvrit Sra, Arindam Banerjee",
"['Stefanie Jegelka' 'Suvrit Sra' 'Arindam Banerjee']"
] |
cs.LG cs.AI cs.CV cs.GT cs.MA cs.NE quant-ph | 10.1088/1751-8113/42/44/445303 | 0812.0743 | null | null | http://arxiv.org/abs/0812.0743v2 | 2009-10-10T09:10:36Z | 2008-12-03T15:46:03Z | A Novel Clustering Algorithm Based on Quantum Games | Enormous successes have been made by quantum algorithms during the last
decade. In this paper, we combine the quantum game with the problem of data
clustering, and then develop a quantum-game-based clustering algorithm, in
which data points in a dataset are considered as players who can make decisions
and implement quantum strategies in quantum games. After each round of a
quantum game, each player's expected payoff is calculated. Later, he uses a
link-removing-and-rewiring (LRR) function to change his neighbors and adjust
the strength of links connecting to them in order to maximize his payoff.
Further, algorithms are discussed and analyzed in two cases of strategies, two
payoff matrices and two LRR functions. Consequently, the simulation results
have demonstrated that data points in datasets are clustered reasonably and
efficiently, and the clustering algorithms have fast rates of convergence.
Moreover, the comparison with other algorithms also provides an indication of
the effectiveness of the proposed approach.
| [
"['Qiang Li' 'Yan He' 'Jing-ping Jiang']",
"Qiang Li, Yan He, Jing-ping Jiang"
] |
cs.LG cs.CC | null | 0812.0933 | null | null | http://arxiv.org/pdf/0812.0933v1 | 2008-12-04T13:34:26Z | 2008-12-04T13:34:26Z | Decision trees are PAC-learnable from most product distributions: a
smoothed analysis | We consider the problem of PAC-learning decision trees, i.e., learning a
decision tree over the n-dimensional hypercube from independent random labeled
examples. Despite significant effort, no polynomial-time algorithm is known for
learning polynomial-sized decision trees (even trees of any super-constant
size), even when examples are assumed to be drawn from the uniform distribution
on {0,1}^n. We give an algorithm that learns arbitrary polynomial-sized
decision trees for {\em most product distributions}. In particular, consider a
random product distribution where the bias of each bit is chosen independently
and uniformly from, say, [.49,.51]. Then with high probability over the
parameters of the product distribution and the random examples drawn from it,
the algorithm will learn any tree. More generally, in the spirit of smoothed
analysis, we consider an arbitrary product distribution whose parameters are
specified only up to a [-c,c] accuracy (perturbation), for an arbitrarily small
positive constant c.
| [
"['Adam Tauman Kalai' 'Shang-Hua Teng']",
"Adam Tauman Kalai and Shang-Hua Teng"
] |
cs.IR cs.LG | 10.1186/gb-2008-9-s2-s11 | 0812.1029 | null | null | http://arxiv.org/abs/0812.1029v1 | 2008-12-04T21:37:35Z | 2008-12-04T21:37:35Z | Uncovering protein interaction in abstracts and text using a novel
linear model and word proximity networks | We participated in three of the protein-protein interaction subtasks of the
Second BioCreative Challenge: classification of abstracts relevant for
protein-protein interaction (IAS), discovery of protein pairs (IPS) and text
passages characterizing protein interaction (ISS) in full text documents. We
approached the abstract classification task with a novel, lightweight linear
model inspired by spam-detection techniques, as well as an uncertainty-based
integration scheme. We also used a Support Vector Machine and the Singular
Value Decomposition on the same features for comparison purposes. Our approach
to the full text subtasks (protein pair and passage identification) includes a
feature expansion method based on word-proximity networks. Our approach to the
abstract classification task (IAS) was among the top submissions for this task
in terms of the measures of performance used in the challenge evaluation
(accuracy, F-score and AUC). We also report on a web-tool we produced using our
approach: the Protein Interaction Abstract Relevance Evaluator (PIARE). Our
approach to the full text tasks resulted in one of the highest recall rates as
well as mean reciprocal rank of correct passages. Our approach to abstract
classification shows that a simple linear model, using relatively few features,
is capable of generalizing and uncovering the conceptual nature of
protein-protein interaction from the bibliome. Since the novel approach is
based on a very lightweight linear model, it can be easily ported and applied
to similar problems. In full text problems, the expansion of word features with
word-proximity networks is shown to be useful, though the need for some
improvements is discussed.
| [
"['Alaa Abi-Haidar' 'Jasleen Kaur' 'Ana G. Maguitman' 'Predrag Radivojac'\n 'Andreas Retchsteiner' 'Karin Verspoor' 'Zhiping Wang' 'Luis M. Rocha']",
"Alaa Abi-Haidar, Jasleen Kaur, Ana G. Maguitman, Predrag Radivojac,\n Andreas Retchsteiner, Karin Verspoor, Zhiping Wang, Luis M. Rocha"
] |
cs.MM cs.LG | null | 0812.1244 | null | null | http://arxiv.org/pdf/0812.1244v1 | 2008-12-05T23:14:41Z | 2008-12-05T23:14:41Z | Decomposition Principles and Online Learning in Cross-Layer Optimization
for Delay-Sensitive Applications | In this paper, we propose a general cross-layer optimization framework in
which we explicitly consider both the heterogeneous and dynamically changing
characteristics of delay-sensitive applications and the underlying time-varying
network conditions. We consider both the independently decodable data units
(DUs, e.g. packets) and the interdependent DUs whose dependencies are captured
by a directed acyclic graph (DAG). We first formulate the cross-layer design as
a non-linear constrained optimization problem by assuming complete knowledge of
the application characteristics and the underlying network conditions. The
constrained cross-layer optimization is decomposed into several cross-layer
optimization subproblems for each DU and two master problems. The proposed
decomposition method determines the necessary message exchanges between layers
for achieving the optimal cross-layer solution. However, the attributes (e.g.
distortion impact, delay deadline, etc.) of future DUs as well as the network
conditions are often unknown in the considered real-time applications. The
impact of current cross-layer actions on the future DUs can be characterized by
a state-value function in the Markov decision process (MDP) framework. Based on
the dynamic programming solution to the MDP, we develop a low-complexity
cross-layer optimization algorithm using online learning for each DU
transmission. This online algorithm can be implemented in real-time in order to
cope with unknown source characteristics, network dynamics and resource
constraints. Our numerical results demonstrate the efficiency of the proposed
online algorithm.
| [
"Fangwen Fu, Mihaela van der Schaar",
"['Fangwen Fu' 'Mihaela van der Schaar']"
] |
cs.LG | null | 0812.1357 | null | null | http://arxiv.org/pdf/0812.1357v1 | 2008-12-07T15:22:27Z | 2008-12-07T15:22:27Z | A Novel Clustering Algorithm Based on Quantum Random Walk | Enormous successes have been made by quantum algorithms during the last
decade. In this paper, we combine the quantum random walk (QRW) with the
problem of data clustering, and develop two clustering algorithms based on the
one dimensional QRW. Then, the probability distributions on the positions
induced by QRW in these algorithms are investigated, which also indicates the
possibility of obtaining better results. Consequently, the experimental results
have demonstrated that data points in datasets are clustered reasonably and
efficiently, and the clustering algorithms have fast rates of convergence.
Moreover, the comparison with other algorithms also provides an indication of
the effectiveness of the proposed approach.
| [
"['Qiang Li' 'Yan He' 'Jing-ping Jiang']",
"Qiang Li, Yan He, Jing-ping Jiang"
] |
cs.LG | null | 0812.1869 | null | null | http://arxiv.org/pdf/0812.1869v1 | 2008-12-10T09:00:40Z | 2008-12-10T09:00:40Z | Convex Sparse Matrix Factorizations | We present a convex formulation of dictionary learning for sparse signal
decomposition. Convexity is obtained by replacing the usual explicit upper
bound on the dictionary size by a convex rank-reducing term similar to the
trace norm. In particular, our formulation introduces an explicit trade-off
between size and sparsity of the decomposition of rectangular matrices. Using a
large set of synthetic examples, we compare the estimation abilities of the
convex and non-convex approaches, showing that while the convex formulation has
a single local minimum, this may lead in some cases to performance which is
inferior to the local minima of the non-convex formulation.
| [
"['Francis Bach' 'Julien Mairal' 'Jean Ponce']",
"Francis Bach (INRIA Rocquencourt), Julien Mairal (INRIA Rocquencourt),\n Jean Ponce (INRIA Rocquencourt)"
] |
cs.DS cs.GT cs.LG | null | 0812.2291 | null | null | http://arxiv.org/pdf/0812.2291v7 | 2013-06-03T21:03:36Z | 2008-12-12T04:13:01Z | Characterizing Truthful Multi-Armed Bandit Mechanisms | We consider a multi-round auction setting motivated by pay-per-click auctions
for Internet advertising. In each round the auctioneer selects an advertiser
and shows her ad, which is then either clicked or not. An advertiser derives
value from clicks; the value of a click is her private information. Initially,
neither the auctioneer nor the advertisers have any information about the
likelihood of clicks on the advertisements. The auctioneer's goal is to design
a (dominant strategies) truthful mechanism that (approximately) maximizes the
social welfare.
If the advertisers bid their true private values, our problem is equivalent
to the "multi-armed bandit problem", and thus can be viewed as a strategic
version of the latter. In particular, for both problems the quality of an
algorithm can be characterized by "regret", the difference in social welfare
between the algorithm and the benchmark which always selects the same "best"
advertisement. We investigate how the design of multi-armed bandit algorithms
is affected by the restriction that the resulting mechanism must be truthful.
We find that truthful mechanisms have certain strong structural properties --
essentially, they must separate exploration from exploitation -- and they incur
much higher regret than the optimal multi-armed bandit algorithms. Moreover, we
provide a truthful mechanism which (essentially) matches our lower bound on
regret.
| [
"Moshe Babaioff, Yogeshwer Sharma, Aleksandrs Slivkins",
"['Moshe Babaioff' 'Yogeshwer Sharma' 'Aleksandrs Slivkins']"
] |
cs.CV cs.LG | null | 0812.2574 | null | null | http://arxiv.org/pdf/0812.2574v1 | 2008-12-13T19:09:03Z | 2008-12-13T19:09:03Z | Feature Selection By KDDA For SVM-Based MultiView Face Recognition | Applications such as face recognition that deal with high-dimensional data
need a mapping technique that introduces representation of low-dimensional
features with enhanced discriminatory power and a proper classifier, able to
classify those complex features. Most traditional Linear Discriminant Analysis
methods suffer from the disadvantage that their optimality criteria are not
directly related to the classification ability of the obtained feature
representation. Moreover, their classification accuracy is affected by the
"small sample size" problem which is often encountered in FR tasks. In this
short paper, we combine nonlinear kernel based mapping of data called KDDA with
Support Vector machine classifier to deal with both of the shortcomings in an
efficient and cost-effective manner. The method proposed here is compared, in
terms of classification accuracy, to other commonly used FR methods on the
UMIST face database. Results indicate that the performance of the proposed
method is overall superior to that of traditional FR approaches, such as the Eigenfaces,
Fisherfaces, and D-LDA methods and traditional linear classifiers.
| [
"Seyyed Majid Valiollahzadeh, Abolghasem Sayadiyan, Mohammad Nazari",
"['Seyyed Majid Valiollahzadeh' 'Abolghasem Sayadiyan' 'Mohammad Nazari']"
] |
cs.CV cs.LG | null | 0812.2575 | null | null | http://arxiv.org/pdf/0812.2575v1 | 2008-12-13T19:14:53Z | 2008-12-13T19:14:53Z | Face Detection Using Adaboosted SVM-Based Component Classifier | Recently, Adaboost has been widely used to improve the accuracy of any given
learning algorithm. In this paper we focus on designing an algorithm that
employs a combination of Adaboost with Support Vector Machines as weak
component classifiers for the face detection task. To obtain a set of
effective SVM weak-learner classifiers, the algorithm adaptively adjusts the
kernel parameter in the SVM instead of using a fixed one. The proposed
combination outperforms SVM in generalization on imbalanced classification
problems. The method proposed here is compared, in terms of classification
accuracy, to other commonly used Adaboost methods, such as Decision Trees and
Neural Networks, on the CMU+MIT face database. Results indicate that the performance of
the proposed method is overall superior to previous Adaboost approaches.
| [
"Seyyed Majid Valiollahzadeh, Abolghasem Sayadiyan, Mohammad Nazari",
"['Seyyed Majid Valiollahzadeh' 'Abolghasem Sayadiyan' 'Mohammad Nazari']"
] |
cs.LG | null | 0812.3145 | null | null | http://arxiv.org/pdf/0812.3145v2 | 2008-12-16T21:05:28Z | 2008-12-16T20:41:06Z | Binary Classification Based on Potentials | We introduce a simple and computationally trivial method for binary
classification based on the evaluation of potential functions. We demonstrate
that despite the conceptual and computational simplicity of the method its
performance can match or exceed that of standard Support Vector Machine
methods.
| [
"Erik Boczko, Andrew DiLullo and Todd Young",
"['Erik Boczko' 'Andrew DiLullo' 'Todd Young']"
] |
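A minimal sketch of a potential-function classifier in the spirit of the
abstract above: each class exerts a "potential" at a query point and the
prediction is the sign of the difference. The Gaussian kernel and the
per-class normalization are common choices assumed here, not necessarily the
authors' exact construction.

```python
import numpy as np

def potential(query, points, sigma=1.0):
    # sum of Gaussian potentials exerted on `query` by a set of points
    d2 = ((points - query) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * sigma ** 2)).sum()

def classify(query, X_pos, X_neg, sigma=1.0):
    # sign of the difference of the two (size-normalized) class potentials
    score = potential(query, X_pos, sigma) / len(X_pos) \
          - potential(query, X_neg, sigma) / len(X_neg)
    return 1 if score >= 0 else -1

rng = np.random.default_rng(0)
X_pos = rng.normal(+1.0, 0.7, (40, 2))
X_neg = rng.normal(-1.0, 0.7, (40, 2))
print(classify(np.array([0.8, 0.9]), X_pos, X_neg))   # expected: +1
```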
cs.LG cs.CG | null | 0812.3147 | null | null | http://arxiv.org/pdf/0812.3147v1 | 2008-12-16T20:58:24Z | 2008-12-16T20:58:24Z | Comparison of Binary Classification Based on Signed Distance Functions
with Support Vector Machines | We investigate the performance of a simple signed distance function (SDF)
based method by direct comparison with standard SVM packages, as well as
K-nearest neighbor and RBFN methods. We present experimental results comparing
the SDF approach with other classifiers on both synthetic geometric problems
and five benchmark clinical microarray data sets. On both geometric problems
and microarray data sets, the non-optimized SDF-based classifiers perform just
as well as, or slightly better than, well-developed, standard SVM methods. These
results demonstrate the potential accuracy of SDF-based methods on some types
of problems.
| [
"['Erik M. Boczko' 'Todd Young' 'Minhui Zie' 'Di Wu']",
"Erik M. Boczko, Todd Young, Minhui Zie, and Di Wu"
] |
quant-ph cs.LG | null | 0812.3429 | null | null | http://arxiv.org/pdf/0812.3429v3 | 2012-03-15T03:31:18Z | 2008-12-17T22:46:18Z | Quantum Predictive Learning and Communication Complexity with Single
Input | We define a new model of quantum learning that we call Predictive Quantum
(PQ). This is a quantum analogue of PAC, where during the testing phase the
student is only required to answer a polynomial number of testing queries.
We demonstrate a relational concept class that is efficiently learnable in
PQ, while in any "reasonable" classical model an exponential amount of training
data would be required. This is the first unconditional separation between
quantum and classical learning.
We show that our separation is the best possible in several ways; in
particular, there is no analogous result for a functional class, as well as for
several weaker versions of quantum learning. In order to demonstrate tightness
of our separation we consider a special case of one-way communication that we
call single-input mode, where Bob receives no input. Somewhat surprisingly,
this setting becomes nontrivial when relational communication tasks are
considered. In particular, any problem with two-sided input can be transformed
into a single-input relational problem of equal classical one-way cost. We show
that the situation is different in the quantum case, where the same
transformation can make the communication complexity exponentially larger. This
happens if and only if the original problem has exponential gap between quantum
and classical one-way communication costs. We believe that these auxiliary
results might be of independent interest.
| [
"Dmytro Gavinsky",
"['Dmytro Gavinsky']"
] |
cs.LG | null | 0812.3465 | null | null | http://arxiv.org/pdf/0812.3465v2 | 2010-02-24T15:54:49Z | 2008-12-18T07:59:33Z | Linearly Parameterized Bandits | We consider bandit problems involving a large (possibly infinite) collection
of arms, in which the expected reward of each arm is a linear function of an
$r$-dimensional random vector $\mathbf{Z} \in \mathbb{R}^r$, where $r \geq 2$.
The objective is to minimize the cumulative regret and Bayes risk. When the set
of arms corresponds to the unit sphere, we prove that the regret and Bayes risk
are of order $\Theta(r \sqrt{T})$, by establishing a lower bound for an
arbitrary policy, and showing that a matching upper bound is obtained through a
policy that alternates between exploration and exploitation phases. The
phase-based policy is also shown to be effective if the set of arms satisfies a
strong convexity condition. For the case of a general set of arms, we describe
a near-optimal policy whose regret and Bayes risk admit upper bounds of the
form $O(r \sqrt{T} \log^{3/2} T)$.
| [
"['Paat Rusmevichientong' 'John N. Tsitsiklis']",
"Paat Rusmevichientong and John N. Tsitsiklis"
] |
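A toy numpy rendering of the explore/exploit alternation described above for
arms on the unit sphere: exploration rounds estimate Z by averaging rewards on
fixed basis arms, and exploitation rounds pull the greedy arm, which on the
sphere is the normalized estimate. The phase schedule and noise level are
illustrative, not the paper's exact policy.

```python
import numpy as np

rng = np.random.default_rng(0)
r, T = 3, 5000
z = rng.standard_normal(r); z /= np.linalg.norm(z)   # unknown parameter, |z|=1

n_explore = int(np.sqrt(T))      # exploration phase length (illustrative)
sums = np.zeros(r); counts = np.zeros(r)
total = 0.0
for t in range(T):
    if t < n_explore:            # exploration: cycle through basis arms e_i
        i = t % r
        arm = np.eye(r)[i]
    else:                        # exploitation: greedy arm on the unit sphere
        z_hat = sums / np.maximum(counts, 1)
        arm = z_hat / (np.linalg.norm(z_hat) + 1e-12)
    payoff = arm @ z + 0.1 * rng.standard_normal()
    if t < n_explore:
        sums[i] += payoff; counts[i] += 1
    total += payoff
print(f"average reward {total / T:.3f} vs. optimal 1.0")
```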
cs.LG cs.AI | null | 0812.4044 | null | null | http://arxiv.org/pdf/0812.4044v3 | 2016-04-03T21:41:38Z | 2008-12-21T17:45:27Z | The Offset Tree for Learning with Partial Labels | We present an algorithm, called the Offset Tree, for learning to make
decisions in situations where the payoff of only one choice is observed, rather
than all choices. The algorithm reduces this setting to binary classification,
allowing one to reuse of any existing, fully supervised binary classification
algorithm in this partial information setting. We show that the Offset Tree is
an optimal reduction to binary classification. In particular, it has regret at
most $(k-1)$ times the regret of the binary classifier it uses (where $k$ is
the number of choices), and no reduction to binary classification can do
better. This reduction is also computationally optimal, both at training and
test time, requiring just $O(\log_2 k)$ work to train on an example or make a
prediction.
Experiments with the Offset Tree show that it generally performs better than
several alternative approaches.
| [
"Alina Beygelzimer and John Langford",
"['Alina Beygelzimer' 'John Langford']"
] |
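For intuition only, here is the binary (k = 2) version of the offset idea as it
is commonly presented: a bandit observation (x, action a, reward r in [0,1],
action probability p) becomes an importance-weighted binary example whose label
points toward the better-looking arm, with the reward offset by 1/2. This is a
sketch of the reduction's core trick under a randomized logging policy, not
the full tree construction for general k.

```python
def offset_binary_example(x, action, reward, p_action, offset=0.5):
    """Map one bandit observation to an importance-weighted binary example.

    Returns (features, label, weight): the label is the action that looks
    better after offsetting the reward; the weight is the importance-weighted
    magnitude of the advantage.
    """
    advantage = (reward - offset) / p_action
    if advantage >= 0:
        return x, action, advantage        # observed action looked good
    return x, 1 - action, -advantage       # looked bad: prefer the other arm

# usage with any weighted binary learner (hypothetical `logged_data`, `clf`):
# X, y, w = zip(*(offset_binary_example(x, a, r, p) for x, a, r, p in logged_data))
# clf.fit(X, y, sample_weight=w)
```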
cs.LG cs.AI | 10.1109/TNN.2010.2095882 | 0812.4235 | null | null | http://arxiv.org/abs/0812.4235v2 | 2010-01-11T15:37:43Z | 2008-12-22T16:34:39Z | Client-server multi-task learning from distributed datasets | A client-server architecture to simultaneously solve multiple learning tasks
from distributed datasets is described. In such an architecture, each client is
associated with an individual learning task and the associated dataset of
examples. The goal of the architecture is to perform information fusion from
multiple datasets while preserving privacy of individual data. The role of the
server is to collect data in real-time from the clients and codify the
information in a common database. The information coded in this database can be
used by all the clients to solve their individual learning task, so that each
client can exploit the informative content of all the datasets without actually
having access to private data of others. The proposed algorithmic framework,
based on regularization theory and kernel methods, uses a suitable class of
mixed effect kernels. The new method is illustrated through a simulated music
recommendation system.
| [
"Francesco Dinuzzo, Gianluigi Pillonetto, Giuseppe De Nicolao",
"['Francesco Dinuzzo' 'Gianluigi Pillonetto' 'Giuseppe De Nicolao']"
] |
cs.CL cs.AI cs.LG | 10.1613/jair.2693 | 0812.4446 | null | null | http://arxiv.org/abs/0812.4446v1 | 2008-12-23T20:08:53Z | 2008-12-23T20:08:53Z | The Latent Relation Mapping Engine: Algorithm and Experiments | Many AI researchers and cognitive scientists have argued that analogy is the
core of cognition. The most influential work on computational modeling of
analogy-making is Structure Mapping Theory (SMT) and its implementation in the
Structure Mapping Engine (SME). A limitation of SME is the requirement for
complex hand-coded representations. We introduce the Latent Relation Mapping
Engine (LRME), which combines ideas from SME and Latent Relational Analysis
(LRA) in order to remove the requirement for hand-coded representations. LRME
builds analogical mappings between lists of words, using a large corpus of raw
text to automatically discover the semantic relations among the words. We
evaluate LRME on a set of twenty analogical mapping problems, ten based on
scientific analogies and ten based on common metaphors. LRME achieves
human-level performance on the twenty problems. We compare LRME with a variety
of alternative approaches and find that they are not able to reach the same
level of performance.
| [
"['Peter D. Turney']",
"Peter D. Turney (National Research Council of Canada)"
] |
cs.AI cs.IT cs.LG math.IT | null | 0812.4580 | null | null | http://arxiv.org/pdf/0812.4580v1 | 2008-12-25T00:27:22Z | 2008-12-25T00:27:22Z | Feature Markov Decision Processes | General purpose intelligent learning agents cycle through (complex, non-MDP)
sequences of observations, actions, and rewards. On the other hand,
reinforcement learning is well-developed for small finite state Markov Decision
Processes (MDPs). So far it is an art performed by human designers to extract
the right state representation out of the bare observations, i.e. to reduce the
agent setup to the MDP framework. Before we can think of mechanizing this
search for suitable MDPs, we need a formal objective criterion. The main
contribution of this article is to develop such a criterion. I also integrate
the various parts into one learning algorithm. Extensions to more realistic
dynamic Bayesian networks are developed in a companion article.
| [
"Marcus Hutter",
"['Marcus Hutter']"
] |
cs.AI cs.IT cs.LG math.IT | null | 0812.4581 | null | null | http://arxiv.org/pdf/0812.4581v1 | 2008-12-25T00:32:45Z | 2008-12-25T00:32:45Z | Feature Dynamic Bayesian Networks | Feature Markov Decision Processes (PhiMDPs) are well-suited for learning
agents in general environments. Nevertheless, unstructured (Phi)MDPs are
limited to relatively simple environments. Structured MDPs like Dynamic
Bayesian Networks (DBNs) are used for large-scale real-world problems. In this
article I extend PhiMDP to PhiDBN. The primary contribution is to derive a cost
criterion that allows one to automatically extract the most relevant features from
the environment, leading to the "best" DBN representation. I discuss all
building blocks required for a complete general learning algorithm.
| [
"Marcus Hutter",
"['Marcus Hutter']"
] |
cs.LG | null | 0812.4952 | null | null | http://arxiv.org/pdf/0812.4952v4 | 2009-05-20T17:40:23Z | 2008-12-29T18:29:08Z | Importance Weighted Active Learning | We present a practical and statistically consistent scheme for actively
learning binary classifiers under general loss functions. Our algorithm uses
importance weighting to correct sampling bias, and by controlling the variance,
we are able to give rigorous label complexity bounds for the learning process.
Experiments on passively labeled data show that this approach reduces the label
complexity required to achieve good predictive performance on many learning
problems.
| [
"['Alina Beygelzimer' 'Sanjoy Dasgupta' 'John Langford']",
"Alina Beygelzimer, Sanjoy Dasgupta, and John Langford"
] |
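A schematic of the importance-weighting mechanics described above: each
arriving example is labeled with some probability p_t, and a labeled point
enters the training set with weight 1/p_t, which keeps the weighted empirical
loss unbiased. The rejection probability below is a placeholder heuristic; the
paper derives principled choices that control the variance.

```python
import numpy as np

def iwal_stream(stream, query_prob, rng):
    """Importance weighted active learning over a stream of (x, oracle_label).

    query_prob(x, history) -> p_t in (0, 1]; labeled points get weight 1/p_t,
    making the importance-weighted empirical loss an unbiased estimate.
    """
    labeled = []                      # (x, y, importance weight)
    for x, oracle_y in stream:
        p = query_prob(x, labeled)
        if rng.random() < p:          # flip a coin with bias p_t
            labeled.append((x, oracle_y, 1.0 / p))
    return labeled

rng = np.random.default_rng(0)
stream = [(rng.standard_normal(2), int(rng.integers(0, 2))) for _ in range(1000)]
# placeholder heuristic: query less as the labeled set grows
data = iwal_stream(stream, lambda x, hist: max(0.05, 1.0 / (1 + len(hist) / 50)), rng)
print(len(data), "labels queried out of", len(stream))
```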
cs.LG cs.AI cs.CV physics.soc-ph | null | 0812.5032 | null | null | http://arxiv.org/pdf/0812.5032v1 | 2008-12-30T08:30:27Z | 2008-12-30T08:30:27Z | A New Clustering Algorithm Based Upon Flocking On Complex Network | We have proposed a model based upon flocking on a complex network, and then
developed two clustering algorithms on the basis of it. In the algorithms,
first, a k-nearest neighbor (knn) graph, as a weighted and directed graph, is
produced among all data points in a dataset, each of which is regarded as an
agent that can move in space; then a time-varying complex network is created
by adding long-range links for each data point. Furthermore, each data point
is acted upon not only by its k nearest neighbors but also by r long-range
neighbors, through fields they jointly establish in space, so it takes a step
along the direction of the vector sum of all fields. More importantly, these
long-range links provide some hidden information for each data point as it
moves, and at the same time accelerate its convergence to a center. As they
move in space according to the proposed model, data points that belong to the
same class gradually converge to the same position, whereas those that belong
to different classes move away from one another. Consequently, the
experimental results have demonstrated that data
points in datasets are clustered reasonably and efficiently, and the rates of
convergence of clustering algorithms are fast enough. Moreover, the comparison
with other algorithms also provides an indication of the effectiveness of the
proposed approach.
| [
"['Qiang Li' 'Yan He' 'Jing-ping Jiang']",
"Qiang Li, Yan He, Jing-ping Jiang"
] |
cs.LG cs.CV cs.GT nlin.AO | 10.1016/j.eswa.2010.02.050 | 0812.5064 | null | null | http://arxiv.org/abs/0812.5064v2 | 2010-03-19T13:30:08Z | 2008-12-30T13:22:31Z | A Novel Clustering Algorithm Based Upon Games on Evolving Network | This paper introduces a model based upon games on an evolving network, and
develops three clustering algorithms according to it. In the clustering
algorithms, data points for clustering are regarded as players who can make
decisions in games. On the network describing relationships among data points,
an edge-removing-and-rewiring (ERR) function is employed to explore in a
neighborhood of a data point, which removes edges connecting to neighbors with
small payoffs, and creates new edges to neighbors with larger payoffs. As such,
the connections among data points vary over time. During the evolution of
network, some strategies are spread in the network. As a consequence, clusters
are formed automatically, in which data points with the same evolutionarily
stable strategy are collected as a cluster, so the number of evolutionarily
stable strategies indicates the number of clusters. Moreover, the experimental
results have demonstrated that data points in datasets are clustered reasonably
and efficiently, and the comparison with other algorithms also provides an
indication of the effectiveness of the proposed algorithms.
| [
"Qiang Li, Zhuo Chen, Yan He, Jing-ping Jiang",
"['Qiang Li' 'Zhuo Chen' 'Yan He' 'Jing-ping Jiang']"
] |
cs.IT cs.LG math.IT | null | 0901.0252 | null | null | http://arxiv.org/pdf/0901.0252v1 | 2009-01-02T16:46:05Z | 2009-01-02T16:46:05Z | MIMO decoding based on stochastic reconstruction from multiple
projections | Least squares (LS) fitting is one of the most fundamental techniques in
science and engineering. It is used to estimate parameters from multiple noisy
observations. In many problems the parameters are known a-priori to be bounded
integer valued, or they come from a finite set of values on an arbitrary finite
lattice. In this case, finding the closest vector becomes an NP-hard problem,
known as integer least squares (ILS). In this paper we propose a novel
algorithm, the Tomographic Least Squares Decoder (TLSD), that not only solves
the ILS problem better than other sub-optimal techniques, but is also capable
of providing the a-posteriori probability
distribution for each element in the solution vector. The algorithm is based on
reconstruction of the vector from multiple two-dimensional projections. The
projections are carefully chosen to provide low computational complexity.
Unlike other iterative techniques, such as belief propagation, the proposed
algorithm has guaranteed convergence. We also provide simulated experiments
comparing the algorithm to other sub-optimal algorithms.
| [
"['Amir Leshem' 'Jacob Goldberger']",
"Amir Leshem and Jacob Goldberger"
] |
cs.LG | null | 0901.0753 | null | null | http://arxiv.org/pdf/0901.0753v1 | 2009-01-07T04:36:58Z | 2009-01-07T04:36:58Z | Distributed Preemption Decisions: Probabilistic Graphical Model,
Algorithm and Near-Optimality | Cooperative decision making is a vision of future network management and
control. Distributed connection preemption is an important example where nodes
can make intelligent decisions on allocating resources and controlling traffic
flows for multi-class service networks. A challenge is that nodal decisions are
spatially dependent as traffic flows traverse multiple nodes in a network.
Hence the performance-complexity trade-off becomes important, i.e., how
accurate decisions are versus how much information is exchanged among nodes.
Connection preemption is known to be NP-complete. Centralized preemption is
optimal but computationally intractable. Decentralized preemption is
computationally efficient but may result in poor performance. This work
investigates distributed preemption where nodes decide whether and which flows
to preempt using only local information exchange with neighbors. We develop,
based on the probabilistic graphical models, a near-optimal distributed
algorithm. The algorithm is used by each node to make collectively near-optimal
preemption decisions. We study trade-offs between near-optimal performance and
complexity that corresponds to the amount of information-exchange of the
distributed algorithm. The algorithm is validated by both analysis and
simulation.
| [
"['Sung-eok Jeon' 'Chuanyi Ji']",
"Sung-eok Jeon and Chuanyi Ji"
] |
cs.LG cs.CV | null | 0901.0760 | null | null | http://arxiv.org/pdf/0901.0760v2 | 2009-12-09T07:18:30Z | 2009-01-07T06:47:47Z | A Theoretical Analysis of Joint Manifolds | The emergence of low-cost sensor architectures for diverse modalities has
made it possible to deploy sensor arrays that capture a single event from a
large number of vantage points and using multiple modalities. In many
scenarios, these sensors acquire very high-dimensional data such as audio
signals, images, and video. To cope with such high-dimensional data, we
typically rely on low-dimensional models. Manifold models provide a
particularly powerful model that captures the structure of high-dimensional
data when it is governed by a low-dimensional set of parameters. However, these
models do not typically take into account dependencies among multiple sensors.
We thus propose a new joint manifold framework for data ensembles that exploits
such dependencies. We show that simple algorithms can exploit the joint
manifold structure to improve their performance on standard signal processing
applications. Additionally, recent results concerning dimensionality reduction
for manifolds enable us to formulate a network-scalable data compression scheme
that uses random projections of the sensed data. This scheme efficiently fuses
the data from all sensors through the addition of such projections, regardless
of the data modalities and dimensions.
| [
"Mark A. Davenport, Chinmay Hegde, Marco F. Duarte, and Richard G.\n Baraniuk",
"['Mark A. Davenport' 'Chinmay Hegde' 'Marco F. Duarte'\n 'Richard G. Baraniuk']"
] |
cs.IT cs.LG math.IT | 10.1109/TIT.2009.2015987 | 0901.1904 | null | null | http://arxiv.org/abs/0901.1904v1 | 2009-01-13T22:55:52Z | 2009-01-13T22:55:52Z | Joint universal lossy coding and identification of stationary mixing
sources with general alphabets | We consider the problem of joint universal variable-rate lossy coding and
identification for parametric classes of stationary $\beta$-mixing sources with
general (Polish) alphabets. Compression performance is measured in terms of
Lagrangians, while identification performance is measured by the variational
distance between the true source and the estimated source. Provided that the
sources are mixing at a sufficiently fast rate and satisfy certain smoothness
and Vapnik-Chervonenkis learnability conditions, it is shown that, for bounded
metric distortions, there exist universal schemes for joint lossy compression
and identification whose Lagrangian redundancies converge to zero as $\sqrt{V_n
\log n /n}$ as the block length $n$ tends to infinity, where $V_n$ is the
Vapnik-Chervonenkis dimension of a certain class of decision regions defined by
the $n$-dimensional marginal distributions of the sources; furthermore, for
each $n$, the decoder can identify the $n$-dimensional marginal of the active
source up to a ball of radius $O(\sqrt{V_n\log n/n})$ in variational distance,
eventually with probability one. The results are supplemented by several
examples of parametric sources satisfying the regularity conditions.
| [
"Maxim Raginsky",
"['Maxim Raginsky']"
] |
cs.IT cs.LG math.IT | null | 0901.1905 | null | null | http://arxiv.org/pdf/0901.1905v2 | 2009-04-30T15:31:14Z | 2009-01-13T23:03:26Z | Achievability results for statistical learning under communication
constraints | The problem of statistical learning is to construct an accurate predictor of
a random variable as a function of a correlated random variable on the basis of
an i.i.d. training sample from their joint distribution. Allowable predictors
are constrained to lie in some specified class, and the goal is to approach
asymptotically the performance of the best predictor in the class. We consider
two settings in which the learning agent only has access to rate-limited
descriptions of the training data, and present information-theoretic bounds on
the predictor performance achievable in the presence of these communication
constraints. Our proofs do not assume any separation structure between
compression and learning and rely on a new class of operational criteria
specifically tailored to joint design of encoders and learning algorithms in
rate-constrained settings.
| [
"Maxim Raginsky",
"['Maxim Raginsky']"
] |
cs.LG | null | 0901.2376 | null | null | http://arxiv.org/pdf/0901.2376v1 | 2009-01-16T01:00:39Z | 2009-01-16T01:00:39Z | A Limit Theorem in Singular Regression Problem | In statistical problems, a set of parameterized probability distributions is
used to estimate the true probability distribution. If the Fisher information
matrix at the true distribution is singular, then it has remained unknown what
we can estimate about the true distribution from random samples. In this paper,
we study a singular regression problem and prove a limit theorem which shows
the relation between the singular regression problem and two birational
invariants, a real log canonical threshold and a singular fluctuation. The
obtained theorem has an important application to statistics, because it enables
us to estimate the generalization error from the training error without any
knowledge of the true probability distribution.
| [
"['Sumio Watanabe']",
"Sumio Watanabe"
] |
cs.LG stat.ML | null | 0901.3150 | null | null | http://arxiv.org/pdf/0901.3150v4 | 2009-09-17T09:26:46Z | 2009-01-20T21:32:57Z | Matrix Completion from a Few Entries | Let M be a random (alpha n) x n matrix of rank r<<n, and assume that a
uniformly random subset E of its entries is observed. We describe an efficient
algorithm that reconstructs M from |E| = O(rn) observed entries with relative
root mean square error RMSE <= C(rn/|E|)^0.5 . Further, if r=O(1), M can be
reconstructed exactly from |E| = O(n log(n)) entries. These results apply
beyond random matrices to general low-rank incoherent matrices.
This settles (in the case of bounded rank) a question left open by Candes and
Recht and improves over the guarantees for their reconstruction algorithm. The
complexity of our algorithm is O(|E|r log(n)), which opens the way to its use
for massive data sets. In the process of proving these statements, we obtain a
generalization of a celebrated result by Friedman-Kahn-Szemeredi and Feige-Ofek
on the spectrum of sparse random matrices.
| [
"['Raghunandan H. Keshavan' 'Andrea Montanari' 'Sewoong Oh']",
"Raghunandan H. Keshavan, Andrea Montanari, and Sewoong Oh"
] |
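The first step of the reconstruction procedure alluded to above can be
sketched in a few numpy lines: zero-fill the unobserved entries, rescale by
the inverse sampling rate, and project onto rank r via truncated SVD. The
paper additionally trims over-represented rows/columns and refines the
estimate by local minimization; both steps are omitted in this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, p_obs = 200, 3, 0.2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r target
mask = rng.random((n, n)) < p_obs                              # observed set E

M_E = np.where(mask, M, 0.0) / p_obs    # zero-filled, rescaled observations
U, s, Vt = np.linalg.svd(M_E, full_matrices=False)
M_hat = U[:, :r] * s[:r] @ Vt[:r]       # rank-r spectral projection

rel_err = np.linalg.norm(M_hat - M) / np.linalg.norm(M)
print(f"relative error of the spectral estimate: {rel_err:.3f}")
```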
cs.LG stat.ML | null | 0901.3202 | null | null | http://arxiv.org/pdf/0901.3202v1 | 2009-01-21T08:05:19Z | 2009-01-21T08:05:19Z | Model-Consistent Sparse Estimation through the Bootstrap | We consider the least-square linear regression problem with regularization by
the $\ell^1$-norm, a problem usually referred to as the Lasso. In this paper,
we first present a detailed asymptotic analysis of model consistency of the
Lasso in low-dimensional settings. For various decays of the regularization
parameter, we compute asymptotic equivalents of the probability of correct
model selection. For a specific rate decay, we show that the Lasso selects all
the variables that should enter the model with probability tending to one
exponentially fast, while it selects all other variables with strictly positive
probability. We show that this property implies that if we run the Lasso for
several bootstrapped replications of a given sample, then intersecting the
supports of the Lasso bootstrap estimates leads to consistent model selection.
This novel variable selection procedure, referred to as the Bolasso, is
extended to high-dimensional settings by a provably consistent two-step
procedure.
| [
"['Francis Bach']",
"Francis Bach (INRIA Rocquencourt)"
] |
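The Bolasso procedure described above is easy to sketch with scikit-learn: run
the Lasso on bootstrap resamples and intersect the supports. The
regularization value, threshold, and number of resamples below are
illustrative rather than the values studied in the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

def bolasso_support(X, y, alpha=0.05, n_boot=32, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    support = None
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)            # bootstrap resample
        coef = Lasso(alpha=alpha).fit(X[idx], y[idx]).coef_
        sel = set(np.flatnonzero(np.abs(coef) > 1e-8))
        support = sel if support is None else support & sel  # intersect
    return sorted(support)

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 12))
w = np.zeros(12); w[[0, 3, 7]] = [1.5, -2.0, 1.0]
y = X @ w + 0.1 * rng.standard_normal(200)
print("selected variables:", bolasso_support(X, y))  # ideally [0, 3, 7]
```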
cs.LG cs.CV | 10.1109/TPAMI.2010.47 | 0901.3590 | null | null | null | null | null | On the Dual Formulation of Boosting Algorithms | We study boosting algorithms from a new perspective. We show that the
Lagrange dual problems of AdaBoost, LogitBoost and soft-margin LPBoost with
generalized hinge loss are all entropy maximization problems. By looking at the
dual problems of these boosting algorithms, we show that the success of
boosting algorithms can be understood in terms of maintaining a better margin
distribution by maximizing margins and at the same time controlling the margin
variance. We also theoretically prove that, approximately, AdaBoost maximizes
the average margin, instead of the minimum margin. The duality formulation also
enables us to develop column generation based optimization algorithms, which
are totally corrective. We show that they exhibit almost identical
classification results to that of standard stage-wise additive boosting
algorithms but with much faster convergence rates. Therefore fewer weak
classifiers are needed to build the ensemble using our proposed optimization
technique.
| [
"Chunhua Shen and Hanxi Li"
] |
cs.LG | 10.1075/is.12.1.05fon | 0901.4012 | null | null | http://arxiv.org/abs/0901.4012v3 | 2009-11-28T20:11:11Z | 2009-01-26T15:12:13Z | Cross-situational and supervised learning in the emergence of
communication | Scenarios for the emergence or bootstrap of a lexicon involve the repeated
interaction between at least two agents who must reach a consensus on how to
name N objects using H words. Here we consider minimal models of two types of
learning algorithms: cross-situational learning, in which the individuals
determine the meaning of a word by looking for something in common across all
observed uses of that word, and supervised operant conditioning learning, in
which there is strong feedback between individuals about the intended meaning
of the words. Despite the stark differences between these learning schemes, we
show that they yield the same communication accuracy in the realistic limits of
large N and H, which coincides with the result of the classical occupancy
problem of randomly assigning N objects to H words.
| [
"['José F. Fontanari' 'Angelo Cangelosi']",
"Jos\\'e F. Fontanari and Angelo Cangelosi"
] |
math.ST cs.LG stat.ML stat.TH | null | 0901.4137 | null | null | http://arxiv.org/pdf/0901.4137v1 | 2009-01-26T23:05:06Z | 2009-01-26T23:05:06Z | Practical Robust Estimators for the Imprecise Dirichlet Model | Walley's Imprecise Dirichlet Model (IDM) for categorical i.i.d. data extends
the classical Dirichlet model to a set of priors. It overcomes several
fundamental problems which other approaches to uncertainty suffer from. Yet, to
be useful in practice, one needs efficient ways for computing the
imprecise (i.e., robust) sets or intervals. The main objective of this work is to
derive exact, conservative, and approximate, robust and credible interval
estimates under the IDM for a large class of statistical estimators, including
the entropy and mutual information.
| [
"Marcus Hutter",
"['Marcus Hutter']"
] |
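The basic IDM quantities are simple to compute: for counts n_1..n_K out of N
observations and hyperparameter s, the lower and upper probabilities of
category j are n_j/(N+s) and (n_j+s)/(N+s). The sketch below computes these
interval estimates; deriving robust intervals for derived quantities such as
the entropy or mutual information, which is the subject of the paper, requires
optimizing over the prior set and is not shown.

```python
def idm_intervals(counts, s=1.0):
    """Lower/upper category probabilities under Walley's IDM with parameter s."""
    N = sum(counts)
    return [(n_j / (N + s), (n_j + s) / (N + s)) for n_j in counts]

# three categories observed 7, 2 and 1 times:
for lo, hi in idm_intervals([7, 2, 1], s=1.0):
    print(f"[{lo:.3f}, {hi:.3f}]")
```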
cs.IT cs.LG math.IT stat.CO | 10.1109/ISIT.2009.5205777 | 0901.4192 | null | null | http://arxiv.org/abs/0901.4192v3 | 2009-07-04T03:25:13Z | 2009-01-27T08:24:57Z | Fixing Convergence of Gaussian Belief Propagation | Gaussian belief propagation (GaBP) is an iterative message-passing algorithm
for inference in Gaussian graphical models. It is known that when GaBP
converges it converges to the correct MAP estimate of the Gaussian random
vector and simple sufficient conditions for its convergence have been
established. In this paper we develop a double-loop algorithm for forcing
convergence of GaBP. Our method computes the correct MAP estimate even in cases
where standard GaBP would not have converged. We further extend this
construction to compute least-squares solutions of over-constrained linear
systems. We believe that our construction has numerous applications, since the
GaBP algorithm is linked to solution of linear systems of equations, which is a
fundamental problem in computer science and engineering. As a case study, we
discuss the linear detection problem. We show that using our new construction,
we are able to force convergence of Montanari's linear detection algorithm, in
cases where it would originally fail. As a consequence, we are able to increase
significantly the number of users that can transmit concurrently.
| [
"['Jason K. Johnson' 'Danny Bickson' 'Danny Dolev']",
"Jason K. Johnson, Danny Bickson and Danny Dolev"
] |
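For context, a sketch of plain GaBP for solving $Ax=b$, the baseline whose convergence the paper's double-loop construction repairs. The message updates follow the standard GaBP equations as I recall them; the test matrix is an arbitrary tridiagonal (tree-structured) system, on which GaBP is exact.

```python
import numpy as np

def gabp(A, b, iters=50):
    # Plain Gaussian belief propagation for A x = b (A symmetric).
    n = len(b)
    nbrs = [[j for j in range(n) if j != i and A[i, j] != 0] for i in range(n)]
    P = np.zeros((n, n))   # P[i, j]: precision of message i -> j
    M = np.zeros((n, n))   # M[i, j]: mean of message i -> j
    for _ in range(iters):
        for i in range(n):
            for j in nbrs[i]:
                # aggregate all incoming messages except the one from j
                Pij = A[i, i] + sum(P[k, i] for k in nbrs[i] if k != j)
                mij = (b[i] + sum(P[k, i] * M[k, i]
                                  for k in nbrs[i] if k != j)) / Pij
                P[i, j] = -A[i, j] ** 2 / Pij
                M[i, j] = -A[i, j] * mij / P[i, j]
    Pi = [A[i, i] + sum(P[k, i] for k in nbrs[i]) for i in range(n)]
    return np.array([(b[i] + sum(P[k, i] * M[k, i] for k in nbrs[i])) / Pi[i]
                     for i in range(n)])

A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])  # a chain
b = np.array([1.0, 2.0, 3.0])
print(gabp(A, b), np.linalg.solve(A, b))   # the two should agree on a tree
```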
cs.LG cs.DM | null | 0901.4876 | null | null | http://arxiv.org/pdf/0901.4876v1 | 2009-01-30T12:44:29Z | 2009-01-30T12:44:29Z | Non-Confluent NLC Graph Grammar Inference by Compressing Disjoint
Subgraphs | Grammar inference deals with determining (preferably simple) models/grammars
consistent with a set of observations. There is a large body of research on
grammar inference within the theory of formal languages. However, there is
surprisingly little known on grammar inference for graph grammars. In this
paper we take a further step in this direction and work within the framework of
node label controlled (NLC) graph grammars. Specifically, we characterize,
given a set of disjoint and isomorphic subgraphs of a graph $G$, whether or not
there is an NLC graph grammar rule that can generate these subgraphs to obtain
$G$. This generalizes previous results by assuming that the set of isomorphic
subgraphs is disjoint instead of non-touching. This leads naturally to consider
the more involved ``non-confluent'' graph grammar rules.
| [
"Hendrik Blockeel, Robert Brijder",
"['Hendrik Blockeel' 'Robert Brijder']"
] |
stat.ML cs.LG | null | 0902.0392 | null | null | http://arxiv.org/pdf/0902.0392v2 | 2011-09-21T08:13:36Z | 2009-02-02T22:37:23Z | Tree Exploration for Bayesian RL Exploration | Research in reinforcement learning has produced algorithms for optimal
decision making under uncertainty that fall within two main types. The first
employs a Bayesian framework, where optimality improves with increased
computational time. This is because the resulting planning task takes the form
of a dynamic programming problem on a belief tree with an infinite number of
states. The second type employs relatively simple algorithms which are shown to
suffer small regret within a distribution-free framework. This paper presents a
lower bound and a high probability upper bound on the optimal value function
for the nodes in the Bayesian belief tree, which are analogous to similar
bounds in POMDPs. The bounds are then used to create more efficient strategies
for exploring the tree. The resulting algorithms are compared with the
distribution-free algorithm UCB1, as well as a simpler baseline algorithm on
multi-armed bandit problems.
| [
"['Christos Dimitrakakis']",
"Christos Dimitrakakis"
] |
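Since the paper above benchmarks against UCB1, here is a compact sketch of UCB1 on Bernoulli bandits (Auer et al.'s index: empirical mean plus $\sqrt{2\ln t/n_i}$). The arm means, horizon, and seed are arbitrary; the Bayesian belief-tree algorithms from the abstract are not reproduced here.

```python
import numpy as np

def ucb1_regret(means, horizon=10000, seed=0):
    # UCB1: play each arm once, then pick the arm maximizing
    # empirical mean + sqrt(2 ln t / n_i).
    rng = np.random.default_rng(seed)
    k = len(means)
    counts = np.zeros(k)
    sums = np.zeros(k)
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1
        else:
            arm = int(np.argmax(sums / counts + np.sqrt(2 * np.log(t) / counts)))
        counts[arm] += 1
        sums[arm] += rng.random() < means[arm]   # Bernoulli reward
    return float(counts @ (max(means) - np.asarray(means)))  # pseudo-regret

print(ucb1_regret([0.4, 0.5, 0.6]))
```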
cs.AI cs.LG | null | 0902.1227 | null | null | http://arxiv.org/pdf/0902.1227v2 | 2009-12-11T06:18:30Z | 2009-02-07T07:50:02Z | Discovering general partial orders in event streams | Frequent episode discovery is a popular framework for pattern discovery in
event streams. An episode is a partially ordered set of nodes with each node
associated with an event type. Efficient (and separate) algorithms exist for
episode discovery when the associated partial order is total (serial episode)
and trivial (parallel episode). In this paper, we propose efficient algorithms
for discovering frequent episodes with general partial orders. These algorithms
can be easily specialized to discover serial or parallel episodes. Also, the
algorithms are flexible enough to be specialized for mining in the space of
certain interesting subclasses of partial orders. We point out that there is an
inherent combinatorial explosion in frequent partial order mining and most
importantly, frequency alone is not a sufficient measure of interestingness. We
propose a new interestingness measure for general partial order episodes and a
discovery method based on this measure, for filtering out uninteresting partial
orders. Simulations demonstrate the effectiveness of our algorithms.
| [
"['Avinash Achar' 'Srivatsan Laxman' 'Raajay Viswanathan' 'P. S. Sastry']",
"Avinash Achar, Srivatsan Laxman, Raajay Viswanathan and P. S. Sastry"
] |
cs.LG | null | 0902.1258 | null | null | http://arxiv.org/pdf/0902.1258v1 | 2009-02-07T18:01:09Z | 2009-02-07T18:01:09Z | Extraction de concepts sous contraintes dans des donn\'ees d'expression
de g\`enes | In this paper, we propose a technique to extract constrained formal concepts.
| [
"Baptiste Jeudy (LAHC), Fran\\c{c}ois Rioult (GREYC)",
"['Baptiste Jeudy' 'François Rioult']"
] |
cs.LG | null | 0902.1259 | null | null | http://arxiv.org/pdf/0902.1259v1 | 2009-02-07T18:01:56Z | 2009-02-07T18:01:56Z | Database Transposition for Constrained (Closed) Pattern Mining | Recently, different works proposed a new way to mine patterns in databases
with pathological size. For example, experiments in genome biology usually
provide databases with thousands of attributes (genes) but only tens of objects
(experiments). In this case, mining the "transposed" database runs through a
smaller search space, and the Galois connection allows one to infer the closed
patterns of the original database. We focus here on constrained pattern mining
for those unusual databases and give a theoretical framework for database and
constraint transposition. We discuss the properties of constraint transposition
and look into classical constraints. We then address the problem of generating
the closed patterns of the original database satisfying the constraint,
starting from those mined in the "transposed" database. Finally, we show how to
generate all the patterns satisfying the constraint from the closed ones.
| [
"Baptiste Jeudy (LAHC, EURISE), Fran\\c{c}ois Rioult (GREYC)",
"['Baptiste Jeudy' 'François Rioult']"
] |
cs.LG | null | 0902.1284 | null | null | http://arxiv.org/pdf/0902.1284v2 | 2009-06-02T16:23:28Z | 2009-02-08T02:30:06Z | Multi-Label Prediction via Compressed Sensing | We consider multi-label prediction problems with large output spaces under
the assumption of output sparsity -- that the target (label) vectors have small
support. We develop a general theory for a variant of the popular error
correcting output code scheme, using ideas from compressed sensing for
exploiting this sparsity. The method can be regarded as a simple reduction from
multi-label regression problems to binary regression problems. We show that the
number of subproblems need only be logarithmic in the total number of possible
labels, making this approach radically more efficient than others. We also
state and prove robustness guarantees for this method in the form of regret
transform bounds (in general), and also provide a more detailed analysis for
the linear prediction setting.
| [
"Daniel Hsu, Sham M. Kakade, John Langford, Tong Zhang",
"['Daniel Hsu' 'Sham M. Kakade' 'John Langford' 'Tong Zhang']"
] |
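A sketch of the encode/decode pipeline described in the abstract above, with the learning step stubbed out: in the full reduction one trains $m$ regressors to predict the projected label vector $z = Ay$ from the input $x$; here $z$ is taken as given and decoded with orthogonal matching pursuit. Dimensions and sparsity are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
L, m, k = 1000, 60, 3                     # labels, measurements, sparsity
A = rng.standard_normal((m, L)) / np.sqrt(m)   # random projection matrix

y = np.zeros(L)
y[rng.choice(L, size=k, replace=False)] = 1.0  # k-sparse label vector
z = A @ y   # in the reduction, m regressors predict z from the input x

def omp(A, z, k):
    # Orthogonal matching pursuit: greedily add the column most correlated
    # with the residual, then refit on the support by least squares.
    support, residual = [], z.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], z, rcond=None)
        residual = z - A[:, support] @ coef
    out = np.zeros(A.shape[1])
    out[support] = coef
    return out

y_hat = omp(A, z, k)
print(sorted(np.flatnonzero(y)))            # true labels
print(sorted(np.flatnonzero(y_hat > 0.5)))  # decoded labels
```

Note that $m = 60 \ll L = 1000$ measurements suffice here, matching the logarithmic scaling the abstract highlights.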
cs.MA cs.LG | null | 0902.2751 | null | null | http://arxiv.org/pdf/0902.2751v4 | 2009-03-01T10:35:34Z | 2009-02-16T18:39:53Z | Object Classification by means of Multi-Feature Concept Learning in a
Multi Expert-Agent System | Classifying objects into classes of concepts is an essential, and often
challenging, task in many applications. We discuss a solution based on
multi-agent systems: a kernel of expert agents, each specialized in several
classes, is consulted by a central agent to decide the classification of a
given object. The central agent moderates this kernel, managing the querying
agents for each decision problem by means of a data-header-like feature set.
The expert agents cooperate on the concepts related to the classes involved in
a classification decision, and may influence each other's results on a given
query object in a multi-agent learning approach. This leads to online feature
learning through the consulting process. Performance is shown to improve
considerably over prior approaches, while the system's message-passing
overhead is reduced by involving fewer agents, and the agents' expertise
further improves the performance and operability of the system.
| [
"['Nima Mirbakhsh' 'Arman Didandeh']",
"Nima Mirbakhsh, Arman Didandeh"
] |
cs.AI cs.LG | null | 0902.3176 | null | null | http://arxiv.org/pdf/0902.3176v4 | 2010-02-03T15:03:58Z | 2009-02-18T16:01:24Z | Error-Correcting Tournaments | We present a family of pairwise tournaments reducing $k$-class classification
to binary classification. These reductions are provably robust against a
constant fraction of binary errors. The results improve on the PECOC
construction \cite{SECOC} with an exponential improvement in computation, from
$O(k)$ to $O(\log_2 k)$, and the removal of a square root in the regret
dependence, matching the best possible computation and regret up to a constant.
| [
"Alina Beygelzimer, John Langford, and Pradeep Ravikumar",
"['Alina Beygelzimer' 'John Langford' 'Pradeep Ravikumar']"
] |
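A toy, top-down single-elimination sketch conveying why tournament-style reductions can predict with $O(\log_2 k)$ binary classifier evaluations. This is not the paper's error-correcting construction, which adds redundant tournaments for robustness, and the stand-in "classifiers" here simply know the answer.

```python
class Bracket:
    # Single-elimination bracket over labels; each internal node holds a
    # binary classifier predicting which child's champion wins on input x.
    def __init__(self, labels, make_clf):
        if len(labels) == 1:
            self.leaf, self.clf = labels[0], None
            return
        self.leaf = None
        mid = len(labels) // 2
        self.left = Bracket(labels[:mid], make_clf)
        self.right = Bracket(labels[mid:], make_clf)
        self.clf = make_clf(labels[:mid], labels[mid:])

    def predict(self, x):
        # Top-down descent: one classifier evaluation per level, O(log2 k).
        if self.leaf is not None:
            return self.leaf
        return (self.right if self.clf(x) else self.left).predict(x)

def toy_clf(left_labels, right_labels):
    # Stand-in for a trained binary classifier: here x is the true label.
    right = set(right_labels)
    return lambda x: x in right

tree = Bracket(list(range(16)), toy_clf)
print(tree.predict(11))   # 4 classifier calls instead of 15 pairwise matches
```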
cs.LG cs.DM cs.DS | null | 0902.3223 | null | null | http://arxiv.org/pdf/0902.3223v1 | 2009-02-18T19:12:59Z | 2009-02-18T19:12:59Z | An Exact Algorithm for the Stratification Problem with Proportional
Allocation | We report a new exact solution method for the statistical stratification problem
under proportional sampling allocation among strata. Consider a finite
population of N units, a random sample of n units selected from this population
and a number L of strata. Thus, we have to define which units belong to each
stratum so as to minimize the variance of a total estimator for one desired
variable of interest in each stratum, and consequently reduce the overall
variance for such quantity. In order to solve this problem, an exact algorithm
based on the concept of minimal path in a graph is proposed and assessed.
Computational results using real data from IBGE (Brazilian Central Statistical
Office) are provided.
| [
"['Jose Brito' 'Mauricio Lila' 'Flavio Montenegro' 'Nelson Maculan']",
"Jose Brito, Mauricio Lila, Flavio Montenegro, Nelson Maculan"
] |
cs.LG | null | 0902.3373 | null | null | http://arxiv.org/pdf/0902.3373v1 | 2009-02-19T13:47:53Z | 2009-02-19T13:47:53Z | Learning rules from multisource data for cardiac monitoring | This paper formalises the concept of learning symbolic rules from multisource
data in a cardiac monitoring context. Our sources, electrocardiograms and
arterial blood pressure measures, describe cardiac behaviours from different
viewpoints. To learn interpretable rules, we use an Inductive Logic Programming
(ILP) method. We develop an original strategy to cope with the dimensionality
issues caused by using this ILP technique on a rich multisource language. The
results show that our method greatly improves the feasibility and the
efficiency of the process while staying accurate. They also confirm the
benefits of using multiple sources to improve the diagnosis of cardiac
arrhythmias.
| [
"Marie-Odile Cordier (INRIA - Irisa), Elisa Fromont (LAHC), Ren\\'e\n Quiniou (INRIA - Irisa)",
"['Marie-Odile Cordier' 'Elisa Fromont' 'René Quiniou']"
] |
cs.LG cs.AI | null | 0902.3430 | null | null | http://arxiv.org/pdf/0902.3430v3 | 2023-11-30T22:47:15Z | 2009-02-19T18:42:16Z | Domain Adaptation: Learning Bounds and Algorithms | This paper addresses the general problem of domain adaptation which arises in
a variety of applications where the distribution of the labeled sample
available somewhat differs from that of the test data. Building on previous
work by Ben-David et al. (2007), we introduce a novel distance between
distributions, discrepancy distance, that is tailored to adaptation problems
with arbitrary loss functions. We give Rademacher complexity bounds for
estimating the discrepancy distance from finite samples for different loss
functions. Using this distance, we derive novel generalization bounds for
domain adaptation for a wide family of loss functions. We also present a series
of novel adaptation bounds for large classes of regularization-based
algorithms, including support vector machines and kernel ridge regression based
on the empirical discrepancy. This motivates our analysis of the problem of
minimizing the empirical discrepancy for various loss functions for which we
also give novel algorithms. We report the results of preliminary experiments
that demonstrate the benefits of our discrepancy minimization algorithms for
domain adaptation.
| [
"Yishay Mansour, Mehryar Mohri, Afshin Rostamizadeh",
"['Yishay Mansour' 'Mehryar Mohri' 'Afshin Rostamizadeh']"
] |
stat.ML cs.LG math.ST stat.TH | null | 0902.3526 | null | null | http://arxiv.org/pdf/0902.3526v2 | 2009-03-27T14:50:53Z | 2009-02-20T07:39:13Z | Online Multi-task Learning with Hard Constraints | We discuss multi-task online learning when a decision maker has to deal
simultaneously with M tasks. The tasks are related, which is modeled by
imposing that the M-tuple of actions taken by the decision maker needs to
satisfy certain constraints. We give natural examples of such restrictions and
then discuss a general class of tractable constraints, for which we introduce
computationally efficient ways of selecting actions, essentially by reducing to
an on-line shortest path problem. We briefly discuss "tracking" and "bandit"
versions of the problem and extend the model in various ways, including
non-additive global losses and uncountably infinite sets of tasks.
| [
"['Gabor Lugosi' 'Omiros Papaspiliopoulos' 'Gilles Stoltz']",
"Gabor Lugosi, Omiros Papaspiliopoulos, Gilles Stoltz (DMA, GREGH)"
] |
cs.LG | null | 0902.3846 | null | null | http://arxiv.org/pdf/0902.3846v1 | 2009-02-23T04:05:48Z | 2009-02-23T04:05:48Z | Uniqueness of Low-Rank Matrix Completion by Rigidity Theory | The problem of completing a low-rank matrix from a subset of its entries is
often encountered in the analysis of incomplete data sets exhibiting an
underlying factor model with applications in collaborative filtering, computer
vision and control. Most recent work has focused on constructing efficient
algorithms for exact or approximate recovery of the missing matrix entries and
proving lower bounds for the number of known entries that guarantee a
successful recovery with high probability. A related problem from both the
mathematical and algorithmic point of view is the distance geometry problem of
realizing points in a Euclidean space from a given subset of their pairwise
distances. Rigidity theory answers basic questions regarding the uniqueness of
the realization satisfying a given partial set of distances. We observe that
basic ideas and tools of rigidity theory can be adapted to determine uniqueness
of low-rank matrix completion, where inner products play the role that
distances play in rigidity theory. This observation leads to an efficient
randomized algorithm for testing both local and global unique completion.
Crucial to our analysis is a new matrix, which we call the completion matrix,
that serves as the analogue of the rigidity matrix.
| [
"Amit Singer, Mihai Cucuringu",
"['Amit Singer' 'Mihai Cucuringu']"
] |
cs.LG | null | 0902.4127 | null | null | http://arxiv.org/pdf/0902.4127v2 | 2009-03-23T16:28:41Z | 2009-02-24T11:47:03Z | Prediction with expert evaluators' advice | We introduce a new protocol for prediction with expert advice in which each
expert evaluates the learner's and his own performance using a loss function
that may change over time and may be different from the loss functions used by
the other experts. The learner's goal is to perform better or not much worse
than each expert, as evaluated by that expert, for all experts simultaneously.
If the loss functions used by the experts are all proper scoring rules and all
mixable, we show that the defensive forecasting algorithm enjoys the same
performance guarantee as that attainable by the Aggregating Algorithm in the
standard setting and known to be optimal. This result is also applied to the
case of "specialist" (or "sleeping") experts. In this case, the defensive
forecasting algorithm reduces to a simple modification of the Aggregating
Algorithm.
| [
"Alexey Chernov and Vladimir Vovk",
"['Alexey Chernov' 'Vladimir Vovk']"
] |
cs.LG | null | 0902.4228 | null | null | http://arxiv.org/pdf/0902.4228v1 | 2009-02-24T20:38:32Z | 2009-02-24T20:38:32Z | Multiplicative updates For Non-Negative Kernel SVM | We present multiplicative updates for solving hard and soft margin support
vector machines (SVM) with non-negative kernels. They follow as a natural
extension of the updates for non-negative matrix factorization. No additional
parameter setting, such as choosing a learning rate, is required. Experiments
demonstrate rapid convergence to good classifiers. We analyze the rates of
asymptotic convergence of the updates and establish tight bounds. We test the
performance on several datasets using various non-negative kernels and report
generalization errors equivalent to those of a standard SVM.
| [
"Vamsi K. Potluru, Sergey M. Plis, Morten Morup, Vince D. Calhoun,\n Terran Lane",
"['Vamsi K. Potluru' 'Sergey M. Plis' 'Morten Morup' 'Vince D. Calhoun'\n 'Terran Lane']"
] |
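A sketch in the spirit of the abstract above, using the multiplicative non-negative-QP updates of Sha, Saul, and Lee for the bias-free hard-margin SVM dual $\min_{\alpha\ge 0} \frac{1}{2}\alpha^\top Q\alpha - \mathbf{1}^\top\alpha$ with $Q_{ij} = y_i y_j K_{ij}$; whether this matches the paper's exact update rule is an assumption on my part.

```python
import numpy as np

def mu_svm(K, y, iters=500):
    # Multiplicative updates for min_a 0.5 a'Qa - 1'a, a >= 0, Q = yy' * K.
    # With a non-negative kernel, Q+ holds same-class pairs and Q- the
    # opposite-class pairs; no learning rate is needed.
    Q = np.outer(y, y) * K
    Qp, Qm = np.maximum(Q, 0.0), np.maximum(-Q, 0.0)
    a = np.ones(len(y))
    for _ in range(iters):
        a *= (1.0 + np.sqrt(1.0 + 4.0 * (Qp @ a) * (Qm @ a))) / (2.0 * (Qp @ a))
    return a

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (20, 2)), rng.normal(1, 0.5, (20, 2))])
y = np.r_[-np.ones(20), np.ones(20)]
D2 = ((X[:, None] - X[None]) ** 2).sum(-1)
K = np.exp(-D2)                  # Gaussian kernel: non-negative entries
a = mu_svm(K, y)
pred = np.sign(K @ (a * y))      # bias-free decision values on training set
print((pred == y).mean())        # training accuracy on the toy data
```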
cs.LG cs.IT math.IT | null | 0903.0064 | null | null | http://arxiv.org/pdf/0903.0064v2 | 2009-04-19T04:18:30Z | 2009-02-28T11:17:12Z | Manipulation Robustness of Collaborative Filtering Systems | A collaborative filtering system recommends to users products that similar
users like. Collaborative filtering systems influence purchase decisions, and
hence have become targets of manipulation by unscrupulous vendors. We provide
theoretical and empirical results demonstrating that while common nearest
neighbor algorithms, which are widely used in commercial systems, can be highly
susceptible to manipulation, two classes of collaborative filtering algorithms
which we refer to as linear and asymptotically linear are relatively robust.
These results provide guidance for the design of future collaborative filtering
systems.
| [
"Xiang Yan, Benjamin Van Roy",
"['Xiang Yan' 'Benjamin Van Roy']"
] |
cs.LG | null | 0903.1125 | null | null | http://arxiv.org/pdf/0903.1125v1 | 2009-03-05T22:39:46Z | 2009-03-05T22:39:46Z | Efficient Human Computation | Collecting large labeled data sets is a laborious and expensive task, whose
scaling up requires division of the labeling workload between many teachers.
When the number of classes is large, miscorrespondences between the labels
given by the different teachers are likely to occur, which, in the extreme
case, may reach total inconsistency. In this paper we describe how globally
consistent labels can be obtained, despite the absence of teacher coordination,
and discuss the possible efficiency of this process in terms of human labor. We
define a notion of label efficiency, measuring the ratio between the number of
globally consistent labels obtained and the number of labels provided by
distributed teachers. We show that the efficiency depends critically on the
ratio alpha between the number of data instances seen by a single teacher, and
the number of classes. We suggest several algorithms for the distributed
labeling problem, and analyze their efficiency as a function of alpha. In
addition, we provide an upper bound on label efficiency for the case of
completely uncoordinated teachers, and show that efficiency approaches 0 as the
ratio between the number of labels each teacher provides and the number of
classes drops (i.e. alpha goes to 0).
| [
"Ran Gilad-Bachrach, Aharon Bar-Hillel, Liat Ein-Dor",
"['Ran Gilad-Bachrach' 'Aharon Bar-Hillel' 'Liat Ein-Dor']"
] |
cs.MA cs.GT cs.LG | null | 0903.2282 | null | null | http://arxiv.org/pdf/0903.2282v1 | 2009-03-12T21:49:36Z | 2009-03-12T21:49:36Z | Multiagent Learning in Large Anonymous Games | In large systems, it is important for agents to learn to act effectively, but
sophisticated multi-agent learning algorithms generally do not scale. An
alternative approach is to find restricted classes of games where simple,
efficient algorithms converge. It is shown that stage learning efficiently
converges to Nash equilibria in large anonymous games if best-reply dynamics
converge. Two features are identified that improve convergence. First, rather
than making learning more difficult, more agents are actually beneficial in
many settings. Second, providing agents with statistical information about the
behavior of others can significantly reduce the number of observations needed.
| [
"['Ian A. Kash' 'Eric J. Friedman' 'Joseph Y. Halpern']",
"Ian A. Kash, Eric J. Friedman, Joseph Y. Halpern"
] |
null | null | 0903.2299 | null | null | http://arxiv.org/pdf/0903.2299v3 | 2013-07-08T15:17:20Z | 2009-03-13T13:47:03Z | Differential Contrastive Divergence | This paper has been retracted. | [
"['David McAllester']"
] |
cs.LG cs.AI | null | 0903.2851 | null | null | http://arxiv.org/pdf/0903.2851v2 | 2010-01-18T23:58:51Z | 2009-03-16T20:48:33Z | A parameter-free hedging algorithm | We study the problem of decision-theoretic online learning (DTOL). Motivated
by practical applications, we focus on DTOL when the number of actions is very
large. Previous algorithms for learning in this framework have a tunable
learning rate parameter, and a barrier to using online learning in practical
applications is that it is not understood how to set this parameter optimally,
particularly when the number of actions is large.
In this paper, we offer a clean solution by proposing a novel and completely
parameter-free algorithm for DTOL. We introduce a new notion of regret, which
is more natural for applications with a large number of actions. We show that
our algorithm achieves good performance with respect to this new notion of
regret; in addition, it also achieves performance close to that of the best
bounds achieved by previous algorithms with optimally-tuned parameters,
according to previous notions of regret.
| [
"Kamalika Chaudhuri, Yoav Freund, Daniel Hsu",
"['Kamalika Chaudhuri' 'Yoav Freund' 'Daniel Hsu']"
] |
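A sketch of a parameter-free weighting rule as I recall it from this line of work: weights proportional to the derivative of the potential $\exp([R_i]_+^2/2c)$, with $c$ chosen so the average potential equals $e$. Treat the exact constants as assumptions rather than the paper's definitive algorithm.

```python
import numpy as np

def normalhedge_weights(R):
    # Weights proportional to d/dR of the potential exp([R]_+^2 / 2c),
    # with c > 0 chosen so that the average potential equals e.
    Rp = np.maximum(R, 0.0)
    if Rp.max() == 0.0:
        return np.full(len(R), 1.0 / len(R))
    lo, hi = Rp.max() ** 2 / 1400.0, Rp.max() ** 2   # brackets the root
    for _ in range(60):
        c = 0.5 * (lo + hi)
        if np.mean(np.exp(Rp ** 2 / (2.0 * c))) > np.e:
            lo = c          # average potential too large -> increase c
        else:
            hi = c
    w = (Rp / c) * np.exp(Rp ** 2 / (2.0 * c))
    return w / w.sum()

rng = np.random.default_rng(0)
N, T = 100, 1000
R = np.zeros(N)                      # cumulative regrets to each action
for _ in range(T):
    w = normalhedge_weights(R)
    losses = rng.random(N)
    R += w @ losses - losses         # learner's loss minus each action's
print(R.max())                       # stays small with no tuned parameter
```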
cs.LG cs.AI cs.CV | null | 0903.2862 | null | null | http://arxiv.org/pdf/0903.2862v2 | 2010-01-19T00:15:59Z | 2009-03-16T21:26:55Z | Tracking using explanation-based modeling | We study the tracking problem, namely, estimating the hidden state of an
object over time, from unreliable and noisy measurements. The standard
framework for the tracking problem is the generative framework, which is the
basis of solutions such as the Bayesian algorithm and its approximation, the
particle filters. However, the problem with these solutions is that they are
very sensitive to model mismatches. In this paper, motivated by online
learning, we introduce a new framework -- an {\em explanatory} framework -- for
tracking. We provide an efficient tracking algorithm for this framework. We
provide experimental results comparing our algorithm to the Bayesian algorithm
on simulated data. Our experiments show that when there are slight model
mismatches, our algorithm vastly outperforms the Bayesian algorithm.
| [
"Kamalika Chaudhuri, Yoav Freund, Daniel Hsu",
"['Kamalika Chaudhuri' 'Yoav Freund' 'Daniel Hsu']"
] |
cs.LG | 10.1134/S2070046609040013 | 0903.2870 | null | null | http://arxiv.org/abs/0903.2870v2 | 2009-06-24T14:10:45Z | 2009-03-16T22:52:06Z | On $p$-adic Classification | A $p$-adic modification of the split-LBG classification method is presented
in which first clusterings and then cluster centers are computed which locally
minimise an energy function. The outcome for a fixed dataset is independent of
the prime number $p$ with finitely many exceptions. The methods are applied to
the construction of $p$-adic classifiers in the context of learning.
| [
"['Patrick Erik Bradley']",
"Patrick Erik Bradley"
] |
cs.IT cs.LG math.IT math.ST stat.TH | null | 0903.2890 | null | null | http://arxiv.org/pdf/0903.2890v2 | 2010-05-28T08:33:21Z | 2009-03-17T01:39:01Z | Kalman Filtering with Intermittent Observations: Weak Convergence to a
Stationary Distribution | The paper studies the asymptotic behavior of Random Algebraic Riccati
Equations (RARE) arising in Kalman filtering when the arrival of the
observations is described by a Bernoulli i.i.d. process. We model the RARE as
an order-preserving, strongly sublinear random dynamical system (RDS). Under a
sufficient condition, stochastic boundedness, and using a limit-set dichotomy
result for order-preserving, strongly sublinear RDS, we establish the
asymptotic properties of the RARE: the sequence of random prediction error
covariance matrices converges weakly to a unique invariant distribution, whose
support exhibits fractal behavior. In particular, this weak convergence holds
under broad conditions and even when the observations arrival rate is below the
critical probability for mean stability. We apply the weak-Feller property of
the Markov process governing the RARE to characterize the support of the
limiting invariant distribution as the topological closure of a countable set
of points, which, in general, is not dense in the set of positive semi-definite
matrices. We use the explicit characterization of the support of the invariant
distribution and the almost sure ergodicity of the sample paths to easily
compute the moments of the invariant distribution. A one dimensional example
illustrates that the support is a fractured subset of the non-negative reals
with self-similarity properties.
| [
"Soummya Kar, Bruno Sinopoli, and Jose M. F. Moura",
"['Soummya Kar' 'Bruno Sinopoli' 'Jose M. F. Moura']"
] |
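To make the object of study concrete, a scalar simulation of the random Riccati recursion with Bernoulli observation arrivals; the system matrices, arrival rate, and horizon are arbitrary choices. The simulated covariance sequence is a sample path of the RARE whose invariant distribution the paper characterizes.

```python
import numpy as np

rng = np.random.default_rng(0)
A, C = np.array([[1.2]]), np.array([[1.0]])   # scalar unstable system
Q, R = np.array([[1.0]]), np.array([[1.0]])
p = 0.8                                       # Bernoulli arrival probability
P = np.array([[1.0]])                         # prediction-error covariance
traj = []
for t in range(5000):
    if rng.random() < p:   # measurement arrived: update, then predict
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
        P = A @ (P - K @ C @ P) @ A.T + Q
    else:                  # measurement lost: pure prediction
        P = A @ P @ A.T + Q
    traj.append(P[0, 0])
# {P_t} is random; its law converges weakly to an invariant distribution
# under the conditions discussed in the paper.
print(np.mean(traj), np.percentile(traj, [5, 50, 95]))
```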
cs.LG cs.AI | null | 0903.2972 | null | null | http://arxiv.org/pdf/0903.2972v3 | 2009-05-20T18:44:07Z | 2009-03-17T14:24:13Z | Optimistic Simulated Exploration as an Incentive for Real Exploration | Many reinforcement learning exploration techniques are overly optimistic and
try to explore every state. Such exploration is impossible in environments with
an unlimited number of states. I propose to use simulated exploration with an
optimistic model to discover promising paths for real exploration. This reduces
the need for real exploration.
| [
"Ivo Danihelka",
"['Ivo Danihelka']"
] |
cs.MM cs.AI cs.LG | null | 0903.3103 | null | null | http://arxiv.org/pdf/0903.3103v1 | 2009-03-18T08:17:05Z | 2009-03-18T08:17:05Z | Efficiently Learning a Detection Cascade with Sparse Eigenvectors | In this work, we first show that feature selection methods other than
boosting can also be used for training an efficient object detector. In
particular, we introduce Greedy Sparse Linear Discriminant Analysis (GSLDA)
\cite{Moghaddam2007Fast} for its conceptual simplicity and computational
efficiency; and slightly better detection performance is achieved compared with
\cite{Viola2004Robust}. Moreover, we propose a new technique, termed Boosted
Greedy Sparse Linear Discriminant Analysis (BGSLDA), to efficiently train a
detection cascade. BGSLDA exploits the sample re-weighting property of boosting
and the class-separability criterion of GSLDA.
| [
"['Chunhua Shen' 'Sakrapee Paisitkriangkrai' 'Jian Zhang']",
"Chunhua Shen, Sakrapee Paisitkriangkrai, and Jian Zhang"
] |
cs.LG cs.IR | null | 0903.3257 | null | null | http://arxiv.org/pdf/0903.3257v1 | 2009-03-18T23:50:29Z | 2009-03-18T23:50:29Z | A New Local Distance-Based Outlier Detection Approach for Scattered
Real-World Data | Detecting outliers which are grossly different from or inconsistent with the
remaining dataset is a major challenge in real-world KDD applications. Existing
outlier detection methods are ineffective on scattered real-world datasets due
to implicit data patterns and parameter setting issues. We define a novel
"Local Distance-based Outlier Factor" (LDOF) to measure the {outlier-ness} of
objects in scattered datasets which addresses these issues. LDOF uses the
relative location of an object to its neighbours to determine the degree to
which the object deviates from its neighbourhood. Properties of LDOF are
theoretically analysed including LDOF's lower bound and its false-detection
probability, as well as parameter settings. In order to facilitate parameter
settings in real-world applications, we employ a top-n technique in our outlier
detection approach, where only the objects with the highest LDOF values are
regarded as outliers. Compared to conventional approaches (such as top-n KNN
and top-n LOF), our method top-n LDOF is more effective at detecting outliers
in scattered data. It is also easier to set parameters, since its performance
is relatively stable over a large range of parameter values, as illustrated by
experimental results on both real-world and synthetic datasets.
| [
"Ke Zhang and Marcus Hutter and Huidong Jin",
"['Ke Zhang' 'Marcus Hutter' 'Huidong Jin']"
] |
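A sketch of the LDOF score as the abstract describes it, under my reading: the average distance from a point to its $k$ nearest neighbours, divided by the average pairwise distance among those neighbours, with a top-$n$ selection. The brute-force distance matrix is only for illustration.

```python
import numpy as np

def ldof_scores(X, k):
    # LDOF: kNN distance of each point over the kNN inner distance of its
    # neighbourhood. Larger ratio = more outlying.
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    scores = np.empty(n)
    for i in range(n):
        nbrs = np.argsort(D[i])[1:k + 1]       # skip the point itself
        d_bar = D[i, nbrs].mean()              # mean distance to the k NN
        inner = D[np.ix_(nbrs, nbrs)]
        D_bar = inner.sum() / (k * (k - 1))    # mean pairwise NN distance
        scores[i] = d_bar / D_bar
    return scores

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(50, 2)), [[6.0, 6.0]]])  # planted outlier
scores = ldof_scores(X, 10)
print(np.argsort(scores)[-1:])   # top-n (here n=1) flags index 50
```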
cs.LG stat.AP | null | 0903.3329 | null | null | http://arxiv.org/pdf/0903.3329v1 | 2009-03-19T13:44:35Z | 2009-03-19T13:44:35Z | Optimal Policies Search for Sensor Management | This paper introduces a new approach to solve sensor management problems.
Classically, sensor management problems can be well formalized as
Partially Observed Markov Decision Processes (POMDPs). The original approach
developed here consists of deriving the optimal parameterized policy based on
a stochastic gradient estimation. We assume in this work that it is possible to
learn the optimal policy off-line (in simulation) using models of the
environment and of the sensor(s). The learned policy can then be used to
manage the sensor(s). In order to approximate the gradient in a stochastic
context, we introduce a new approximation method based on
Infinitesimal Perturbation Analysis (IPA). The effectiveness of this
general framework is illustrated by the management of an Electronically Scanned
Array Radar. Initial simulation results are presented.
| [
"['Thomas Bréhard' 'Emmanuel Duflos' 'Philippe Vanheeghe'\n 'Pierre-Arnaud Coquelin']",
"Thomas Br\\'ehard (INRIA Futurs), Emmanuel Duflos (INRIA Futurs,\n LAGIS), Philippe Vanheeghe (LAGIS), Pierre-Arnaud Coquelin (INRIA Futurs)"
] |
cs.LG cs.IT math.IT math.PR | null | 0903.3667 | null | null | http://arxiv.org/pdf/0903.3667v5 | 2011-01-02T08:43:03Z | 2009-03-21T14:16:05Z | How random are a learner's mistakes? | Consider a random binary sequence $X^{(n)}$ of random variables $X_{t}$,
$t=1,2,\ldots,n$, for instance one generated by a Markov source (teacher)
of order $k^{*}$ (each state represented by $k^{*}$ bits). Assume that the
probability of the event $X_{t}=1$ is constant and denote it by $\beta$.
Consider a learner which is based on a parametric model, for instance a Markov
model of order $k$, who trains on a sequence $x^{(m)}$ which is randomly drawn
by the teacher. Test the learner's performance by giving it a sequence
$x^{(n)}$ (generated by the teacher) and check its predictions on every bit of
$x^{(n)}.$ An error occurs at time $t$ if the learner's prediction $Y_{t}$
differs from the true bit value $X_{t}$. Denote by $\xi^{(n)}$ the sequence of
errors where the error bit $\xi_{t}$ at time $t$ equals 1 or 0 according to
whether the event of an error occurs or not, respectively. Consider the
subsequence $\xi^{(\nu)}$ of $\xi^{(n)}$ which corresponds to the errors of
predicting a 0, i.e., $\xi^{(\nu)}$ consists of the bits of $\xi^{(n)}$ only at
times $t$ such that $Y_{t}=0.$ In this paper we compute an estimate on the
deviation of the frequency of 1s of $\xi^{(\nu)}$ from $\beta$. The result
shows that the level of randomness of $\xi^{(\nu)}$ decreases relative to an
increase in the complexity of the learner.
| [
"Joel Ratsaby",
"['Joel Ratsaby']"
] |
cs.LG cs.AI | null | 0903.4217 | null | null | http://arxiv.org/pdf/0903.4217v2 | 2009-06-03T21:19:34Z | 2009-03-25T00:28:44Z | Conditional Probability Tree Estimation Analysis and Algorithms | We consider the problem of estimating the conditional probability of a label
in time $O(\log n)$, where $n$ is the number of possible labels. We analyze a
natural reduction of this problem to a set of binary regression problems
organized in a tree structure, proving a regret bound that scales with the
depth of the tree. Motivated by this analysis, we propose the first online
algorithm which provably constructs a logarithmic depth tree on the set of
labels to solve this problem. We test the algorithm empirically, showing that
it works successfully on a dataset with roughly $10^6$ labels.
| [
"['Alina Beygelzimer' 'John Langford' 'Yuri Lifshits' 'Gregory Sorkin'\n 'Alex Strehl']",
"Alina Beygelzimer, John Langford, Yuri Lifshits, Gregory Sorkin, and\n Alex Strehl"
] |
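A structural sketch of the $O(\log n)$ evaluation described above: labels sit at the leaves of a balanced binary tree, each internal node carries a binary model, and a label's conditional probability is the product of branch probabilities along its root-to-leaf path. The node models here are stubs, not the paper's trained online regressors.

```python
def build(labels):
    # Balanced binary tree; leaves are label ids, internal nodes are pairs.
    if len(labels) == 1:
        return labels[0]
    mid = len(labels) // 2
    return (build(labels[:mid]), build(labels[mid:]))

def leaves(node):
    return {node} if isinstance(node, int) else leaves(node[0]) | leaves(node[1])

def cond_prob(x, node, label, p_right):
    # p_right(node, x): the node's binary model, giving
    # P(label in right subtree | x, reached node). Path length is O(log n).
    prob = 1.0
    while not isinstance(node, int):
        left, right = node
        p = p_right(node, x)
        if label in leaves(right):
            prob, node = prob * p, right
        else:
            prob, node = prob * (1 - p), left
    return prob

tree = build(list(range(8)))
uniform = lambda node, x: 0.5          # stub model: every split is 50/50
print(cond_prob(None, tree, 5, uniform))   # 0.5**3 = 0.125 for 8 labels
```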
cs.DM cs.LG | null | 0903.4527 | null | null | http://arxiv.org/pdf/0903.4527v2 | 2009-11-14T05:41:45Z | 2009-03-26T08:32:33Z | Graph polynomials and approximation of partition functions with Loopy
Belief Propagation | The Bethe approximation, or loopy belief propagation algorithm is a
successful method for approximating partition functions of probabilistic models
associated with a graph. Chertkov and Chernyak derived an interesting formula
called Loop Series Expansion, which is an expansion of the partition function.
The main term of the series is the Bethe approximation while other terms are
labeled by subgraphs called generalized loops. In our recent paper, we derived
the loop series expansion in the form of a polynomial with positive integer
coefficients, and extended the result to the expansion of marginals. In this
paper, we give a clearer derivation of these results and discuss the
properties of the polynomial introduced there.
| [
"['Yusuke Watanabe' 'Kenji Fukumizu']",
"Yusuke Watanabe, Kenji Fukumizu"
] |
cs.LG cs.CG cs.CV math.OC stat.ML | null | 0903.4817 | null | null | http://arxiv.org/pdf/0903.4817v3 | 2012-10-25T23:47:12Z | 2009-03-27T17:23:31Z | An Exponential Lower Bound on the Complexity of Regularization Paths | For a variety of regularized optimization problems in machine learning,
algorithms computing the entire solution path have been developed recently.
Most of these methods are quadratic programs that are parameterized by a single
parameter, as for example the Support Vector Machine (SVM). Solution path
algorithms do not only compute the solution for one particular value of the
regularization parameter but the entire path of solutions, making the selection
of an optimal parameter much easier.
It has been assumed that these piecewise linear solution paths have only
linear complexity, i.e. linearly many bends. We prove that for the support
vector machine this complexity can be exponential in the number of training
points in the worst case. More strongly, we construct a single instance of n
input points in d dimensions for an SVM such that at least \Theta(2^{n/2}) =
\Theta(2^d) many distinct subsets of support vectors occur as the
regularization parameter changes.
| [
"['Bernd Gärtner' 'Martin Jaggi' 'Clément Maria']",
"Bernd G\\\"artner, Martin Jaggi and Cl\\'ement Maria"
] |
cs.LG cs.AI cs.CV | null | 0903.4856 | null | null | http://arxiv.org/pdf/0903.4856v1 | 2009-03-27T18:16:04Z | 2009-03-27T18:16:04Z | A Combinatorial Algorithm to Compute Regularization Paths | For a wide variety of regularization methods, algorithms computing the entire
solution path have been developed recently. Solution path algorithms do not
only compute the solution for one particular value of the regularization
parameter but the entire path of solutions, making the selection of an optimal
parameter much easier. Most of the currently used algorithms are not robust in
the sense that they cannot deal with general or degenerate input. Here we
present a new robust, generic method for parametric quadratic programming. Our
algorithm directly applies to nearly all machine learning applications, where
so far every application required its own different algorithm.
We illustrate the usefulness of our method by applying it to a very low rank
problem which could not be solved by existing path tracking methods, namely to
compute part-worth values in choice based conjoint analysis, a popular
technique from market research to estimate consumers' preferences over a class of
parameterized options.
| [
"Bernd G\\\"artner, Joachim Giesen, Martin Jaggi and Torsten Welsch",
"['Bernd Gärtner' 'Joachim Giesen' 'Martin Jaggi' 'Torsten Welsch']"
] |
cs.LG cond-mat.dis-nn physics.data-an | 10.1016/j.physa.2009.08.030 | 0903.4860 | null | null | http://arxiv.org/abs/0903.4860v1 | 2009-03-27T17:29:16Z | 2009-03-27T17:29:16Z | Learning Multiple Belief Propagation Fixed Points for Real Time
Inference | In the context of inference with expectation constraints, we propose an
approach based on the "loopy belief propagation" algorithm (LBP) as a surrogate
for exact Markov Random Field (MRF) modelling. Prior information, composed of
correlations among a large set of N variables, is encoded into a graphical
model; this encoding is optimized with respect to an approximate decoding
procedure LBP, which is used to infer hidden variables from an observed subset.
We focus on the situation where the underlying data have many different
statistical components, representing a variety of independent patterns.
Considering a single parameter family of models we show how LBP may be used to
encode and decode efficiently such information, without solving the NP hard
inverse problem yielding the optimal MRF. Contrary to usual practice, we work
in the non-convex Bethe free energy minimization framework, and manage to
associate a belief propagation fixed point to each component of the underlying
probabilistic mixture. The mean field limit is considered and yields an exact
connection with the Hopfield model at finite temperature and steady state, when
the number of mixture components is proportional to the number of variables. In
addition, we provide an enhanced learning procedure, based on a straightforward
multi-parameter extension of the model in conjunction with an effective
continuous optimization procedure. This is performed using the stochastic
search heuristic CMAES and yields a significant improvement with respect to the
single parameter basic model.
| [
"Cyril Furtlehner, Jean-Marc Lasgouttes and Anne Auger",
"['Cyril Furtlehner' 'Jean-Marc Lasgouttes' 'Anne Auger']"
] |
cs.AI cs.LG cs.RO | null | 0903.4930 | null | null | http://arxiv.org/pdf/0903.4930v1 | 2009-03-28T01:09:00Z | 2009-03-28T01:09:00Z | Time manipulation technique for speeding up reinforcement learning in
simulations | A technique for speeding up reinforcement learning algorithms by using time
manipulation is proposed. It is applicable to failure-avoidance control
problems running in a computer simulation. Turning the time of the simulation
backwards on failure events is shown to speed up the learning by 260% and
improve the state space exploration by 12% on the cart-pole balancing task,
compared to the conventional Q-learning and Actor-Critic algorithms.
| [
"['Petar Kormushev' 'Kohei Nomoto' 'Fangyan Dong' 'Kaoru Hirota']",
"Petar Kormushev, Kohei Nomoto, Fangyan Dong, Kaoru Hirota"
] |
cs.LG stat.ML | null | 0903.5328 | null | null | http://arxiv.org/pdf/0903.5328v1 | 2009-03-30T22:08:02Z | 2009-03-30T22:08:02Z | A Stochastic View of Optimal Regret through Minimax Duality | We study the regret of optimal strategies for online convex optimization
games. Using von Neumann's minimax theorem, we show that the optimal regret in
this adversarial setting is closely related to the behavior of the empirical
minimization algorithm in a stochastic process setting: it is equal to the
maximum, over joint distributions of the adversary's action sequence, of the
difference between a sum of minimal expected losses and the minimal empirical
loss. We show that the optimal regret has a natural geometric interpretation,
since it can be viewed as the gap in Jensen's inequality for a concave
functional--the minimizer over the player's actions of expected loss--defined
on a set of probability distributions. We use this expression to obtain upper
and lower bounds on the regret of an optimal strategy for a variety of online
learning problems. Our method provides upper bounds without the need to
construct a learning algorithm; the lower bounds provide explicit optimal
strategies for the adversary.
| [
"['Jacob Abernethy' 'Alekh Agarwal' 'Peter L. Bartlett' 'Alexander Rakhlin']",
"Jacob Abernethy, Alekh Agarwal, Peter L. Bartlett, Alexander Rakhlin"
] |
math.PR cs.LG math.ST stat.TH | null | 0903.5342 | null | null | http://arxiv.org/pdf/0903.5342v1 | 2009-03-30T23:24:08Z | 2009-03-30T23:24:08Z | Exact Non-Parametric Bayesian Inference on Infinite Trees | Given i.i.d. data from an unknown distribution, we consider the problem of
predicting future items. An adaptive way to estimate the probability density is
to recursively subdivide the domain to an appropriate data-dependent
granularity. A Bayesian would assign a data-independent prior probability to
"subdivide", which leads to a prior over infinite(ly many) trees. We derive an
exact, fast, and simple inference algorithm for such a prior, for the data
evidence, the predictive distribution, the effective model dimension, moments,
and other quantities. We prove asymptotic convergence and consistency results,
and illustrate the behavior of our model on some prototypical functions.
| [
"Marcus Hutter",
"['Marcus Hutter']"
] |
null | null | 0904.0545 | null | null | http://arxiv.org/pdf/0904.0545v2 | 2011-09-06T15:24:24Z | 2009-04-03T10:38:06Z | Time Hopping technique for faster reinforcement learning in simulations | This preprint has been withdrawn by the author for revision | [
"['Petar Kormushev' 'Kohei Nomoto' 'Fangyan Dong' 'Kaoru Hirota']"
] |
cs.AI cs.LG cs.RO | null | 0904.0546 | null | null | http://arxiv.org/pdf/0904.0546v1 | 2009-04-03T10:42:28Z | 2009-04-03T10:42:28Z | Eligibility Propagation to Speed up Time Hopping for Reinforcement
Learning | A mechanism called Eligibility Propagation is proposed to speed up the Time
Hopping technique used for faster Reinforcement Learning in simulations.
Eligibility Propagation provides for Time Hopping similar abilities to what
eligibility traces provide for conventional Reinforcement Learning. It
propagates values from one state to all of its temporal predecessors using a
state transitions graph. Experiments on a simulated biped crawling robot
confirm that Eligibility Propagation accelerates the learning process more than
3 times.
| [
"['Petar Kormushev' 'Kohei Nomoto' 'Fangyan Dong' 'Kaoru Hirota']",
"Petar Kormushev, Kohei Nomoto, Fangyan Dong, Kaoru Hirota"
] |
cs.AI cs.LG | 10.1109/TSP.2009.2034916 | 0904.0643 | null | null | http://arxiv.org/abs/0904.0643v1 | 2009-04-03T19:29:47Z | 2009-04-03T19:29:47Z | Performing Nonlinear Blind Source Separation with Signal Invariants | Given a time series of multicomponent measurements x(t), the usual objective
of nonlinear blind source separation (BSS) is to find a "source" time series
s(t), comprised of statistically independent combinations of the measured
components. In this paper, the source time series is required to have a density
function in (s,ds/dt)-space that is equal to the product of density functions
of individual components. This formulation of the BSS problem has a solution
that is unique, up to permutations and component-wise transformations.
Separability is shown to impose constraints on certain locally invariant
(scalar) functions of x, which are derived from local higher-order correlations
of the data's velocity dx/dt. The data are separable if and only if they
satisfy these constraints, and, if the constraints are satisfied, the sources
can be explicitly constructed from the data. The method is illustrated by using
it to separate two speech-like sounds recorded with a single microphone.
| [
"['David N. Levin']",
"David N. Levin (University of Chicago)"
] |
cs.LG cs.CC | null | 0904.0648 | null | null | http://arxiv.org/pdf/0904.0648v1 | 2009-04-03T20:30:24Z | 2009-04-03T20:30:24Z | Evolvability need not imply learnability | We show that Boolean functions expressible as monotone disjunctive normal
forms are PAC-evolvable under a uniform distribution on the Boolean cube if the
hypothesis size is allowed to remain fixed. We further show that this result is
insufficient to prove the PAC-learnability of monotone Boolean functions,
thereby demonstrating a counter-example to a recent claim to the contrary. We
further discuss scenarios wherein evolvability and learnability will coincide
as well as scenarios under which they differ. The implications of the latter
case on the prospects of learning in complex hypothesis spaces is briefly
examined.
| [
"['Nisheeth Srivastava']",
"Nisheeth Srivastava"
] |
stat.ML cs.LG | null | 0904.0776 | null | null | http://arxiv.org/pdf/0904.0776v1 | 2009-04-05T14:21:49Z | 2009-04-05T14:21:49Z | Induction of High-level Behaviors from Problem-solving Traces using
Machine Learning Tools | This paper applies machine learning techniques to student modeling. It
presents a method for discovering high-level student behaviors from a very
large set of low-level traces corresponding to problem-solving actions in a
learning environment. Basic actions are encoded into sets of domain-dependent
attribute-value patterns called cases. Then a domain-independent hierarchical
clustering identifies what we call general attitudes, yielding automatic
diagnosis expressed in natural language, addressed in principle to teachers.
The method can be applied to individual students or to entire groups, like a
class. We exhibit examples of this system applied to thousands of students'
actions in the domain of algebraic transformations.
| [
"['Vivien Robinet' 'Gilles Bisson' 'Mirta B. Gordon' 'Benoît Lemaire']",
"Vivien Robinet (Leibniz - IMAG, TIMC), Gilles Bisson (Leibniz - IMAG,\n TIMC), Mirta B. Gordon (Leibniz - IMAG, TIMC), Beno\\^it Lemaire (Leibniz -\n IMAG, TIMC)"
] |
cs.LG | null | 0904.0814 | null | null | http://arxiv.org/pdf/0904.0814v1 | 2009-04-05T20:08:44Z | 2009-04-05T20:08:44Z | Stability Analysis and Learning Bounds for Transductive Regression
Algorithms | This paper uses the notion of algorithmic stability to derive novel
generalization bounds for several families of transductive regression
algorithms, both by using convexity and closed-form solutions. Our analysis
helps compare the stability of these algorithms. It also shows that a number of
widely used transductive regression algorithms are in fact unstable. Finally,
it reports the results of experiments with local transductive regression
demonstrating the benefit of our stability bounds for model selection, for one
of the algorithms, in particular for determining the radius of the local
neighborhood used by the algorithm.
| [
"Corinna Cortes, Mehryar Mohri, Dmitry Pechyony, Ashish Rastogi",
"['Corinna Cortes' 'Mehryar Mohri' 'Dmitry Pechyony' 'Ashish Rastogi']"
] |
cs.LG cs.CG | null | 0904.1227 | null | null | http://arxiv.org/pdf/0904.1227v1 | 2009-04-07T21:15:42Z | 2009-04-07T21:15:42Z | Learning convex bodies is hard | We show that learning a convex body in $\mathbb{R}^d$, given random samples from the
body, requires $2^{\Omega(\sqrt{d/\epsilon})}$ samples. By learning a convex body
we mean finding a set having at most $\epsilon$ relative symmetric difference with
the input body. To prove the lower bound we construct a hard-to-learn family of
convex bodies. Our construction of this family is very simple and based on
error correcting codes.
| [
"['Navin Goyal' 'Luis Rademacher']",
"Navin Goyal, Luis Rademacher"
] |
cs.AI cs.LG | null | 0904.1579 | null | null | http://arxiv.org/pdf/0904.1579v1 | 2009-04-09T18:26:36Z | 2009-04-09T18:26:36Z | Online prediction of ovarian cancer | In this paper we apply computer learning methods to diagnosing ovarian cancer
using the level of the standard biomarker CA125 in conjunction with information
provided by mass-spectrometry. We are working with a new data set collected
over a period of 7 years. Using the level of CA125 and mass-spectrometry peaks,
our algorithm gives probability predictions for the disease. To estimate
classification accuracy we convert probability predictions into strict
predictions. Our algorithm makes fewer errors than almost any linear
combination of the CA125 level and one peak's intensity (taken on the log
scale). To check the power of our algorithm we use it to test the hypothesis
that CA125 and the peaks do not contain useful information for the prediction
of the disease at a particular time before the diagnosis. Our algorithm
produces $p$-values that are better than those produced by the algorithm that
has been previously applied to this data set. Our conclusion is that the
proposed algorithm is more reliable for prediction on new data.
| [
"['Fedor Zhdanov' 'Vladimir Vovk' 'Brian Burford' 'Dmitry Devetyarov'\n 'Ilia Nouretdinov' 'Alex Gammerman']",
"Fedor Zhdanov, Vladimir Vovk, Brian Burford, Dmitry Devetyarov, Ilia\n Nouretdinov and Alex Gammerman"
] |
cond-mat.dis-nn cond-mat.stat-mech cs.LG | 10.1088/1742-5468/2009/07/P07026 | 0904.1700 | null | null | http://arxiv.org/abs/0904.1700v2 | 2009-06-09T13:08:39Z | 2009-04-10T15:19:14Z | Recovering the state sequence of hidden Markov models using mean-field
approximations | Inferring the sequence of states from observations is one of the most
fundamental problems in Hidden Markov Models. In statistical physics language,
this problem is equivalent to computing the marginals of a one-dimensional
model with a random external field. While this task can be accomplished through
transfer matrix methods, it quickly becomes intractable when the underlying
state space is large.
This paper develops several low-complexity approximate algorithms to address
this inference problem when the state space becomes large. The new algorithms
are based on various mean-field approximations of the transfer matrix. Their
performances are studied in detail on a simple realistic model for DNA
pyrosequencing.
| [
"Antoine Sinton",
"['Antoine Sinton']"
] |
cs.NE cs.LG | null | 0904.1888 | null | null | http://arxiv.org/pdf/0904.1888v1 | 2009-04-13T00:59:10Z | 2009-04-13T00:59:10Z | On Fodor on Darwin on Evolution | Jerry Fodor argues that Darwin was wrong about "natural selection" because
(1) it is only a tautology rather than a scientific law that can support
counterfactuals ("If X had happened, Y would have happened") and because (2)
only minds can select. Hence Darwin's analogy with "artificial selection" by
animal breeders was misleading and evolutionary explanation is nothing but
post-hoc historical narrative. I argue that Darwin was right on all counts.
| [
"['Stevan Harnad']",
"Stevan Harnad"
] |
cs.LG cs.CV | null | 0904.2037 | null | null | http://arxiv.org/pdf/0904.2037v3 | 2010-01-06T09:00:26Z | 2009-04-14T01:57:12Z | Boosting through Optimization of Margin Distributions | Boosting has attracted much research attention in the past decade. The
success of boosting algorithms may be interpreted in terms of the margin
theory. Recently it has been shown that bounds on the generalization error of
classifiers can be obtained by explicitly taking the margin distribution of the
training data into account. Most current boosting algorithms in practice
optimize a convex loss function and do not make use of the margin
distribution. In this work we design a new boosting algorithm, termed
margin-distribution boosting (MDBoost), which directly maximizes the average
margin and minimizes the margin variance simultaneously. This way the margin
distribution is optimized. A totally-corrective optimization algorithm based on
column generation is proposed to implement MDBoost. Experiments on UCI datasets
show that MDBoost outperforms AdaBoost and LPBoost in most cases.
| [
"['Chunhua Shen' 'Hanxi Li']",
"Chunhua Shen and Hanxi Li"
] |
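In symbols, with margins $\rho_i = y_i \sum_j w_j h_j(x_i)$, one plausible formalization of the MDBoost objective sketched above is (the exact normalization and trade-off constant in the paper may differ):

$$
\max_{w \ge 0,\; \|w\|_1 = 1} \quad \frac{1}{n}\sum_{i=1}^{n}\rho_i \;-\; \frac{\lambda}{2n}\sum_{i=1}^{n}\bigl(\rho_i - \bar{\rho}\bigr)^2, \qquad \bar{\rho} = \frac{1}{n}\sum_{j=1}^{n}\rho_j,
$$

i.e., maximize the average margin while penalizing the margin variance, which is the margin-distribution optimization the abstract describes.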
cs.LG | null | 0904.2160 | null | null | http://arxiv.org/pdf/0904.2160v1 | 2009-04-14T17:32:00Z | 2009-04-14T17:32:00Z | Inferring Dynamic Bayesian Networks using Frequent Episode Mining | Motivation: Several different threads of research have been proposed for
modeling and mining temporal data. On the one hand, approaches such as dynamic
Bayesian networks (DBNs) provide a formal probabilistic basis to model
relationships between time-indexed random variables but these models are
intractable to learn in the general case. On the other, algorithms such as
frequent episode mining are scalable to large datasets but do not exhibit the
rigorous probabilistic interpretations that are the mainstay of the graphical
models literature.
Results: We present a unification of these two seemingly diverse threads of
research, by demonstrating how dynamic (discrete) Bayesian networks can be
inferred from the results of frequent episode mining. This helps bridge the
modeling emphasis of the former with the counting emphasis of the latter.
First, we show how, under reasonable assumptions on data characteristics and on
influences of random variables, the optimal DBN structure can be computed using
a greedy, local, algorithm. Next, we connect the optimality of the DBN
structure with the notion of fixed-delay episodes and their counts of distinct
occurrences. Finally, to demonstrate the practical feasibility of our approach,
we focus on a specific (but broadly applicable) class of networks, called
excitatory networks, and show how the search for the optimal DBN structure can
be conducted using just information from frequent episodes. Applications to
datasets gathered from mathematical models of spiking neurons as well as real
neuroscience datasets are presented.
Availability: Algorithmic implementations, simulator codebases, and datasets
are available from our website at http://neural-code.cs.vt.edu/dbn
| [
"['Debprakash Patnaik' 'Srivatsan Laxman' 'Naren Ramakrishnan']",
"Debprakash Patnaik and Srivatsan Laxman and Naren Ramakrishnan"
] |
cs.MA cs.LG | null | 0904.2320 | null | null | http://arxiv.org/pdf/0904.2320v1 | 2009-04-15T13:49:42Z | 2009-04-15T13:49:42Z | Why Global Performance is a Poor Metric for Verifying Convergence of
Multi-agent Learning | Experimental verification has been the method of choice for verifying the
stability of a multi-agent reinforcement learning (MARL) algorithm as the
number of agents grows and theoretical analysis becomes prohibitively complex.
For cooperative agents, where the ultimate goal is to optimize some global
metric, the stability is usually verified by observing the evolution of the
global performance metric over time. If the global metric improves and
eventually stabilizes, it is considered a reasonable verification of the
system's stability.
The main contribution of this note is establishing the need for better
experimental frameworks and measures to assess the stability of large-scale
adaptive cooperative systems. We show an experimental case study where the
stability of the global performance metric can be rather deceiving, hiding an
underlying instability in the system that later leads to a significant drop in
performance. We then propose an alternative metric that relies on agents' local
policies and show, experimentally, that our proposed metric is more effective
(than the traditional global performance metric) in exposing the instability of
MARL algorithms.
| [
"Sherief Abdallah",
"['Sherief Abdallah']"
] |
cs.AI cs.LG | null | 0904.2595 | null | null | http://arxiv.org/pdf/0904.2595v1 | 2009-04-16T21:30:30Z | 2009-04-16T21:30:30Z | A Methodology for Learning Players' Styles from Game Records | We describe a preliminary investigation into learning a Chess player's style
from game records. The method is based on attempting to learn features of a
player's individual evaluation function using the method of temporal
differences, with the aid of a conventional Chess engine architecture. Some
encouraging results were obtained in learning the styles of two recent Chess
world champions, and we report on our attempt to use the learnt styles to
discriminate between the players from game records by trying to detect who was
playing white and who was playing black. We also discuss some limitations of
our approach and propose possible directions for future research. The method we
have presented may also be applicable to other strategic games, and may even be
generalisable to other domains where sequences of agents' actions are recorded.
| [
"Mark Levene and Trevor Fenner",
"['Mark Levene' 'Trevor Fenner']"
] |
cs.LG cs.AI | null | 0904.2623 | null | null | http://arxiv.org/pdf/0904.2623v2 | 2009-06-05T03:54:58Z | 2009-04-17T03:48:02Z | Exponential Family Graph Matching and Ranking | We present a method for learning max-weight matching predictors in bipartite
graphs. The method consists of performing maximum a posteriori estimation in
exponential families with sufficient statistics that encode permutations and
data features. Although inference is in general hard, we show that for one very
relevant application - web page ranking - exact inference is efficient. For
general model instances, an appropriate sampler is readily available. Contrary
to existing max-margin matching models, our approach is statistically
consistent and, in addition, experiments with increasing sample sizes indicate
an increasing advantage over such models. We apply the method to graph matching in
computer vision as well as to a standard benchmark dataset for learning web
page ranking, in which we obtain state-of-the-art results, in particular
improving on max-margin variants. The drawback of this method with respect to
max-margin alternatives is its runtime for large graphs, which is comparatively
high.
| [
"James Petterson, Tiberio Caetano, Julian McAuley, Jin Yu",
"['James Petterson' 'Tiberio Caetano' 'Julian McAuley' 'Jin Yu']"
] |
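For a concrete picture of the MAP step: with a linear score per candidate edge, the MAP matching is exactly a max-weight bipartite matching, solvable by the Hungarian method. The sketch below is our simplification; the linear edge score `edge_features @ theta` stands in for the model's sufficient statistics and is not the paper's full estimation procedure.

```python
# Hedged sketch: MAP matching as max-weight bipartite matching over linear
# edge scores. The feature/score layout is an illustrative assumption.
import numpy as np
from scipy.optimize import linear_sum_assignment

def map_matching(theta, edge_features):
    """edge_features: array (n, n, d) of features per candidate pair;
    returns the permutation maximizing the total linear score."""
    scores = edge_features @ theta                 # (n, n) edge-score matrix
    rows, cols = linear_sum_assignment(scores, maximize=True)
    return cols                                    # cols[i] = item matched to i

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 4, 3))
print(map_matching(np.array([1.0, -0.5, 0.2]), feats))
```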
cs.DS cs.LG | null | 0904.3151 | null | null | http://arxiv.org/pdf/0904.3151v1 | 2009-04-21T01:03:06Z | 2009-04-21T01:03:06Z | Efficient Construction of Neighborhood Graphs by the Multiple Sorting
Method | Neighborhood graphs are gaining popularity as a concise data representation
in machine learning. However, naive graph construction by pairwise distance
calculation takes $O(n^2)$ runtime for $n$ data points and this is
prohibitively slow for millions of data points. For strings of equal length,
the multiple sorting method (Uno, 2008) can construct an $\epsilon$-neighbor
graph in $O(n+m)$ time, where $m$ is the number of $\epsilon$-neighbor pairs in
the data. To introduce this remarkably efficient algorithm to continuous
domains such as images, signals and texts, we employ a random projection method
to convert vectors to strings. Theoretical results are presented to elucidate
the trade-off between approximation quality and computation time. Empirical
results show the efficiency of our method in comparison to fast nearest
neighbor alternatives.
| [
"['Takeaki Uno' 'Masashi Sugiyama' 'Koji Tsuda']",
"Takeaki Uno, Masashi Sugiyama, Koji Tsuda"
] |
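The vectors-to-strings conversion described above can be pictured with a short sketch: sign random projections turn each vector into a bit string, and sorting the strings brings equal codes together, yielding candidate epsilon-neighbor pairs to verify exactly. The single projection and the "equal codes only" candidate rule are simplifications of the multiple sorting method, not the paper's algorithm.

```python
# Sketch: random projection to bit strings, then one sort to group candidates.
# Simplified relative to the multiple sorting method (no block permutations).
import numpy as np

def neighbor_candidates(X, n_bits=16, seed=0):
    rng = np.random.default_rng(seed)
    R = rng.normal(size=(X.shape[1], n_bits))
    bits = (X @ R > 0)                        # one sign bit per projection
    codes = [row.tobytes() for row in bits]   # comparable per-point bit string
    order = sorted(range(len(codes)), key=lambda i: codes[i])
    # after sorting, points with equal codes are adjacent: emit those pairs
    return [(order[i], order[i + 1]) for i in range(len(order) - 1)
            if codes[order[i]] == codes[order[i + 1]]]

X = np.random.default_rng(1).normal(size=(100, 10))
X[1] = X[0] + 1e-6   # two near-duplicate points
print(neighbor_candidates(X))  # likely contains the pair (0, 1)
```

Candidates would then be checked with exact distances; the sorting pass avoids the O(n^2) all-pairs comparison.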
cs.AI cs.LG | null | 0904.3352 | null | null | http://arxiv.org/pdf/0904.3352v1 | 2009-04-21T22:07:24Z | 2009-04-21T22:07:24Z | Optimistic Initialization and Greediness Lead to Polynomial Time
Learning in Factored MDPs - Extended Version | In this paper we propose an algorithm for polynomial-time reinforcement
learning in factored Markov decision processes (FMDPs). The factored optimistic
initial model (FOIM) algorithm maintains an empirical model of the FMDP in a
conventional way, and always follows a greedy policy with respect to its model.
The only trick of the algorithm is that the model is initialized
optimistically. We prove that with suitable initialization (i) FOIM converges
to the fixed point of approximate value iteration (AVI); (ii) the number of
steps when the agent makes non-near-optimal decisions (with respect to the
solution of AVI) is polynomial in all relevant quantities; (iii) the per-step
costs of the algorithm are also polynomial. To the best of our knowledge, FOIM is the
first algorithm with these properties. This extended version contains the
rigorous proofs of the main theorem. A version of this paper appeared in
ICML'09.
| [
"Istvan Szita, Andras Lorincz",
"['Istvan Szita' 'Andras Lorincz']"
] |
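A toy sketch of the "optimistic initialization plus pure greediness" idea in a flat tabular setting follows; FOIM applies the analogous trick to the *model* of a factored MDP, which this sketch does not capture. The `ToyChain` environment and all hyperparameters are our illustrative assumptions.

```python
# Toy sketch: optimistic Q initialization drives a purely greedy agent to
# explore. Flat tabular setting only; FOIM itself works at the model level.
import numpy as np

class ToyChain:
    """Hypothetical 5-state chain: action 1 moves right, action 0 resets;
    reward 1.0 only at the rightmost (terminal) state."""
    def __init__(self, n=5):
        self.n, self.s = n, 0
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):
        self.s = min(self.s + 1, self.n - 1) if a == 1 else 0
        r = 1.0 if self.s == self.n - 1 else 0.0
        return self.s, r, self.s == self.n - 1

def greedy_optimistic_q(env, n_states, n_actions, episodes=200,
                        v_max=1.0, alpha=0.1, gamma=0.95):
    # Optimistic initialization: every Q-value starts at the largest possible
    # return, so the greedy policy is driven to try everything at least once.
    Q = np.full((n_states, n_actions), v_max / (1.0 - gamma))
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = int(np.argmax(Q[s]))              # always greedy, no epsilon
            s2, r, done = env.step(a)
            target = r + (0.0 if done else gamma * Q[s2].max())
            Q[s, a] += alpha * (target - Q[s, a])
            s = s2
    return Q

print(greedy_optimistic_q(ToyChain(), n_states=5, n_actions=2).round(2))
```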
cs.LG | null | 0904.3664 | null | null | http://arxiv.org/pdf/0904.3664v1 | 2009-04-23T11:40:57Z | 2009-04-23T11:40:57Z | Introduction to Machine Learning: Class Notes 67577 | An introduction to machine learning covering statistical inference (Bayes, EM,
ML/MaxEnt duality), algebraic and spectral methods (PCA, LDA, CCA, clustering),
and PAC learning (the formal model, VC dimension, the double sampling theorem).
| [
"['Amnon Shashua']",
"Amnon Shashua"
] |
cs.LG cs.AI | null | 0904.3667 | null | null | http://arxiv.org/pdf/0904.3667v1 | 2009-04-23T11:48:38Z | 2009-04-23T11:48:38Z | Considerations upon the Machine Learning Technologies | Artificial intelligence offers techniques and methods by which
problems from diverse domains may find optimal solutions. Machine
Learning technologies refer to the branch of artificial intelligence that aims
to develop techniques allowing computers to "learn". Some systems based on
Machine Learning technologies attempt to eliminate the need for human
intelligence, while others adopt a man-machine collaborative approach.
| [
"['Alin Munteanu' 'Cristina Ofelia Sofran']",
"Alin Munteanu, Cristina Ofelia Sofran"
] |
cs.LG | null | 0904.4527 | null | null | http://arxiv.org/pdf/0904.4527v1 | 2009-04-29T03:16:20Z | 2009-04-29T03:16:20Z | Limits of Learning about a Categorical Latent Variable under Prior
Near-Ignorance | In this paper, we consider the coherent theory of (epistemic) uncertainty of
Walley, in which beliefs are represented through sets of probability
distributions, and we focus on the problem of modeling prior ignorance about a
categorical random variable. In this setting, it is a known result that a state
of prior ignorance is not compatible with learning. To overcome this problem,
another state of beliefs, called \emph{near-ignorance}, has been proposed.
Near-ignorance resembles ignorance very closely, by satisfying some principles
that can arguably be regarded as necessary in a state of ignorance, and allows
learning to take place. This paper provides new and substantial
evidence that near-ignorance, too, cannot really be regarded as a way out of the
problem of starting statistical inference under very weak beliefs.
The key to this result is focusing on a setting characterized by a variable of
interest that is \emph{latent}. We argue that such a setting is by far the most
common case in practice, and we provide, for the case of categorical latent
variables (and general \emph{manifest} variables) a condition that, if
satisfied, prevents learning from taking place under prior near-ignorance. This
condition is shown to be easily satisfied even in the most common statistical
problems. We regard these results as strong evidence against the
possibility of adopting a condition of prior near-ignorance in real statistical
problems.
| [
"['Alberto Piatti' 'Marco Zaffalon' 'Fabio Trojani' 'Marcus Hutter']",
"Alberto Piatti and Marco Zaffalon and Fabio Trojani and Marcus Hutter"
] |
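To make the notion of near-ignorance concrete, here is the standard example (our addition, not taken from the paper): the imprecise Dirichlet model for a categorical variable X with k categories. Prior predictive probabilities are vacuous, yet with directly observed counts the posterior intervals shrink; this is the kind of learning the paper shows can fail once X is latent.

```latex
% Imprecise Dirichlet model: a near-ignorance prior set (s > 0 fixed)
\[
\mathcal{M}_0 \;=\; \bigl\{\, \mathrm{Dirichlet}(s\,t) \;:\; t_i > 0,\
\textstyle\sum_{i=1}^{k} t_i = 1 \,\bigr\}
\]
% Prior predictive probabilities are vacuous (near-ignorance):
\[
\underline{P}(X = i) = 0, \qquad \overline{P}(X = i) = 1
\]
% After observing counts n_1, ..., n_k with N = \sum_i n_i, the posterior
% predictive interval shrinks as N grows, i.e. learning takes place:
\[
\underline{P}(X = i \mid n) = \frac{n_i}{N + s},
\qquad
\overline{P}(X = i \mid n) = \frac{n_i + s}{N + s}
\]
```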