categories
string | doi
string | id
string | year
float64 | venue
string | link
string | updated
string | published
string | title
string | abstract
string | authors
sequence |
---|---|---|---|---|---|---|---|---|---|---|
cs.LG cs.CV | 10.1631/jzus.A0720058 | 0910.1650 | null | null | http://arxiv.org/abs/0910.1650v1 | 2009-10-09T04:55:41Z | 2009-10-09T04:55:41Z | Local and global approaches of affinity propagation clustering for large
scale data | Recently a new clustering algorithm called 'affinity propagation' (AP) has
been proposed, which efficiently clustered sparsely related data by passing
messages between data points. However, we want to cluster large scale data
where the similarities are not sparse in many cases. This paper presents two
variants of AP for grouping large scale data with a dense similarity matrix.
The local approach is partition affinity propagation (PAP) and the global
method is landmark affinity propagation (LAP). PAP passes messages in the
subsets of data first and then merges them as the number of initial step of
iterations; it can effectively reduce the number of iterations of clustering.
LAP passes messages between the landmark data points first and then clusters
non-landmark data points; it is a large global approximation method to speed up
clustering. Experiments are conducted on many datasets, such as random data
points, manifold subspaces, images of faces and Chinese calligraphy, and the
results demonstrate that the two approaches are feasible and practicable.
| [
"Dingyin Xia, Fei Wu, Xuqing Zhang, Yueting Zhuang",
"['Dingyin Xia' 'Fei Wu' 'Xuqing Zhang' 'Yueting Zhuang']"
] |
stat.AP cs.LG | 10.1214/10-AOAS359 | 0910.2034 | null | null | http://arxiv.org/abs/0910.2034v2 | 2010-11-10T09:02:00Z | 2009-10-11T19:36:16Z | Strategies for online inference of model-based clustering in large and
growing networks | In this paper we adapt online estimation strategies to perform model-based
clustering on large networks. Our work focuses on two algorithms, the first
based on the SAEM algorithm, and the second on variational methods. These two
strategies are compared with existing approaches on simulated and real data. We
use the method to decipher the connexion structure of the political websphere
during the US political campaign in 2008. We show that our online EM-based
algorithms offer a good trade-off between precision and speed, when estimating
parameters for mixture distributions in the context of random graphs.
| [
"['Hugo Zanghi' 'Franck Picard' 'Vincent Miele' 'Christophe Ambroise']",
"Hugo Zanghi, Franck Picard, Vincent Miele, Christophe Ambroise"
] |
math.OC cs.LG math.PR | 10.1109/TSP.2010.2062509 | 0910.2065 | null | null | http://arxiv.org/abs/0910.2065v3 | 2010-06-07T18:04:14Z | 2009-10-12T00:50:19Z | Distributed Learning in Multi-Armed Bandit with Multiple Players | We formulate and study a decentralized multi-armed bandit (MAB) problem.
There are M distributed players competing for N independent arms. Each arm,
when played, offers i.i.d. reward according to a distribution with an unknown
parameter. At each time, each player chooses one arm to play without exchanging
observations or any information with other players. Players choosing the same
arm collide, and, depending on the collision model, either no one receives
reward or the colliding players share the reward in an arbitrary way. We show
that the minimum system regret of the decentralized MAB grows with time at the
same logarithmic order as in the centralized counterpart where players act
collectively as a single entity by exchanging observations and making decisions
jointly. A decentralized policy is constructed to achieve this optimal order
while ensuring fairness among players and without assuming any pre-agreement or
information exchange among players. Based on a Time Division Fair Sharing
(TDFS) of the M best arms, the proposed policy is constructed and its order
optimality is proven under a general reward model. Furthermore, the basic
structure of the TDFS policy can be used with any order-optimal single-player
policy to achieve order optimality in the decentralized setting. We also
establish a lower bound on the system regret growth rate for a general class of
decentralized polices, to which the proposed policy belongs. This problem finds
potential applications in cognitive radio networks, multi-channel communication
systems, multi-agent systems, web search and advertising, and social networks.
| [
"['Keqin Liu' 'Qing Zhao']",
"Keqin Liu and Qing Zhao"
] |
cs.IT cs.LG math.IT math.OC | null | 0910.2240 | null | null | http://arxiv.org/pdf/0910.2240v1 | 2009-10-12T20:16:16Z | 2009-10-12T20:16:16Z | Repeated Auctions with Learning for Spectrum Access in Cognitive Radio
Networks | In this paper, spectrum access in cognitive radio networks is modeled as a
repeated auction game subject to monitoring and entry costs. For secondary
users, sensing costs are incurred as the result of primary users' activity.
Furthermore, each secondary user pays the cost of transmissions upon successful
bidding for a channel. Knowledge regarding other secondary users' activity is
limited due to the distributed nature of the network. The resulting formulation
is thus a dynamic game with incomplete information. In this paper, an efficient
bidding learning algorithm is proposed based on the outcome of past
transactions. As demonstrated through extensive simulations, the proposed
distributed scheme outperforms a myopic one-stage algorithm, and can achieve a
good balance between efficiency and fairness.
| [
"Zhu Han, Rong Zheng, Vincent H. Poor",
"['Zhu Han' 'Rong Zheng' 'Vincent H. Poor']"
] |
cs.CV cs.LG | null | 0910.2279 | null | null | http://arxiv.org/pdf/0910.2279v1 | 2009-10-13T00:54:31Z | 2009-10-13T00:54:31Z | Positive Semidefinite Metric Learning with Boosting | The learning of appropriate distance metrics is a critical problem in image
classification and retrieval. In this work, we propose a boosting-based
technique, termed \BoostMetric, for learning a Mahalanobis distance metric. One
of the primary difficulties in learning such a metric is to ensure that the
Mahalanobis matrix remains positive semidefinite. Semidefinite programming is
sometimes used to enforce this constraint, but does not scale well.
\BoostMetric is instead based on a key observation that any positive
semidefinite matrix can be decomposed into a linear positive combination of
trace-one rank-one matrices. \BoostMetric thus uses rank-one positive
semidefinite matrices as weak learners within an efficient and scalable
boosting-based learning process. The resulting method is easy to implement,
does not require tuning, and can accommodate various types of constraints.
Experiments on various datasets show that the proposed algorithm compares
favorably to those state-of-the-art methods in terms of classification accuracy
and running time.
| [
"Chunhua Shen, Junae Kim, Lei Wang, Anton van den Hengel",
"['Chunhua Shen' 'Junae Kim' 'Lei Wang' 'Anton van den Hengel']"
] |
cs.LG | null | 0910.2540 | null | null | http://arxiv.org/pdf/0910.2540v1 | 2009-10-14T07:43:03Z | 2009-10-14T07:43:03Z | Effectiveness and Limitations of Statistical Spam Filters | In this paper we discuss the techniques involved in the design of the famous
statistical spam filters that include Naive Bayes, Term Frequency-Inverse
Document Frequency, K-Nearest Neighbor, Support Vector Machine, and Bayes
Additive Regression Tree. We compare these techniques with each other in terms
of accuracy, recall, precision, etc. Further, we discuss the effectiveness and
limitations of statistical filters in filtering out various types of spam from
legitimate e-mails.
| [
"M. Tariq Banday and Tariq R. Jan",
"['M. Tariq Banday' 'Tariq R. Jan']"
] |
quant-ph cs.LG | null | 0910.3713 | null | null | http://arxiv.org/pdf/0910.3713v1 | 2009-10-19T21:55:11Z | 2009-10-19T21:55:11Z | On Learning Finite-State Quantum Sources | We examine the complexity of learning the distributions produced by
finite-state quantum sources. We show how prior techniques for learning hidden
Markov models can be adapted to the quantum generator model to find that the
analogous state of affairs holds: information-theoretically, a polynomial
number of samples suffice to approximately identify the distribution, but
computationally, the problem is as hard as learning parities with noise, a
notorious open question in computational learning theory.
| [
"Brendan Juba",
"['Brendan Juba']"
] |
cs.LG math.ST stat.TH | null | 0910.4627 | null | null | http://arxiv.org/pdf/0910.4627v1 | 2009-10-24T07:10:24Z | 2009-10-24T07:10:24Z | Self-concordant analysis for logistic regression | Most of the non-asymptotic theoretical work in regression is carried out for
the square loss, where estimators can be obtained through closed-form
expressions. In this paper, we use and extend tools from the convex
optimization literature, namely self-concordant functions, to provide simple
extensions of theoretical results for the square loss to the logistic loss. We
apply the extension techniques to logistic regression with regularization by
the $\ell_2$-norm and regularization by the $\ell_1$-norm, showing that new
results for binary classification through logistic regression can be easily
derived from corresponding results for least-squares regression.
| [
"['Francis Bach']",
"Francis Bach (INRIA Rocquencourt)"
] |
cs.LG | null | 0910.4683 | null | null | http://arxiv.org/pdf/0910.4683v2 | 2010-05-10T23:01:30Z | 2009-10-24T22:40:40Z | Competing with Gaussian linear experts | We study the problem of online regression. We prove a theoretical bound on
the square loss of Ridge Regression. We do not make any assumptions about input
vectors or outcomes. We also show that Bayesian Ridge Regression can be thought
of as an online algorithm competing with all the Gaussian linear experts.
| [
"Fedor Zhdanov and Vladimir Vovk",
"['Fedor Zhdanov' 'Vladimir Vovk']"
] |
cs.NA cs.LG | 10.1016/j.trc.2012.12.007 | 0910.5260 | null | null | http://arxiv.org/abs/0910.5260v2 | 2009-11-03T23:35:13Z | 2009-10-27T22:19:31Z | A Gradient Descent Algorithm on the Grassman Manifold for Matrix
Completion | We consider the problem of reconstructing a low-rank matrix from a small
subset of its entries. In this paper, we describe the implementation of an
efficient algorithm called OptSpace, based on singular value decomposition
followed by local manifold optimization, for solving the low-rank matrix
completion problem. It has been shown that if the number of revealed entries is
large enough, the output of singular value decomposition gives a good estimate
for the original matrix, so that local optimization reconstructs the correct
matrix with high probability. We present numerical results which show that this
algorithm can reconstruct the low rank matrix exactly from a very small subset
of its entries. We further study the robustness of the algorithm with respect
to noise, and its performance on actual collaborative filtering datasets.
| [
"Raghunandan H. Keshavan, Sewoong Oh",
"['Raghunandan H. Keshavan' 'Sewoong Oh']"
] |
cs.CV astro-ph.EP astro-ph.IM cs.LG stat.ML | 10.1017/S1473550409990358 | 0910.5454 | null | null | http://arxiv.org/abs/0910.5454v1 | 2009-10-28T18:26:39Z | 2009-10-28T18:26:39Z | The Cyborg Astrobiologist: Testing a Novelty-Detection Algorithm on Two
Mobile Exploration Systems at Rivas Vaciamadrid in Spain and at the Mars
Desert Research Station in Utah | (ABRIDGED) In previous work, two platforms have been developed for testing
computer-vision algorithms for robotic planetary exploration (McGuire et al.
2004b,2005; Bartolo et al. 2007). The wearable-computer platform has been
tested at geological and astrobiological field sites in Spain (Rivas
Vaciamadrid and Riba de Santiuste), and the phone-camera has been tested at a
geological field site in Malta. In this work, we (i) apply a Hopfield
neural-network algorithm for novelty detection based upon color, (ii) integrate
a field-capable digital microscope on the wearable computer platform, (iii)
test this novelty detection with the digital microscope at Rivas Vaciamadrid,
(iv) develop a Bluetooth communication mode for the phone-camera platform, in
order to allow access to a mobile processing computer at the field sites, and
(v) test the novelty detection on the Bluetooth-enabled phone-camera connected
to a netbook computer at the Mars Desert Research Station in Utah. This systems
engineering and field testing have together allowed us to develop a real-time
computer-vision system that is capable, for example, of identifying lichens as
novel within a series of images acquired in semi-arid desert environments. We
acquired sequences of images of geologic outcrops in Utah and Spain consisting
of various rock types and colors to test this algorithm. The algorithm robustly
recognized previously-observed units by their color, while requiring only a
single image or a few images to learn colors as familiar, demonstrating its
fast learning capability.
| [
"['P. C. McGuire' 'C. Gross' 'L. Wendt' 'A. Bonnici' 'V. Souza-Egipsy'\n 'J. Ormo' 'E. Diaz-Martinez' 'B. H. Foing' 'R. Bose' 'S. Walter'\n 'M. Oesker' 'J. Ontrup' 'R. Haschke' 'H. Ritter']",
"P.C. McGuire, C. Gross, L. Wendt, A. Bonnici, V. Souza-Egipsy, J.\n Ormo, E. Diaz-Martinez, B.H. Foing, R. Bose, S. Walter, M. Oesker, J. Ontrup,\n R. Haschke, H. Ritter"
] |
cs.LG | null | 0910.5461 | null | null | http://arxiv.org/pdf/0910.5461v1 | 2009-10-28T18:46:41Z | 2009-10-28T18:46:41Z | Anomaly Detection with Score functions based on Nearest Neighbor Graphs | We propose a novel non-parametric adaptive anomaly detection algorithm for
high dimensional data based on score functions derived from nearest neighbor
graphs on $n$-point nominal data. Anomalies are declared whenever the score of
a test sample falls below $\alpha$, which is supposed to be the desired false
alarm level. The resulting anomaly detector is shown to be asymptotically
optimal in that it is uniformly most powerful for the specified false alarm
level, $\alpha$, for the case when the anomaly density is a mixture of the
nominal and a known density. Our algorithm is computationally efficient, being
linear in dimension and quadratic in data size. It does not require choosing
complicated tuning parameters or function approximation classes and it can
adapt to local structure such as local change in dimensionality. We demonstrate
the algorithm on both artificial and real data sets in high dimensional feature
spaces.
| [
"['Manqi Zhao' 'Venkatesh Saligrama']",
"Manqi Zhao and Venkatesh Saligrama"
] |
stat.ML cond-mat.stat-mech cs.LG | null | 0910.5761 | null | null | http://arxiv.org/pdf/0910.5761v1 | 2009-10-30T01:10:44Z | 2009-10-30T01:10:44Z | Which graphical models are difficult to learn? | We consider the problem of learning the structure of Ising models (pairwise
binary Markov random fields) from i.i.d. samples. While several methods have
been proposed to accomplish this task, their relative merits and limitations
remain somewhat obscure. By analyzing a number of concrete examples, we show
that low-complexity algorithms systematically fail when the Markov random field
develops long-range correlations. More precisely, this phenomenon appears to be
related to the Ising model phase transition (although it does not coincide with
it).
| [
"Jose Bento, Andrea Montanari",
"['Jose Bento' 'Andrea Montanari']"
] |
cs.LG cs.CV cs.IR | null | 0910.5932 | null | null | http://arxiv.org/pdf/0910.5932v1 | 2009-10-30T18:19:03Z | 2009-10-30T18:19:03Z | Metric and Kernel Learning using a Linear Transformation | Metric and kernel learning are important in several machine learning
applications. However, most existing metric learning algorithms are limited to
learning metrics over low-dimensional data, while existing kernel learning
algorithms are often limited to the transductive setting and do not generalize
to new data points. In this paper, we study metric learning as a problem of
learning a linear transformation of the input data. We show that for
high-dimensional data, a particular framework for learning a linear
transformation of the data based on the LogDet divergence can be efficiently
kernelized to learn a metric (or equivalently, a kernel function) over an
arbitrarily high dimensional space. We further demonstrate that a wide class of
convex loss functions for learning linear transformations can similarly be
kernelized, thereby considerably expanding the potential applications of metric
learning. We demonstrate our learning approach by applying it to large-scale
real world problems in computer vision and text mining.
| [
"Prateek Jain, Brian Kulis, Jason V. Davis, Inderjit S. Dhillon",
"['Prateek Jain' 'Brian Kulis' 'Jason V. Davis' 'Inderjit S. Dhillon']"
] |
cs.LG stat.ML | null | 0911.0054 | null | null | http://arxiv.org/pdf/0911.0054v2 | 2015-05-16T22:45:35Z | 2009-10-31T02:56:18Z | Learning Exponential Families in High-Dimensions: Strong Convexity and
Sparsity | The versatility of exponential families, along with their attendant convexity
properties, make them a popular and effective statistical model. A central
issue is learning these models in high-dimensions, such as when there is some
sparsity pattern of the optimal parameter. This work characterizes a certain
strong convexity property of general exponential families, which allow their
generalization ability to be quantified. In particular, we show how this
property can be used to analyze generic exponential families under L_1
regularization.
| [
"['Sham M. Kakade' 'Ohad Shamir' 'Karthik Sridharan' 'Ambuj Tewari']",
"Sham M. Kakade, Ohad Shamir, Karthik Sridharan, Ambuj Tewari"
] |
cs.LG | null | 0911.0225 | null | null | http://arxiv.org/pdf/0911.0225v1 | 2009-11-02T19:53:01Z | 2009-11-02T19:53:01Z | A Mirroring Theorem and its Application to a New Method of Unsupervised
Hierarchical Pattern Classification | In this paper, we prove a crucial theorem called Mirroring Theorem which
affirms that given a collection of samples with enough information in it such
that it can be classified into classes and subclasses then (i) There exists a
mapping which classifies and subclassifies these samples (ii) There exists a
hierarchical classifier which can be constructed by using Mirroring Neural
Networks (MNNs) in combination with a clustering algorithm that can approximate
this mapping. Thus, the proof of the Mirroring theorem provides a theoretical
basis for the existence and a practical feasibility of constructing
hierarchical classifiers, given the maps. Our proposed Mirroring Theorem can
also be considered as an extension to Kolmogrovs theorem in providing a
realistic solution for unsupervised classification. The techniques we develop,
are general in nature and have led to the construction of learning machines
which are (i) tree like in structure, (ii) modular (iii) with each module
running on a common algorithm (tandem algorithm) and (iv) selfsupervised. We
have actually built the architecture, developed the tandem algorithm of such a
hierarchical classifier and demonstrated it on an example problem.
| [
"['Dasika Ratna Deepthi' 'K. Eswaran']",
"Dasika Ratna Deepthi, K. Eswaran"
] |
cs.LG cs.AI | null | 0911.0460 | null | null | http://arxiv.org/pdf/0911.0460v2 | 2009-11-04T08:55:28Z | 2009-11-03T08:17:05Z | Feature-Weighted Linear Stacking | Ensemble methods, such as stacking, are designed to boost predictive accuracy
by blending the predictions of multiple machine learning models. Recent work
has shown that the use of meta-features, additional inputs describing each
example in a dataset, can boost the performance of ensemble methods, but the
greatest reported gains have come from nonlinear procedures requiring
significant tuning and training time. Here, we present a linear technique,
Feature-Weighted Linear Stacking (FWLS), that incorporates meta-features for
improved accuracy while retaining the well-known virtues of linear regression
regarding speed, stability, and interpretability. FWLS combines model
predictions linearly using coefficients that are themselves linear functions of
meta-features. This technique was a key facet of the solution of the second
place team in the recently concluded Netflix Prize competition. Significant
increases in accuracy over standard linear stacking are demonstrated on the
Netflix Prize collaborative filtering dataset.
| [
"Joseph Sill, Gabor Takacs, Lester Mackey, David Lin",
"['Joseph Sill' 'Gabor Takacs' 'Lester Mackey' 'David Lin']"
] |
cs.LG physics.data-an quant-ph | 10.1016/j.nuclphysbps.2010.02.009 | 0911.0462 | null | null | http://arxiv.org/abs/0911.0462v1 | 2009-11-03T00:27:36Z | 2009-11-03T00:27:36Z | Strange Bedfellows: Quantum Mechanics and Data Mining | Last year, in 2008, I gave a talk titled {\it Quantum Calisthenics}. This
year I am going to tell you about how the work I described then has spun off
into a most unlikely direction. What I am going to talk about is how one maps
the problem of finding clusters in a given data set into a problem in quantum
mechanics. I will then use the tricks I described to let quantum evolution lets
the clusters come together on their own.
| [
"['Marvin Weinstein']",
"Marvin Weinstein"
] |
cs.NE cs.LG | null | 0911.0485 | null | null | http://arxiv.org/pdf/0911.0485v1 | 2009-11-03T04:07:19Z | 2009-11-03T04:07:19Z | Novel Intrusion Detection using Probabilistic Neural Network and
Adaptive Boosting | This article applies Machine Learning techniques to solve Intrusion Detection
problems within computer networks. Due to complex and dynamic nature of
computer networks and hacking techniques, detecting malicious activities
remains a challenging task for security experts, that is, currently available
defense systems suffer from low detection capability and high number of false
alarms. To overcome such performance limitations, we propose a novel Machine
Learning algorithm, namely Boosted Subspace Probabilistic Neural Network
(BSPNN), which integrates an adaptive boosting technique and a semi parametric
neural network to obtain good tradeoff between accuracy and generality. As the
result, learning bias and generalization variance can be significantly
minimized. Substantial experiments on KDD 99 intrusion benchmark indicate that
our model outperforms other state of the art learning algorithms, with
significantly improved detection accuracy, minimal false alarms and relatively
small computational complexity.
| [
"['Tich Phuoc Tran' 'Longbing Cao' 'Dat Tran' 'Cuong Duc Nguyen']",
"Tich Phuoc Tran, Longbing Cao, Dat Tran, Cuong Duc Nguyen"
] |
q-bio.PE cs.LG q-bio.QM | null | 0911.0645 | null | null | http://arxiv.org/pdf/0911.0645v2 | 2009-11-22T00:09:42Z | 2009-11-03T18:43:43Z | Bayes estimators for phylogenetic reconstruction | Tree reconstruction methods are often judged by their accuracy, measured by
how close they get to the true tree. Yet most reconstruction methods like ML do
not explicitly maximize this accuracy. To address this problem, we propose a
Bayesian solution. Given tree samples, we propose finding the tree estimate
which is closest on average to the samples. This ``median'' tree is known as
the Bayes estimator (BE). The BE literally maximizes posterior expected
accuracy, measured in terms of closeness (distance) to the true tree. We
discuss a unified framework of BE trees, focusing especially on tree distances
which are expressible as squared euclidean distances. Notable examples include
Robinson--Foulds distance, quartet distance, and squared path difference. Using
simulated data, we show Bayes estimators can be efficiently computed in
practice by hill climbing. We also show that Bayes estimators achieve higher
accuracy, compared to maximum likelihood and neighbor joining.
| [
"['Peter Huggins' 'Wenbin Li' 'David Haws' 'Thomas Friedrich' 'Jinze Liu'\n 'Ruriko Yoshida']",
"Peter Huggins, Wenbin Li, David Haws, Thomas Friedrich, Jinze Liu,\n Ruriko Yoshida"
] |
cs.DS cs.LG | null | 0911.1174 | null | null | http://arxiv.org/pdf/0911.1174v1 | 2009-11-06T03:52:56Z | 2009-11-06T03:52:56Z | Sharp Dichotomies for Regret Minimization in Metric Spaces | The Lipschitz multi-armed bandit (MAB) problem generalizes the classical
multi-armed bandit problem by assuming one is given side information consisting
of a priori upper bounds on the difference in expected payoff between certain
pairs of strategies. Classical results of (Lai and Robbins 1985) and (Auer et
al. 2002) imply a logarithmic regret bound for the Lipschitz MAB problem on
finite metric spaces. Recent results on continuum-armed bandit problems and
their generalizations imply lower bounds of $\sqrt{t}$, or stronger, for many
infinite metric spaces such as the unit interval. Is this dichotomy universal?
We prove that the answer is yes: for every metric space, the optimal regret of
a Lipschitz MAB algorithm is either bounded above by any $f\in \omega(\log t)$,
or bounded below by any $g\in o(\sqrt{t})$. Perhaps surprisingly, this
dichotomy does not coincide with the distinction between finite and infinite
metric spaces; instead it depends on whether the completion of the metric space
is compact and countable. Our proof connects upper and lower bound techniques
in online learning with classical topological notions such as perfect sets and
the Cantor-Bendixson theorem. Among many other results, we show a similar
dichotomy for the "full-feedback" (a.k.a., "best-expert") version.
| [
"Robert Kleinberg and Aleksandrs Slivkins",
"['Robert Kleinberg' 'Aleksandrs Slivkins']"
] |
cs.AI cs.LG | null | 0911.1386 | null | null | http://arxiv.org/pdf/0911.1386v1 | 2009-11-07T02:52:53Z | 2009-11-07T02:52:53Z | Machine Learning: When and Where the Horses Went Astray? | Machine Learning is usually defined as a subfield of AI, which is busy with
information extraction from raw data sets. Despite of its common acceptance and
widespread recognition, this definition is wrong and groundless. Meaningful
information does not belong to the data that bear it. It belongs to the
observers of the data and it is a shared agreement and a convention among them.
Therefore, this private information cannot be extracted from the data by any
means. Therefore, all further attempts of Machine Learning apologists to
justify their funny business are inappropriate.
| [
"['Emanuel Diamant']",
"Emanuel Diamant"
] |
cs.DS cond-mat.stat-mech cs.DM cs.LG cs.NA math.OC | 10.1088/1751-8113/43/24/242002 | 0911.1419 | null | null | http://arxiv.org/abs/0911.1419v2 | 2010-05-02T15:58:46Z | 2009-11-08T04:15:01Z | Belief Propagation and Loop Calculus for the Permanent of a Non-Negative
Matrix | We consider computation of permanent of a positive $(N\times N)$ non-negative
matrix, $P=(P_i^j|i,j=1,\cdots,N)$, or equivalently the problem of weighted
counting of the perfect matchings over the complete bipartite graph $K_{N,N}$.
The problem is known to be of likely exponential complexity. Stated as the
partition function $Z$ of a graphical model, the problem allows exact Loop
Calculus representation [Chertkov, Chernyak '06] in terms of an interior
minimum of the Bethe Free Energy functional over non-integer doubly stochastic
matrix of marginal beliefs, $\beta=(\beta_i^j|i,j=1,\cdots,N)$, also
correspondent to a fixed point of the iterative message-passing algorithm of
the Belief Propagation (BP) type. Our main result is an explicit expression of
the exact partition function (permanent) in terms of the matrix of BP
marginals, $\beta$, as $Z=\mbox{Perm}(P)=Z_{BP}
\mbox{Perm}(\beta_i^j(1-\beta_i^j))/\prod_{i,j}(1-\beta_i^j)$, where $Z_{BP}$
is the BP expression for the permanent stated explicitly in terms if $\beta$.
We give two derivations of the formula, a direct one based on the Bethe Free
Energy and an alternative one combining the Ihara graph-$\zeta$ function and
the Loop Calculus approaches. Assuming that the matrix $\beta$ of the Belief
Propagation marginals is calculated, we provide two lower bounds and one
upper-bound to estimate the multiplicative term. Two complementary lower bounds
are based on the Gurvits-van der Waerden theorem and on a relation between the
modified permanent and determinant respectively.
| [
"['Yusuke Watanabe' 'Michael Chertkov']",
"Yusuke Watanabe and Michael Chertkov"
] |
physics.data-an cond-mat.stat-mech cs.LG nlin.CD stat.ME | null | 0911.2381 | null | null | http://arxiv.org/pdf/0911.2381v1 | 2009-11-12T13:08:20Z | 2009-11-12T13:08:20Z | Analytical Determination of Fractal Structure in Stochastic Time Series | Current methods for determining whether a time series exhibits fractal
structure (FS) rely on subjective assessments on estimators of the Hurst
exponent (H). Here, I introduce the Bayesian Assessment of Scaling, an
analytical framework for drawing objective and accurate inferences on the FS of
time series. The technique exploits the scaling property of the diffusion
associated to a time series. The resulting criterion is simple to compute and
represents an accurate characterization of the evidence supporting different
hypotheses on the scaling regime of a time series. Additionally, a closed-form
Maximum Likelihood estimator of H is derived from the criterion, and this
estimator outperforms the best available estimators.
| [
"['Fermín Moscoso del Prado Martín']",
"Ferm\\'in Moscoso del Prado Mart\\'in"
] |
cs.LG | 10.1109/TIT.2012.2201375 | 0911.2904 | null | null | http://arxiv.org/abs/0911.2904v4 | 2012-03-13T16:11:21Z | 2009-11-15T18:43:10Z | Sequential anomaly detection in the presence of noise and limited
feedback | This paper describes a methodology for detecting anomalies from sequentially
observed and potentially noisy data. The proposed approach consists of two main
elements: (1) {\em filtering}, or assigning a belief or likelihood to each
successive measurement based upon our ability to predict it from previous noisy
observations, and (2) {\em hedging}, or flagging potential anomalies by
comparing the current belief against a time-varying and data-adaptive
threshold. The threshold is adjusted based on the available feedback from an
end user. Our algorithms, which combine universal prediction with recent work
on online convex programming, do not require computing posterior distributions
given all current observations and involve simple primal-dual parameter
updates. At the heart of the proposed approach lie exponential-family models
which can be used in a wide variety of contexts and applications, and which
yield methods that achieve sublinear per-round regret against both static and
slowly varying product distributions with marginals drawn from the same
exponential family. Moreover, the regret against static distributions coincides
with the minimax value of the corresponding online strongly convex game. We
also prove bounds on the number of mistakes made during the hedging step
relative to the best offline choice of the threshold with access to all
estimated beliefs and feedback signals. We validate the theory on synthetic
data drawn from a time-varying distribution over binary vectors of high
dimensionality, as well as on the Enron email dataset.
| [
"['Maxim Raginsky' 'Rebecca Willett' 'Corinne Horn' 'Jorge Silva'\n 'Roummel Marcia']",
"Maxim Raginsky, Rebecca Willett, Corinne Horn, Jorge Silva, Roummel\n Marcia"
] |
cs.DS cs.LG | null | 0911.2974 | null | null | http://arxiv.org/pdf/0911.2974v3 | 2014-04-09T03:44:37Z | 2009-11-16T16:39:33Z | A Dynamic Near-Optimal Algorithm for Online Linear Programming | A natural optimization model that formulates many online resource allocation
and revenue management problems is the online linear program (LP) in which the
constraint matrix is revealed column by column along with the corresponding
objective coefficient. In such a model, a decision variable has to be set each
time a column is revealed without observing the future inputs and the goal is
to maximize the overall objective function. In this paper, we provide a
near-optimal algorithm for this general class of online problems under the
assumption of random order of arrival and some mild conditions on the size of
the LP right-hand-side input. Specifically, our learning-based algorithm works
by dynamically updating a threshold price vector at geometric time intervals,
where the dual prices learned from the revealed columns in the previous period
are used to determine the sequential decisions in the current period. Due to
the feature of dynamic learning, the competitiveness of our algorithm improves
over the past study of the same problem. We also present a worst-case example
showing that the performance of our algorithm is near-optimal.
| [
"['Shipra Agrawal' 'Zizhuo Wang' 'Yinyu Ye']",
"Shipra Agrawal, Zizhuo Wang, Yinyu Ye"
] |
cs.NE cs.LG | null | 0911.3298 | null | null | http://arxiv.org/pdf/0911.3298v1 | 2009-11-17T13:17:05Z | 2009-11-17T13:17:05Z | Understanding the Principles of Recursive Neural networks: A Generative
Approach to Tackle Model Complexity | Recursive Neural Networks are non-linear adaptive models that are able to
learn deep structured information. However, these models have not yet been
broadly accepted. This fact is mainly due to its inherent complexity. In
particular, not only for being extremely complex information processing models,
but also because of a computational expensive learning phase. The most popular
training method for these models is back-propagation through the structure.
This algorithm has been revealed not to be the most appropriate for structured
processing due to problems of convergence, while more sophisticated training
methods enhance the speed of convergence at the expense of increasing
significantly the computational cost. In this paper, we firstly perform an
analysis of the underlying principles behind these models aimed at
understanding their computational power. Secondly, we propose an approximate
second order stochastic learning algorithm. The proposed algorithm dynamically
adapts the learning rate throughout the training phase of the network without
incurring excessively expensive computational effort. The algorithm operates in
both on-line and batch modes. Furthermore, the resulting learning scheme is
robust against the vanishing gradients problem. The advantages of the proposed
algorithm are demonstrated with a real-world application example.
| [
"Alejandro Chinea",
"['Alejandro Chinea']"
] |
cs.LG | 10.1109/CTS.2009.5067478 | 0911.3304 | null | null | http://arxiv.org/abs/0911.3304v1 | 2009-11-17T13:35:40Z | 2009-11-17T13:35:40Z | Keystroke Dynamics Authentication For Collaborative Systems | We present in this paper a study on the ability and the benefits of using a
keystroke dynamics authentication method for collaborative systems.
Authentication is a challenging issue in order to guarantee the security of use
of collaborative systems during the access control step. Many solutions exist
in the state of the art such as the use of one time passwords or smart-cards.
We focus in this paper on biometric based solutions that do not necessitate any
additional sensor. Keystroke dynamics is an interesting solution as it uses
only the keyboard and is invisible for users. Many methods have been published
in this field. We make a comparative study of many of them considering the
operational constraints of use for collaborative systems.
| [
"['Romain Giot' 'Mohamad El-Abed' 'Christophe Rosenberger']",
"Romain Giot (GREYC), Mohamad El-Abed (GREYC), Christophe Rosenberger\n (GREYC)"
] |
cs.LG math.CO math.GT stat.ML | null | 0911.3633 | null | null | http://arxiv.org/pdf/0911.3633v1 | 2009-11-18T19:22:09Z | 2009-11-18T19:22:09Z | A Geometric Approach to Sample Compression | The Sample Compression Conjecture of Littlestone & Warmuth has remained
unsolved for over two decades. This paper presents a systematic geometric
investigation of the compression of finite maximum concept classes. Simple
arrangements of hyperplanes in Hyperbolic space, and Piecewise-Linear
hyperplane arrangements, are shown to represent maximum classes, generalizing
the corresponding Euclidean result. A main result is that PL arrangements can
be swept by a moving hyperplane to unlabeled d-compress any finite maximum
class, forming a peeling scheme as conjectured by Kuzmin & Warmuth. A corollary
is that some d-maximal classes cannot be embedded into any maximum class of VC
dimension d+k, for any constant k. The construction of the PL sweeping involves
Pachner moves on the one-inclusion graph, corresponding to moves of a
hyperplane across the intersection of d other hyperplanes. This extends the
well known Pachner moves for triangulations to cubical complexes.
| [
"Benjamin I. P. Rubinstein and J. Hyam Rubinstein",
"['Benjamin I. P. Rubinstein' 'J. Hyam Rubinstein']"
] |
stat.ML cs.CL cs.LG stat.AP | 10.1109/JSTSP.2010.2076050 | 0911.3944 | null | null | http://arxiv.org/abs/0911.3944v1 | 2009-11-20T01:30:36Z | 2009-11-20T01:30:36Z | Likelihood-based semi-supervised model selection with applications to
speech processing | In conventional supervised pattern recognition tasks, model selection is
typically accomplished by minimizing the classification error rate on a set of
so-called development data, subject to ground-truth labeling by human experts
or some other means. In the context of speech processing systems and other
large-scale practical applications, however, such labeled development data are
typically costly and difficult to obtain. This article proposes an alternative
semi-supervised framework for likelihood-based model selection that leverages
unlabeled data by using trained classifiers representing each model to
automatically generate putative labels. The errors that result from this
automatic labeling are shown to be amenable to results from robust statistics,
which in turn provide for minimax-optimal censored likelihood ratio tests that
recover the nonparametric sign test as a limiting case. This approach is then
validated experimentally using a state-of-the-art automatic speech recognition
system to select between candidate word pronunciations using unlabeled speech
data that only potentially contain instances of the words under test. Results
provide supporting evidence for the utility of this approach, and suggest that
it may also find use in other applications of machine learning.
| [
"Christopher M. White, Sanjeev P. Khudanpur, and Patrick J. Wolfe",
"['Christopher M. White' 'Sanjeev P. Khudanpur' 'Patrick J. Wolfe']"
] |
stat.ML cs.LG stat.ME | null | 0911.4046 | null | null | http://arxiv.org/pdf/0911.4046v3 | 2011-01-02T07:04:21Z | 2009-11-20T13:44:28Z | Super-Linear Convergence of Dual Augmented-Lagrangian Algorithm for
Sparsity Regularized Estimation | We analyze the convergence behaviour of a recently proposed algorithm for
regularized estimation called Dual Augmented Lagrangian (DAL). Our analysis is
based on a new interpretation of DAL as a proximal minimization algorithm. We
theoretically show under some conditions that DAL converges super-linearly in a
non-asymptotic and global sense. Due to a special modelling of sparse
estimation problems in the context of machine learning, the assumptions we make
are milder and more natural than those made in conventional analysis of
augmented Lagrangian algorithms. In addition, the new interpretation enables us
to generalize DAL to wide varieties of sparse estimation problems. We
experimentally confirm our analysis in a large scale $\ell_1$-regularized
logistic regression problem and extensively compare the efficiency of DAL
algorithm to previously proposed algorithms on both synthetic and benchmark
datasets.
| [
"Ryota Tomioka, Taiji Suzuki, Masashi Sugiyama",
"['Ryota Tomioka' 'Taiji Suzuki' 'Masashi Sugiyama']"
] |
cs.LG cs.HC | null | 0911.4262 | null | null | http://arxiv.org/pdf/0911.4262v1 | 2009-11-22T16:01:09Z | 2009-11-22T16:01:09Z | Towards Industrialized Conception and Production of Serious Games | Serious Games (SGs) have experienced a tremendous outburst these last years.
Video game companies have been producing fun, user-friendly SGs, but their
educational value has yet to be proven. Meanwhile, cognition research scientist
have been developing SGs in such a way as to guarantee an educational gain, but
the fun and attractive characteristics featured often would not meet the
public's expectations. The ideal SG must combine these two aspects while still
being economically viable. In this article, we propose a production chain model
to efficiently conceive and produce SGs that are certified for their
educational gain and fun qualities. Each step of this chain will be described
along with the human actors, the tools and the documents that intervene.
| [
"['Iza Marfisi-Schottman' 'Aymen Sghaier' 'Sébastien George'\n 'Franck Tarpin-Bernard' 'Patrick Prévôt']",
"Iza Marfisi-Schottman (LIESP), Aymen Sghaier (LIESP), S\\'ebastien\n George (LIESP), Franck Tarpin-Bernard (LIESP), Patrick Pr\\'ev\\^ot (LIESP)"
] |
cs.LG | null | 0911.4863 | null | null | http://arxiv.org/pdf/0911.4863v2 | 2011-05-13T01:52:49Z | 2009-11-25T14:26:54Z | Statistical exponential families: A digest with flash cards | This document describes concisely the ubiquitous class of exponential family
distributions met in statistics. The first part recalls definitions and
summarizes main properties and duality with Bregman divergences (all proofs are
skipped). The second part lists decompositions and related formula of common
exponential family distributions. We recall the Fisher-Rao-Riemannian
geometries and the dual affine connection information geometries of statistical
manifolds. It is intended to maintain and update this document and catalog by
adding new distribution items.
| [
"['Frank Nielsen' 'Vincent Garcia']",
"Frank Nielsen and Vincent Garcia"
] |
cs.AI cs.LG | null | 0911.5104 | null | null | http://arxiv.org/pdf/0911.5104v2 | 2009-12-30T23:34:14Z | 2009-11-26T15:52:33Z | A Bayesian Rule for Adaptive Control based on Causal Interventions | Explaining adaptive behavior is a central problem in artificial intelligence
research. Here we formalize adaptive agents as mixture distributions over
sequences of inputs and outputs (I/O). Each distribution of the mixture
constitutes a `possible world', but the agent does not know which of the
possible worlds it is actually facing. The problem is to adapt the I/O stream
in a way that is compatible with the true world. A natural measure of
adaptation can be obtained by the Kullback-Leibler (KL) divergence between the
I/O distribution of the true world and the I/O distribution expected by the
agent that is uncertain about possible worlds. In the case of pure input
streams, the Bayesian mixture provides a well-known solution for this problem.
We show, however, that in the case of I/O streams this solution breaks down,
because outputs are issued by the agent itself and require a different
probabilistic syntax as provided by intervention calculus. Based on this
calculus, we obtain a Bayesian control rule that allows modeling adaptive
behavior with mixture distributions over I/O streams. This rule might allow for
a novel approach to adaptive control based on a minimum KL-principle.
| [
"['Pedro A. Ortega' 'Daniel A. Braun']",
"Pedro A. Ortega, Daniel A. Braun"
] |
cs.CV cs.AI cs.LG cs.NE | null | 0911.5372 | null | null | http://arxiv.org/pdf/0911.5372v1 | 2009-11-28T04:58:38Z | 2009-11-28T04:58:38Z | Maximin affinity learning of image segmentation | Images can be segmented by first using a classifier to predict an affinity
graph that reflects the degree to which image pixels must be grouped together
and then partitioning the graph to yield a segmentation. Machine learning has
been applied to the affinity classifier to produce affinity graphs that are
good in the sense of minimizing edge misclassification rates. However, this
error measure is only indirectly related to the quality of segmentations
produced by ultimately partitioning the affinity graph. We present the first
machine learning algorithm for training a classifier to produce affinity graphs
that are good in the sense of producing segmentations that directly minimize
the Rand index, a well known segmentation performance measure. The Rand index
measures segmentation performance by quantifying the classification of the
connectivity of image pixel pairs after segmentation. By using the simple graph
partitioning algorithm of finding the connected components of the thresholded
affinity graph, we are able to train an affinity classifier to directly
minimize the Rand index of segmentations resulting from the graph partitioning.
Our learning algorithm corresponds to the learning of maximin affinities
between image pixel pairs, which are predictive of the pixel-pair connectivity.
| [
"['Srinivas C. Turaga' 'Kevin L. Briggman' 'Moritz Helmstaedter'\n 'Winfried Denk' 'H. Sebastian Seung']",
"Srinivas C. Turaga, Kevin L. Briggman, Moritz Helmstaedter, Winfried\n Denk, H. Sebastian Seung"
] |
cs.CL cs.LG | null | 0911.5703 | null | null | http://arxiv.org/pdf/0911.5703v1 | 2009-11-30T18:15:35Z | 2009-11-30T18:15:35Z | Hierarchies in Dictionary Definition Space | A dictionary defines words in terms of other words. Definitions can tell you
the meanings of words you don't know, but only if you know the meanings of the
defining words. How many words do you need to know (and which ones) in order to
be able to learn all the rest from definitions? We reduced dictionaries to
their "grounding kernels" (GKs), about 10% of the dictionary, from which all
the other words could be defined. The GK words turned out to have
psycholinguistic correlates: they were learned at an earlier age and more
concrete than the rest of the dictionary. But one can compress still more: the
GK turns out to have internal structure, with a strongly connected "kernel
core" (KC) and a surrounding layer, from which a hierarchy of definitional
distances can be derived, all the way out to the periphery of the full
dictionary. These definitional distances, too, are correlated with
psycholinguistic variables (age of acquisition, concreteness, imageability,
oral and written frequency) and hence perhaps with the "mental lexicon" in each
of our heads.
| [
"Olivier Picard, Alexandre Blondin-Masse, Stevan Harnad, Odile\n Marcotte, Guillaume Chicoisne and Yassine Gargouri",
"['Olivier Picard' 'Alexandre Blondin-Masse' 'Stevan Harnad'\n 'Odile Marcotte' 'Guillaume Chicoisne' 'Yassine Gargouri']"
] |
cs.LG cs.CR cs.DB | null | 0911.5708 | null | null | http://arxiv.org/pdf/0911.5708v1 | 2009-11-30T20:34:45Z | 2009-11-30T20:34:45Z | Learning in a Large Function Space: Privacy-Preserving Mechanisms for
SVM Learning | Several recent studies in privacy-preserving learning have considered the
trade-off between utility or risk and the level of differential privacy
guaranteed by mechanisms for statistical query processing. In this paper we
study this trade-off in private Support Vector Machine (SVM) learning. We
present two efficient mechanisms, one for the case of finite-dimensional
feature mappings and one for potentially infinite-dimensional feature mappings
with translation-invariant kernels. For the case of translation-invariant
kernels, the proposed mechanism minimizes regularized empirical risk in a
random Reproducing Kernel Hilbert Space whose kernel uniformly approximates the
desired kernel with high probability. This technique, borrowed from large-scale
learning, allows the mechanism to respond with a finite encoding of the
classifier, even when the function class is of infinite VC dimension.
Differential privacy is established using a proof technique from algorithmic
stability. Utility--the mechanism's response function is pointwise
epsilon-close to non-private SVM with probability 1-delta--is proven by
appealing to the smoothness of regularized empirical risk minimization with
respect to small perturbations to the feature mapping. We conclude with a lower
bound on the optimal differential privacy of the SVM. This negative result
states that for any delta, no mechanism can be simultaneously
(epsilon,delta)-useful and beta-differentially private for small epsilon and
small beta.
| [
"['Benjamin I. P. Rubinstein' 'Peter L. Bartlett' 'Ling Huang' 'Nina Taft']",
"Benjamin I. P. Rubinstein, Peter L. Bartlett, Ling Huang, Nina Taft"
] |
cs.LG cs.AI cs.CR cs.DB | null | 0912.0071 | null | null | http://arxiv.org/pdf/0912.0071v5 | 2011-02-16T22:35:55Z | 2009-12-01T04:35:44Z | Differentially Private Empirical Risk Minimization | Privacy-preserving machine learning algorithms are crucial for the
increasingly common setting in which personal data, such as medical or
financial records, are analyzed. We provide general techniques to produce
privacy-preserving approximations of classifiers learned via (regularized)
empirical risk minimization (ERM). These algorithms are private under the
$\epsilon$-differential privacy definition due to Dwork et al. (2006). First we
apply the output perturbation ideas of Dwork et al. (2006), to ERM
classification. Then we propose a new method, objective perturbation, for
privacy-preserving machine learning algorithm design. This method entails
perturbing the objective function before optimizing over classifiers. If the
loss and regularizer satisfy certain convexity and differentiability criteria,
we prove theoretical results showing that our algorithms preserve privacy, and
provide generalization bounds for linear and nonlinear kernels. We further
present a privacy-preserving technique for tuning the parameters in general
machine learning algorithms, thereby providing end-to-end privacy guarantees
for the training process. We apply these results to produce privacy-preserving
analogues of regularized logistic regression and support vector machines. We
obtain encouraging results from evaluating their performance on real
demographic and benchmark data sets. Our results show that both theoretically
and empirically, objective perturbation is superior to the previous
state-of-the-art, output perturbation, in managing the inherent tradeoff
between privacy and learning performance.
| [
"['Kamalika Chaudhuri' 'Claire Monteleoni' 'Anand D. Sarwate']",
"Kamalika Chaudhuri, Claire Monteleoni, Anand D. Sarwate"
] |
cs.LG | null | 0912.0086 | null | null | http://arxiv.org/pdf/0912.0086v1 | 2009-12-01T19:10:46Z | 2009-12-01T19:10:46Z | Learning Mixtures of Gaussians using the k-means Algorithm | One of the most popular algorithms for clustering in Euclidean space is the
$k$-means algorithm; $k$-means is difficult to analyze mathematically, and few
theoretical guarantees are known about it, particularly when the data is {\em
well-clustered}. In this paper, we attempt to fill this gap in the literature
by analyzing the behavior of $k$-means on well-clustered data. In particular,
we study the case when each cluster is distributed as a different Gaussian --
or, in other words, when the input comes from a mixture of Gaussians.
We analyze three aspects of the $k$-means algorithm under this assumption.
First, we show that when the input comes from a mixture of two spherical
Gaussians, a variant of the 2-means algorithm successfully isolates the
subspace containing the means of the mixture components. Second, we show an
exact expression for the convergence of our variant of the 2-means algorithm,
when the input is a very large number of samples from a mixture of spherical
Gaussians. Our analysis does not require any lower bound on the separation
between the mixture components.
Finally, we study the sample requirement of $k$-means; for a mixture of 2
spherical Gaussians, we show an upper bound on the number of samples required
by a variant of 2-means to get close to the true solution. The sample
requirement grows with increasing dimensionality of the data, and decreasing
separation between the means of the Gaussians. To match our upper bound, we
show an information-theoretic lower bound on any algorithm that learns mixtures
of two spherical Gaussians; our lower bound indicates that in the case when the
overlap between the probability masses of the two distributions is small, the
sample requirement of $k$-means is {\em near-optimal}.
| [
"Kamalika Chaudhuri, Sanjoy Dasgupta, Andrea Vattani",
"['Kamalika Chaudhuri' 'Sanjoy Dasgupta' 'Andrea Vattani']"
] |
cs.LG cs.CV | null | 0912.0572 | null | null | http://arxiv.org/pdf/0912.0572v1 | 2009-12-03T03:05:59Z | 2009-12-03T03:05:59Z | Isometric Multi-Manifolds Learning | Isometric feature mapping (Isomap) is a promising manifold learning method.
However, Isomap fails to work on data which distribute on clusters in a single
manifold or manifolds. Many works have been done on extending Isomap to
multi-manifolds learning. In this paper, we first proposed a new
multi-manifolds learning algorithm (M-Isomap) with help of a general procedure.
The new algorithm preserves intra-manifold geodesics and multiple
inter-manifolds edges precisely. Compared with previous methods, this algorithm
can isometrically learn data distributed on several manifolds. Secondly, the
original multi-cluster manifold learning algorithm first proposed in
\cite{DCIsomap} and called D-C Isomap has been revised so that the revised D-C
Isomap can learn multi-manifolds data. Finally, the features and effectiveness
of the proposed multi-manifolds learning algorithms are demonstrated and
compared through experiments.
| [
"['Mingyu Fan' 'Hong Qiao' 'Bo Zhang']",
"Mingyu Fan, Hong Qiao, and Bo Zhang"
] |
quant-ph cs.LG | null | 0912.0779 | null | null | http://arxiv.org/pdf/0912.0779v1 | 2009-12-04T06:30:27Z | 2009-12-04T06:30:27Z | Training a Large Scale Classifier with the Quantum Adiabatic Algorithm | In a previous publication we proposed discrete global optimization as a
method to train a strong binary classifier constructed as a thresholded sum
over weak classifiers. Our motivation was to cast the training of a classifier
into a format amenable to solution by the quantum adiabatic algorithm. Applying
adiabatic quantum computing (AQC) promises to yield solutions that are superior
to those which can be achieved with classical heuristic solvers. Interestingly
we found that by using heuristic solvers to obtain approximate solutions we
could already gain an advantage over the standard method AdaBoost. In this
communication we generalize the baseline method to large scale classifier
training. By large scale we mean that either the cardinality of the dictionary
of candidate weak classifiers or the number of weak learners used in the strong
classifier exceed the number of variables that can be handled effectively in a
single global optimization. For such situations we propose an iterative and
piecewise approach in which a subset of weak classifiers is selected in each
iteration via global optimization. The strong classifier is then constructed by
concatenating the subsets of weak classifiers. We show in numerical studies
that the generalized method again successfully competes with AdaBoost. We also
provide theoretical arguments as to why the proposed optimization method, which
does not only minimize the empirical loss but also adds L0-norm regularization,
is superior to versions of boosting that only minimize the empirical loss. By
conducting a Quantum Monte Carlo simulation we gather evidence that the quantum
adiabatic algorithm is able to handle a generic training problem efficiently.
| [
"Hartmut Neven, Vasil S. Denchev, Geordie Rose, William G. Macready",
"['Hartmut Neven' 'Vasil S. Denchev' 'Geordie Rose' 'William G. Macready']"
] |
cs.LG cs.NE | null | 0912.1007 | null | null | http://arxiv.org/pdf/0912.1007v1 | 2009-12-05T12:41:40Z | 2009-12-05T12:41:40Z | Designing Kernel Scheme for Classifiers Fusion | In this paper, we propose a special fusion method for combining ensembles of
base classifiers utilizing new neural networks in order to improve overall
efficiency of classification. While ensembles are designed such that each
classifier is trained independently while the decision fusion is performed as a
final procedure, in this method, we would be interested in making the fusion
process more adaptive and efficient. This new combiner, called Neural Network
Kernel Least Mean Square1, attempts to fuse outputs of the ensembles of
classifiers. The proposed Neural Network has some special properties such as
Kernel abilities,Least Mean Square features, easy learning over variants of
patterns and traditional neuron capabilities. Neural Network Kernel Least Mean
Square is a special neuron which is trained with Kernel Least Mean Square
properties. This new neuron is used as a classifiers combiner to fuse outputs
of base neural network classifiers. Performance of this method is analyzed and
compared with other fusion methods. The analysis represents higher performance
of our new method as opposed to others.
| [
"['Mehdi Salkhordeh Haghighi' 'Hadi Sadoghi Yazdi' 'Abedin Vahedian'\n 'Hamed Modaghegh']",
"Mehdi Salkhordeh Haghighi, Hadi Sadoghi Yazdi, Abedin Vahedian, Hamed\n Modaghegh"
] |
cs.CV cs.LG | null | 0912.1009 | null | null | http://arxiv.org/pdf/0912.1009v1 | 2009-12-05T12:54:24Z | 2009-12-05T12:54:24Z | Biogeography based Satellite Image Classification | Biogeography is the study of the geographical distribution of biological
organisms. The mindset of the engineer is that we can learn from nature.
Biogeography Based Optimization is a burgeoning nature inspired technique to
find the optimal solution of the problem. Satellite image classification is an
important task because it is the only way we can know about the land cover map
of inaccessible areas. Though satellite images have been classified in past by
using various techniques, the researchers are always finding alternative
strategies for satellite image classification so that they may be prepared to
select the most appropriate technique for the feature extraction task in hand.
This paper is focused on classification of the satellite image of a particular
land cover using the theory of Biogeography based Optimization. The original
BBO algorithm does not have the inbuilt property of clustering which is
required during image classification. Hence modifications have been proposed to
the original algorithm and the modified algorithm is used to classify the
satellite image of a given region. The results indicate that highly accurate
land cover features can be extracted effectively when the proposed algorithm is
used.
| [
"V.K.Panchal, Parminder Singh, Navdeep Kaur, Harish Kundra",
"['V. K. Panchal' 'Parminder Singh' 'Navdeep Kaur' 'Harish Kundra']"
] |
cs.CR cs.LG | null | 0912.1014 | null | null | http://arxiv.org/pdf/0912.1014v1 | 2009-12-05T13:15:08Z | 2009-12-05T13:15:08Z | An ensemble approach for feature selection of Cyber Attack Dataset | Feature selection is an indispensable preprocessing step when mining huge
datasets that can significantly improve the overall system performance.
Therefore in this paper we focus on a hybrid approach of feature selection.
This method falls into two phases. The filter phase select the features with
highest information gain and guides the initialization of search process for
wrapper phase whose output the final feature subset. The final feature subsets
are passed through the Knearest neighbor classifier for classification of
attacks. The effectiveness of this algorithm is demonstrated on DARPA KDDCUP99
cyber attack dataset.
| [
"Shailendra Singh, Sanjay Silakari",
"['Shailendra Singh' 'Sanjay Silakari']"
] |
stat.ML cs.LG | null | 0912.1128 | null | null | http://arxiv.org/pdf/0912.1128v1 | 2009-12-06T19:29:04Z | 2009-12-06T19:29:04Z | How to Explain Individual Classification Decisions | After building a classifier with modern tools of machine learning we
typically have a black box at hand that is able to predict well for unseen
data. Thus, we get an answer to the question what is the most likely label of a
given unseen data point. However, most methods will provide no answer why the
model predicted the particular label for a single instance and what features
were most influential for that particular instance. The only method that is
currently able to provide such explanations are decision trees. This paper
proposes a procedure which (based on a set of assumptions) allows to explain
the decisions of any classification method.
| [
"David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe,\n Katja Hansen, Klaus-Robert Mueller",
"['David Baehrens' 'Timon Schroeter' 'Stefan Harmeling' 'Motoaki Kawanabe'\n 'Katja Hansen' 'Klaus-Robert Mueller']"
] |
cs.CR cs.GT cs.LG | 10.1007/978-3-642-14577-3_16 | 0912.1155 | null | null | http://arxiv.org/abs/0912.1155v2 | 2009-12-22T04:36:44Z | 2009-12-07T01:45:32Z | A Learning-Based Approach to Reactive Security | Despite the conventional wisdom that proactive security is superior to
reactive security, we show that reactive security can be competitive with
proactive security as long as the reactive defender learns from past attacks
instead of myopically overreacting to the last attack. Our game-theoretic model
follows common practice in the security literature by making worst-case
assumptions about the attacker: we grant the attacker complete knowledge of the
defender's strategy and do not require the attacker to act rationally. In this
model, we bound the competitive ratio between a reactive defense algorithm
(which is inspired by online learning theory) and the best fixed proactive
defense. Additionally, we show that, unlike proactive defenses, this reactive
strategy is robust to a lack of information about the attacker's incentives and
knowledge.
| [
"Adam Barth, Benjamin I. P. Rubinstein, Mukund Sundararajan, John C.\n Mitchell, Dawn Song, Peter L. Bartlett",
"['Adam Barth' 'Benjamin I. P. Rubinstein' 'Mukund Sundararajan'\n 'John C. Mitchell' 'Dawn Song' 'Peter L. Bartlett']"
] |
cs.LG | null | 0912.1198 | null | null | http://arxiv.org/pdf/0912.1198v1 | 2009-12-07T10:35:56Z | 2009-12-07T10:35:56Z | Delay-Optimal Power and Subcarrier Allocation for OFDMA Systems via
Stochastic Approximation | In this paper, we consider delay-optimal power and subcarrier allocation
design for OFDMA systems with $N_F$ subcarriers, $K$ mobiles and one base
station. There are $K$ queues at the base station for the downlink traffic to
the $K$ mobiles with heterogeneous packet arrivals and delay requirements. We
shall model the problem as a $K$-dimensional infinite horizon average reward
Markov Decision Problem (MDP) where the control actions are assumed to be a
function of the instantaneous Channel State Information (CSI) as well as the
joint Queue State Information (QSI). This problem is challenging because it
corresponds to a stochastic Network Utility Maximization (NUM) problem where
general solution is still unknown. We propose an {\em online stochastic value
iteration} solution using {\em stochastic approximation}. The proposed power
control algorithm, which is a function of both the CSI and the QSI, takes the
form of multi-level water-filling. We prove that under two mild conditions in
Theorem 1 (One is the stepsize condition. The other is the condition on
accessibility of the Markov Chain, which can be easily satisfied in most of the
cases we are interested.), the proposed solution converges to the optimal
solution almost surely (with probability 1) and the proposed framework offers a
possible solution to the general stochastic NUM problem. By exploiting the
birth-death structure of the queue dynamics, we obtain a reduced complexity
decomposed solution with linear $\mathcal{O}(KN_F)$ complexity and
$\mathcal{O}(K)$ memory requirement.
| [
"['Vincent K. N. Lau' 'Ying Cui']",
"Vincent K.N.Lau and Ying Cui"
] |
cs.LG | null | 0912.1822 | null | null | http://arxiv.org/pdf/0912.1822v1 | 2009-12-09T18:11:11Z | 2009-12-09T18:11:11Z | Association Rule Pruning based on Interestingness Measures with
Clustering | Association rule mining plays vital part in knowledge mining. The difficult
task is discovering knowledge or useful rules from the large number of rules
generated for reduced support. For pruning or grouping rules, several
techniques are used such as rule structure cover methods, informative cover
methods, rule clustering, etc. Another way of selecting association rules is
based on interestingness measures such as support, confidence, correlation, and
so on. In this paper, we study how rule clusters of the pattern Xi - Y are
distributed over different interestingness measures.
| [
"['S. Kannan' 'R. Bhaskaran']",
"S.Kannan and R.Bhaskaran"
] |
cs.CV cs.LG | null | 0912.1830 | null | null | http://arxiv.org/pdf/0912.1830v1 | 2009-12-09T18:41:49Z | 2009-12-09T18:41:49Z | Gesture Recognition with a Focus on Important Actions by Using a Path
Searching Method in Weighted Graph | This paper proposes a method of gesture recognition with a focus on important
actions for distinguishing similar gestures. The method generates a partial
action sequence by using optical flow images, expresses the sequence in the
eigenspace, and checks the feature vector sequence by applying an optimum
path-searching method of weighted graph to focus the important actions. Also
presented are the results of an experiment on the recognition of similar sign
language words.
| [
"['Kazumoto Tanaka']",
"Kazumoto Tanaka"
] |
cs.CV cs.LG | null | 0912.2302 | null | null | http://arxiv.org/pdf/0912.2302v1 | 2009-12-11T18:14:29Z | 2009-12-11T18:14:29Z | Synthesis of supervised classification algorithm using intelligent and
statistical tools | A fundamental task in detecting foreground objects in both static and dynamic
scenes is to take the best choice of color system representation and the
efficient technique for background modeling. We propose in this paper a
non-parametric algorithm dedicated to segment and to detect objects in color
images issued from a football sports meeting. Indeed segmentation by pixel
concern many applications and revealed how the method is robust to detect
objects, even in presence of strong shadows and highlights. In the other hand
to refine their playing strategy such as in football, handball, volley ball,
Rugby..., the coach need to have a maximum of technical-tactics information
about the on-going of the game and the players. We propose in this paper a
range of algorithms allowing the resolution of many problems appearing in the
automated process of team identification, where each player is affected to his
corresponding team relying on visual data. The developed system was tested on a
match of the Tunisian national competition. This work is prominent for many
next computer vision studies as it's detailed in this study.
| [
"['Ali Douik' 'Mourad Moussa Jlassi']",
"Ali Douik, Mourad Moussa Jlassi"
] |
cs.LG | null | 0912.2314 | null | null | http://arxiv.org/pdf/0912.2314v1 | 2009-12-11T18:50:46Z | 2009-12-11T18:50:46Z | Early Detection of Breast Cancer using SVM Classifier Technique | This paper presents a tumor detection algorithm from mammogram. The proposed
system focuses on the solution of two problems. One is how to detect tumors as
suspicious regions with a very weak contrast to their background and another is
how to extract features which categorize tumors. The tumor detection method
follows the scheme of (a) mammogram enhancement. (b) The segmentation of the
tumor area. (c) The extraction of features from the segmented tumor area. (d)
The use of SVM classifier. The enhancement can be defined as conversion of the
image quality to a better and more understandable level. The mammogram
enhancement procedure includes filtering, top hat operation, DWT. Then the
contrast stretching is used to increase the contrast of the image. The
segmentation of mammogram images has been playing an important role to improve
the detection and diagnosis of breast cancer. The most common segmentation
method used is thresholding. The features are extracted from the segmented
breast area. Next stage include, which classifies the regions using the SVM
classifier. The method was tested on 75 mammographic images, from the mini-MIAS
database. The methodology achieved a sensitivity of 88.75%.
| [
"['Y. Ireaneus Anna Rejani' 'S. Thamarai Selvi']",
"Y.Ireaneus Anna Rejani, S.Thamarai Selvi"
] |
cs.LG cs.AI | null | 0912.2385 | null | null | http://arxiv.org/pdf/0912.2385v1 | 2009-12-12T00:59:26Z | 2009-12-12T00:59:26Z | Closing the Learning-Planning Loop with Predictive State Representations | A central problem in artificial intelligence is that of planning to maximize
future reward under uncertainty in a partially observable environment. In this
paper we propose and demonstrate a novel algorithm which accurately learns a
model of such an environment directly from sequences of action-observation
pairs. We then close the loop from observations to actions by planning in the
learned model and recovering a policy which is near-optimal in the original
environment. Specifically, we present an efficient and statistically consistent
spectral algorithm for learning the parameters of a Predictive State
Representation (PSR). We demonstrate the algorithm by learning a model of a
simulated high-dimensional, vision-based mobile robot planning task, and then
perform approximate point-based planning in the learned PSR. Analysis of our
results shows that the algorithm learns a state space which efficiently
captures the essential features of the environment. This representation allows
accurate prediction with a small number of parameters, and enables successful
and efficient planning.
| [
"Byron Boots, Sajid M. Siddiqi, Geoffrey J. Gordon",
"['Byron Boots' 'Sajid M. Siddiqi' 'Geoffrey J. Gordon']"
] |
cs.CC cs.LG | null | 0912.2709 | null | null | http://arxiv.org/pdf/0912.2709v1 | 2009-12-14T19:14:03Z | 2009-12-14T19:14:03Z | The Gaussian Surface Area and Noise Sensitivity of Degree-$d$
Polynomials | We provide asymptotically sharp bounds for the Gaussian surface area and the
Gaussian noise sensitivity of polynomial threshold functions. In particular we
show that if $f$ is a degree-$d$ polynomial threshold function, then its
Gaussian sensitivity at noise rate $\epsilon$ is less than some quantity
asymptotic to $\frac{d\sqrt{2\epsilon}}{\pi}$ and the Gaussian surface area is
at most $\frac{d}{\sqrt{2\pi}}$. Furthermore these bounds are asymptotically
tight as $\epsilon\to 0$ and $f$ the threshold function of a product of $d$
distinct homogeneous linear functions.
| [
"Daniel M. Kane",
"['Daniel M. Kane']"
] |
cs.NE cs.CR cs.LG | null | 0912.2843 | null | null | http://arxiv.org/pdf/0912.2843v2 | 2010-05-30T09:00:50Z | 2009-12-15T10:57:58Z | Intrusion Detection In Mobile Ad Hoc Networks Using GA Based Feature
Selection | Mobile ad hoc networking (MANET) has become an exciting and important
technology in recent years because of the rapid proliferation of wireless
devices. MANETs are highly vulnerable to attacks due to the open medium,
dynamically changing network topology and lack of centralized monitoring point.
It is important to search new architecture and mechanisms to protect the
wireless networks and mobile computing application. IDS analyze the network
activities by means of audit data and use patterns of well-known attacks or
normal profile to detect potential attacks. There are two methods to analyze:
misuse detection and anomaly detection. Misuse detection is not effective
against unknown attacks and therefore, anomaly detection method is used. In
this approach, the audit data is collected from each mobile node after
simulating the attack and compared with the normal behavior of the system. If
there is any deviation from normal behavior then the event is considered as an
attack. Some of the features of collected audit data may be redundant or
contribute little to the detection process. So it is essential to select the
important features to increase the detection rate. This paper focuses on
implementing two feature selection methods namely, markov blanket discovery and
genetic algorithm. In genetic algorithm, bayesian network is constructed over
the collected features and fitness function is calculated. Based on the fitness
value the features are selected. Markov blanket discovery also uses bayesian
network and the features are selected depending on the minimum description
length. During the evaluation phase, the performances of both approaches are
compared based on detection rate and false alarm rate.
| [
"['R. Nallusamy' 'K. Jayarajan' 'K. Duraiswamy']",
"R.Nallusamy, K.Jayarajan, K.Duraiswamy"
] |
cs.LG | null | 0912.3983 | null | null | http://arxiv.org/pdf/0912.3983v1 | 2009-12-20T05:21:45Z | 2009-12-20T05:21:45Z | Performance Analysis of AIM-K-means & K-means in Quality Cluster
Generation | Among all the partition based clustering algorithms K-means is the most
popular and well known method. It generally shows impressive results even in
considerably large data sets. The computational complexity of K-means does not
suffer from the size of the data set. The main disadvantage faced in performing
this clustering is that the selection of initial means. If the user does not
have adequate knowledge about the data set, it may lead to erroneous results.
The algorithm Automatic Initialization of Means (AIM), which is an extension to
K-means, has been proposed to overcome the problem of initial mean generation.
In this paper an attempt has been made to compare the performance of the
algorithms through implementation
| [
"['Samarjeet Borah' 'Mrinal Kanti Ghose']",
"Samarjeet Borah, Mrinal Kanti Ghose"
] |
cs.LG | 10.1109/TIT.2011.2182033 | 0912.3995 | null | null | http://arxiv.org/abs/0912.3995v4 | 2010-06-09T23:24:13Z | 2009-12-21T00:08:19Z | Gaussian Process Optimization in the Bandit Setting: No Regret and
Experimental Design | Many applications require optimizing an unknown, noisy function that is
expensive to evaluate. We formalize this task as a multi-armed bandit problem,
where the payoff function is either sampled from a Gaussian process (GP) or has
low RKHS norm. We resolve the important open problem of deriving regret bounds
for this setting, which imply novel convergence rates for GP optimization. We
analyze GP-UCB, an intuitive upper-confidence based algorithm, and bound its
cumulative regret in terms of maximal information gain, establishing a novel
connection between GP optimization and experimental design. Moreover, by
bounding the latter in terms of operator spectra, we obtain explicit sublinear
regret bounds for many commonly used covariance functions. In some important
cases, our bounds have surprisingly weak dependence on the dimensionality. In
our experiments on real sensor data, GP-UCB compares favorably with other
heuristical GP optimization approaches.
| [
"['Niranjan Srinivas' 'Andreas Krause' 'Sham M. Kakade' 'Matthias Seeger']",
"Niranjan Srinivas, Andreas Krause, Sham M. Kakade and Matthias Seeger"
] |
cs.LG cs.AI | null | 0912.4473 | null | null | http://arxiv.org/pdf/0912.4473v2 | 2010-06-26T22:47:44Z | 2009-12-22T18:03:55Z | Learning to Predict Combinatorial Structures | The major challenge in designing a discriminative learning algorithm for
predicting structured data is to address the computational issues arising from
the exponential size of the output space. Existing algorithms make different
assumptions to ensure efficient, polynomial time estimation of model
parameters. For several combinatorial structures, including cycles, partially
ordered sets, permutations and other graph classes, these assumptions do not
hold. In this thesis, we address the problem of designing learning algorithms
for predicting combinatorial structures by introducing two new assumptions: (i)
The first assumption is that a particular counting problem can be solved
efficiently. The consequence is a generalisation of the classical ridge
regression for structured prediction. (ii) The second assumption is that a
particular sampling problem can be solved efficiently. The consequence is a new
technique for designing and analysing probabilistic structured prediction
models. These results can be applied to solve several complex learning problems
including but not limited to multi-label classification, multi-category
hierarchical classification, and label ranking.
| [
"['Shankar Vembu']",
"Shankar Vembu"
] |
cs.LG cs.AI cs.IT math.IT math.ST stat.TH | null | 0912.4883 | null | null | http://arxiv.org/pdf/0912.4883v1 | 2009-12-24T15:29:32Z | 2009-12-24T15:29:32Z | On Finding Predictors for Arbitrary Families of Processes | The problem is sequence prediction in the following setting. A sequence
$x_1,...,x_n,...$ of discrete-valued observations is generated according to
some unknown probabilistic law (measure) $\mu$. After observing each outcome,
it is required to give the conditional probabilities of the next observation.
The measure $\mu$ belongs to an arbitrary but known class $C$ of stochastic
process measures. We are interested in predictors $\rho$ whose conditional
probabilities converge (in some sense) to the "true" $\mu$-conditional
probabilities if any $\mu\in C$ is chosen to generate the sequence. The
contribution of this work is in characterizing the families $C$ for which such
predictors exist, and in providing a specific and simple form in which to look
for a solution. We show that if any predictor works, then there exists a
Bayesian predictor, whose prior is discrete, and which works too. We also find
several sufficient and necessary conditions for the existence of a predictor,
in terms of topological characterizations of the family $C$, as well as in
terms of local behaviour of the measures in $C$, which in some cases lead to
procedures for constructing such predictors. It should be emphasized that the
framework is completely general: the stochastic processes considered are not
required to be i.i.d., stationary, or to belong to any parametric or countable
family.
| [
"Daniil Ryabko (INRIA Futurs, Lifl)",
"['Daniil Ryabko']"
] |
cs.CC cs.CG cs.DM cs.LG math.PR | 10.1145/2395116.2395118 | 0912.4884 | null | null | http://arxiv.org/abs/0912.4884v2 | 2012-09-12T23:33:10Z | 2009-12-24T15:35:56Z | An Invariance Principle for Polytopes | Let X be randomly chosen from {-1,1}^n, and let Y be randomly chosen from the
standard spherical Gaussian on R^n. For any (possibly unbounded) polytope P
formed by the intersection of k halfspaces, we prove that
|Pr [X belongs to P] - Pr [Y belongs to P]| < log^{8/5}k * Delta, where Delta
is a parameter that is small for polytopes formed by the intersection of
"regular" halfspaces (i.e., halfspaces with low influence). The novelty of our
invariance principle is the polylogarithmic dependence on k. Previously, only
bounds that were at least linear in k were known. We give two important
applications of our main result: (1) A polylogarithmic in k bound on the
Boolean noise sensitivity of intersections of k "regular" halfspaces (previous
work gave bounds linear in k). (2) A pseudorandom generator (PRG) with seed
length O((log n)*poly(log k,1/delta)) that delta-fools all polytopes with k
faces with respect to the Gaussian distribution. We also obtain PRGs with
similar parameters that fool polytopes formed by intersection of regular
halfspaces over the hypercube. Using our PRG constructions, we obtain the first
deterministic quasi-polynomial time algorithms for approximately counting the
number of solutions to a broad class of integer programs, including dense
covering problems and contingency tables.
| [
"['Prahladh Harsha' 'Adam Klivans' 'Raghu Meka']",
"Prahladh Harsha, Adam Klivans and Raghu Meka"
] |
cs.LG cs.AI | null | 0912.5029 | null | null | http://arxiv.org/pdf/0912.5029v1 | 2009-12-26T16:32:46Z | 2009-12-26T16:32:46Z | Complexity of stochastic branch and bound methods for belief tree search
in Bayesian reinforcement learning | There has been a lot of recent work on Bayesian methods for reinforcement
learning exhibiting near-optimal online performance. The main obstacle facing
such methods is that in most problems of interest, the optimal solution
involves planning in an infinitely large tree. However, it is possible to
obtain stochastic lower and upper bounds on the value of each tree node. This
enables us to use stochastic branch and bound algorithms to search the tree
efficiently. This paper proposes two such algorithms and examines their
complexity in this setting.
| [
"['Christos Dimitrakakis']",
"Christos Dimitrakakis"
] |
stat.ME cs.LG physics.soc-ph q-bio.QM stat.AP | 10.1214/09-AOAS321 | 0912.5193 | null | null | http://arxiv.org/abs/0912.5193v3 | 2013-08-29T06:50:07Z | 2009-12-28T17:56:50Z | Ranking relations using analogies in biological and information networks | Analogical reasoning depends fundamentally on the ability to learn and
generalize about relations between objects. We develop an approach to
relational learning which, given a set of pairs of objects
$\mathbf{S}=\{A^{(1)}:B^{(1)},A^{(2)}:B^{(2)},\ldots,A^{(N)}:B ^{(N)}\}$,
measures how well other pairs A:B fit in with the set $\mathbf{S}$. Our work
addresses the following question: is the relation between objects A and B
analogous to those relations found in $\mathbf{S}$? Such questions are
particularly relevant in information retrieval, where an investigator might
want to search for analogous pairs of objects that match the query set of
interest. There are many ways in which objects can be related, making the task
of measuring analogies very challenging. Our approach combines a similarity
measure on function spaces with Bayesian analysis to produce a ranking. It
requires data containing features of the objects of interest and a link matrix
specifying which relationships exist; no further attributes of such
relationships are necessary. We illustrate the potential of our method on text
analysis and information networks. An application on discovering functional
interactions between pairs of proteins is discussed in detail, where we show
that our approach can work in practice even if a small set of protein pairs is
provided.
| [
"Ricardo Silva, Katherine Heller, Zoubin Ghahramani, Edoardo M. Airoldi",
"['Ricardo Silva' 'Katherine Heller' 'Zoubin Ghahramani'\n 'Edoardo M. Airoldi']"
] |
stat.ME cs.LG physics.soc-ph q-bio.MN stat.ML | null | 0912.5410 | null | null | http://arxiv.org/pdf/0912.5410v1 | 2009-12-29T17:53:13Z | 2009-12-29T17:53:13Z | A survey of statistical network models | Networks are ubiquitous in science and have become a focal point for
discussion in everyday life. Formal statistical models for the analysis of
network data have emerged as a major topic of interest in diverse areas of
study, and most of these involve a form of graphical representation.
Probability models on graphs date back to 1959. Along with empirical studies in
social psychology and sociology from the 1960s, these early works generated an
active network community and a substantial literature in the 1970s. This effort
moved into the statistical literature in the late 1970s and 1980s, and the past
decade has seen a burgeoning network literature in statistical physics and
computer science. The growth of the World Wide Web and the emergence of online
networking communities such as Facebook, MySpace, and LinkedIn, and a host of
more specialized professional network communities has intensified interest in
the study of networks and network data. Our goal in this review is to provide
the reader with an entry point to this burgeoning literature. We begin with an
overview of the historical development of statistical network modeling and then
we introduce a number of examples that have been studied in the network
literature. Our subsequent discussion focuses on a number of prominent static
and dynamic network models and their interconnections. We emphasize formal
model descriptions, and pay special attention to the interpretation of
parameters and their estimation. We end with a description of some open
problems and challenges for machine learning and statistics.
| [
"['Anna Goldenberg' 'Alice X Zheng' 'Stephen E Fienberg'\n 'Edoardo M Airoldi']",
"Anna Goldenberg, Alice X Zheng, Stephen E Fienberg, Edoardo M Airoldi"
] |
cs.LG | null | 1001.0405 | null | null | http://arxiv.org/pdf/1001.0405v1 | 2010-01-03T19:54:40Z | 2010-01-03T19:54:40Z | Optimal Query Complexity for Reconstructing Hypergraphs | In this paper we consider the problem of reconstructing a hidden weighted
hypergraph of constant rank using additive queries. We prove the following: Let
$G$ be a weighted hidden hypergraph of constant rank with n vertices and $m$
hyperedges. For any $m$ there exists a non-adaptive algorithm that finds the
edges of the graph and their weights using $$ O(\frac{m\log n}{\log m}) $$
additive queries. This solves the open problem in [S. Choi, J. H. Kim. Optimal
Query Complexity Bounds for Finding Graphs. {\em STOC}, 749--758,~2008].
When the weights of the hypergraph are integers that are less than
$O(poly(n^d/m))$ where $d$ is the rank of the hypergraph (and therefore for
unweighted hypergraphs) there exists a non-adaptive algorithm that finds the
edges of the graph and their weights using $$ O(\frac{m\log \frac{n^d}{m}}{\log
m}). $$ additive queries.
Using the information theoretic bound the above query complexities are tight.
| [
"Nader H. Bshouty and Hanna Mazzawi",
"['Nader H. Bshouty' 'Hanna Mazzawi']"
] |
cs.CG cs.CV cs.LG | null | 1001.0591 | null | null | http://arxiv.org/pdf/1001.0591v2 | 2011-03-13T22:40:00Z | 2010-01-04T22:21:08Z | Comparing Distributions and Shapes using the Kernel Distance | Starting with a similarity function between objects, it is possible to define
a distance metric on pairs of objects, and more generally on probability
distributions over them. These distance metrics have a deep basis in functional
analysis, measure theory and geometric measure theory, and have a rich
structure that includes an isometric embedding into a (possibly infinite
dimensional) Hilbert space. They have recently been applied to numerous
problems in machine learning and shape analysis.
In this paper, we provide the first algorithmic analysis of these distance
metrics. Our main contributions are as follows: (i) We present fast
approximation algorithms for computing the kernel distance between two point
sets P and Q that runs in near-linear time in the size of (P cup Q) (note that
an explicit calculation would take quadratic time). (ii) We present
polynomial-time algorithms for approximately minimizing the kernel distance
under rigid transformation; they run in time O(n + poly(1/epsilon, log n)).
(iii) We provide several general techniques for reducing complex objects to
convenient sparse representations (specifically to point sets or sets of points
sets) which approximately preserve the kernel distance. In particular, this
allows us to reduce problems of computing the kernel distance between various
types of objects such as curves, surfaces, and distributions to computing the
kernel distance between point sets. These take advantage of the reproducing
kernel Hilbert space and a new relation linking binary range spaces to
continuous range spaces with bounded fat-shattering dimension.
| [
"['Sarang Joshi' 'Raj Varma Kommaraju' 'Jeff M. Phillips'\n 'Suresh Venkatasubramanian']",
"Sarang Joshi, Raj Varma Kommaraju, Jeff M. Phillips, and Suresh\n Venkatasubramanian"
] |
stat.ME cs.LG stat.ML | null | 1001.0597 | null | null | http://arxiv.org/pdf/1001.0597v2 | 2011-01-21T15:42:15Z | 2010-01-04T22:47:31Z | Inference of global clusters from locally distributed data | We consider the problem of analyzing the heterogeneity of clustering
distributions for multiple groups of observed data, each of which is indexed by
a covariate value, and inferring global clusters arising from observations
aggregated over the covariate domain. We propose a novel Bayesian nonparametric
method reposing on the formalism of spatial modeling and a nested hierarchy of
Dirichlet processes. We provide an analysis of the model properties, relating
and contrasting the notions of local and global clusters. We also provide an
efficient inference algorithm, and demonstrate the utility of our method in
several data examples, including the problem of object tracking and a global
clustering analysis of functional data where the functional identity
information is not available.
| [
"['XuanLong Nguyen']",
"XuanLong Nguyen"
] |
cs.LG cs.CY cs.IR | null | 1001.0700 | null | null | http://arxiv.org/pdf/1001.0700v1 | 2010-01-05T13:06:21Z | 2010-01-05T13:06:21Z | Vandalism Detection in Wikipedia: a Bag-of-Words Classifier Approach | A bag-of-words based probabilistic classifier is trained using regularized
logistic regression to detect vandalism in the English Wikipedia. Isotonic
regression is used to calibrate the class membership probabilities. Learning
curve, reliability, ROC, and cost analysis are performed.
| [
"Amit Belani",
"['Amit Belani']"
] |
cs.LG | null | 1001.0879 | null | null | http://arxiv.org/pdf/1001.0879v1 | 2010-01-06T12:40:13Z | 2010-01-06T12:40:13Z | Linear Probability Forecasting | Multi-class classification is one of the most important tasks in machine
learning. In this paper we consider two online multi-class classification
problems: classification by a linear model and by a kernelized model. The
quality of predictions is measured by the Brier loss function. We suggest two
computationally efficient algorithms to work with these problems and prove
theoretical guarantees on their losses. We kernelize one of the algorithms and
prove theoretical guarantees on its loss. We perform experiments and compare
our algorithms with logistic regression.
| [
"['Fedor Zhdanov' 'Yuri Kalnishkan']",
"Fedor Zhdanov and Yuri Kalnishkan"
] |
cs.NI cs.LG | null | 1001.1009 | null | null | http://arxiv.org/pdf/1001.1009v1 | 2010-01-06T23:33:49Z | 2010-01-06T23:33:49Z | Multi-path Probabilistic Available Bandwidth Estimation through Bayesian
Active Learning | Knowing the largest rate at which data can be sent on an end-to-end path such
that the egress rate is equal to the ingress rate with high probability can be
very practical when choosing transmission rates in video streaming or selecting
peers in peer-to-peer applications. We introduce probabilistic available
bandwidth, which is defined in terms of ingress rates and egress rates of
traffic on a path, rather than in terms of capacity and utilization of the
constituent links of the path like the standard available bandwidth metric. In
this paper, we describe a distributed algorithm, based on a probabilistic
graphical model and Bayesian active learning, for simultaneously estimating the
probabilistic available bandwidth of multiple paths through a network. Our
procedure exploits the fact that each packet train provides information not
only about the path it traverses, but also about any path that shares a link
with the monitored path. Simulations and PlanetLab experiments indicate that
this process can dramatically reduce the number of probes required to generate
accurate estimates.
| [
"Frederic Thouin (1), Mark Coates (1), Michael Rabbat (1) ((1) McGill\n University, Montreal, Canada)",
"['Frederic Thouin' 'Mark Coates' 'Michael Rabbat']"
] |
cs.LG cs.AI cs.CV | null | 1001.1020 | null | null | http://arxiv.org/pdf/1001.1020v1 | 2010-01-07T06:34:21Z | 2010-01-07T06:34:21Z | An Empirical Evaluation of Four Algorithms for Multi-Class
Classification: Mart, ABC-Mart, Robust LogitBoost, and ABC-LogitBoost | This empirical study is mainly devoted to comparing four tree-based boosting
algorithms: mart, abc-mart, robust logitboost, and abc-logitboost, for
multi-class classification on a variety of publicly available datasets. Some of
those datasets have been thoroughly tested in prior studies using a broad range
of classification algorithms including SVM, neural nets, and deep learning.
In terms of the empirical classification errors, our experiment results
demonstrate:
1. Abc-mart considerably improves mart. 2. Abc-logitboost considerably
improves (robust) logitboost. 3. Robust) logitboost} considerably improves mart
on most datasets. 4. Abc-logitboost considerably improves abc-mart on most
datasets. 5. These four boosting algorithms (especially abc-logitboost)
outperform SVM on many datasets. 6. Compared to the best deep learning methods,
these four boosting algorithms (especially abc-logitboost) are competitive.
| [
"Ping Li",
"['Ping Li']"
] |
cs.CV cs.LG | null | 1001.1027 | null | null | http://arxiv.org/pdf/1001.1027v5 | 2017-06-07T17:05:16Z | 2010-01-07T06:22:56Z | An Unsupervised Algorithm For Learning Lie Group Transformations | We present several theoretical contributions which allow Lie groups to be fit
to high dimensional datasets. Transformation operators are represented in their
eigen-basis, reducing the computational complexity of parameter estimation to
that of training a linear transformation model. A transformation specific
"blurring" operator is introduced that allows inference to escape local minima
via a smoothing of the transformation space. A penalty on traversed manifold
distance is added which encourages the discovery of sparse, minimal distance,
transformations between states. Both learning and inference are demonstrated
using these methods for the full set of affine transformations on natural image
patches. Transformation operators are then trained on natural video sequences.
It is shown that the learned video transformations provide a better description
of inter-frame differences than the standard motion model based on rigid
translation.
| [
"['Jascha Sohl-Dickstein' 'Ching Ming Wang' 'Bruno A. Olshausen']",
"Jascha Sohl-Dickstein, Ching Ming Wang, Bruno A. Olshausen"
] |
cs.LG | null | 1001.1079 | null | null | http://arxiv.org/pdf/1001.1079v1 | 2010-01-07T14:41:21Z | 2010-01-07T14:41:21Z | Measuring Latent Causal Structure | Discovering latent representations of the observed world has become
increasingly more relevant in data analysis. Much of the effort concentrates on
building latent variables which can be used in prediction problems, such as
classification and regression. A related goal of learning latent structure from
data is that of identifying which hidden common causes generate the
observations, such as in applications that require predicting the effect of
policies. This will be the main problem tackled in our contribution: given a
dataset of indicators assumed to be generated by unknown and unmeasured common
causes, we wish to discover which hidden common causes are those, and how they
generate our data. This is possible under the assumption that observed
variables are linear functions of the latent causes with additive noise.
Previous results in the literature present solutions for the case where each
observed variable is a noisy function of a single latent variable. We show how
to extend the existing results for some cases where observed variables measure
more than one latent variable.
| [
"Ricardo Silva",
"['Ricardo Silva']"
] |
cs.CV cs.LG | null | 1001.2605 | null | null | http://arxiv.org/pdf/1001.2605v1 | 2010-01-15T03:03:24Z | 2010-01-15T03:03:24Z | An Explicit Nonlinear Mapping for Manifold Learning | Manifold learning is a hot research topic in the field of computer science
and has many applications in the real world. A main drawback of manifold
learning methods is, however, that there is no explicit mappings from the input
data manifold to the output embedding. This prohibits the application of
manifold learning methods in many practical problems such as classification and
target detection. Previously, in order to provide explicit mappings for
manifold learning methods, many methods have been proposed to get an
approximate explicit representation mapping with the assumption that there
exists a linear projection between the high-dimensional data samples and their
low-dimensional embedding. However, this linearity assumption may be too
restrictive. In this paper, an explicit nonlinear mapping is proposed for
manifold learning, based on the assumption that there exists a polynomial
mapping between the high-dimensional data samples and their low-dimensional
representations. As far as we know, this is the first time that an explicit
nonlinear mapping for manifold learning is given. In particular, we apply this
to the method of Locally Linear Embedding (LLE) and derive an explicit
nonlinear manifold learning algorithm, named Neighborhood Preserving Polynomial
Embedding (NPPE). Experimental results on both synthetic and real-world data
show that the proposed mapping is much more effective in preserving the local
neighborhood information and the nonlinear geometry of the high-dimensional
data samples than previous work.
| [
"['Hong Qiao' 'Peng Zhang' 'Di Wang' 'Bo Zhang']",
"Hong Qiao, Peng Zhang, Di Wang, Bo Zhang"
] |
cs.LG cs.AI | null | 1001.2709 | null | null | http://arxiv.org/pdf/1001.2709v1 | 2010-01-15T15:10:39Z | 2010-01-15T15:10:39Z | Kernel machines with two layers and multiple kernel learning | In this paper, the framework of kernel machines with two layers is
introduced, generalizing classical kernel methods. The new learning methodology
provide a formal connection between computational architectures with multiple
layers and the theme of kernel learning in standard regularization methods.
First, a representer theorem for two-layer networks is presented, showing that
finite linear combinations of kernels on each layer are optimal architectures
whenever the corresponding functions solve suitable variational problems in
reproducing kernel Hilbert spaces (RKHS). The input-output map expressed by
these architectures turns out to be equivalent to a suitable single-layer
kernel machines in which the kernel function is also learned from the data.
Recently, the so-called multiple kernel learning methods have attracted
considerable attention in the machine learning literature. In this paper,
multiple kernel learning methods are shown to be specific cases of kernel
machines with two layers in which the second layer is linear. Finally, a simple
and effective multiple kernel learning method called RLS2 (regularized least
squares with two layers) is introduced, and his performances on several
learning problems are extensively analyzed. An open source MATLAB toolbox to
train and validate RLS2 models with a Graphic User Interface is available.
| [
"['Francesco Dinuzzo']",
"Francesco Dinuzzo"
] |
nlin.AO cond-mat.dis-nn cs.AI cs.LG stat.ML | null | 1001.2813 | null | null | http://arxiv.org/pdf/1001.2813v1 | 2010-01-18T01:10:17Z | 2010-01-18T01:10:17Z | A Monte Carlo Algorithm for Universally Optimal Bayesian Sequence
Prediction and Planning | The aim of this work is to address the question of whether we can in
principle design rational decision-making agents or artificial intelligences
embedded in computable physics such that their decisions are optimal in
reasonable mathematical senses. Recent developments in rare event probability
estimation, recursive bayesian inference, neural networks, and probabilistic
planning are sufficient to explicitly approximate reinforcement learners of the
AIXI style with non-trivial model classes (here, the class of resource-bounded
Turing machines). Consideration of the effects of resource limitations in a
concrete implementation leads to insights about possible architectures for
learning systems using optimal decision makers as components.
| [
"['Anthony Di Franco']",
"Anthony Di Franco"
] |
cs.LG | 10.1088/1742-6596/233/1/012014 | 1001.2957 | null | null | http://arxiv.org/abs/1001.2957v2 | 2010-03-16T04:47:17Z | 2010-01-18T05:34:09Z | Asymptotic Learning Curve and Renormalizable Condition in Statistical
Learning Theory | Bayes statistics and statistical physics have the common mathematical
structure, where the log likelihood function corresponds to the random
Hamiltonian. Recently, it was discovered that the asymptotic learning curves in
Bayes estimation are subject to a universal law, even if the log likelihood
function can not be approximated by any quadratic form. However, it is left
unknown what mathematical property ensures such a universal law. In this paper,
we define a renormalizable condition of the statistical estimation problem, and
show that, under such a condition, the asymptotic learning curves are ensured
to be subject to the universal law, even if the true distribution is
unrealizable and singular for a statistical model. Also we study a
nonrenormalizable case, in which the learning curves have the different
asymptotic behaviors from the universal law.
| [
"['Sumio Watanabe']",
"Sumio Watanabe"
] |
cs.IT cs.LG math.IT math.ST stat.TH | 10.1109/ISIT.2010.5513384 | 1001.3090 | null | null | http://arxiv.org/abs/1001.3090v2 | 2010-06-13T19:18:47Z | 2010-01-18T17:07:03Z | Feature Extraction for Universal Hypothesis Testing via Rank-constrained
Optimization | This paper concerns the construction of tests for universal hypothesis
testing problems, in which the alternate hypothesis is poorly modeled and the
observation space is large. The mismatched universal test is a feature-based
technique for this purpose. In prior work it is shown that its
finite-observation performance can be much better than the (optimal) Hoeffding
test, and good performance depends crucially on the choice of features. The
contributions of this paper include: 1) We obtain bounds on the number of
\epsilon distinguishable distributions in an exponential family. 2) This
motivates a new framework for feature extraction, cast as a rank-constrained
optimization problem. 3) We obtain a gradient-based algorithm to solve the
rank-constrained optimization problem and prove its local convergence.
| [
"Dayu Huang, Sean Meyn",
"['Dayu Huang' 'Sean Meyn']"
] |
cs.IT cs.LG math.IT math.ST stat.TH | 10.1109/TIT.2010.2094817 | 1001.3448 | null | null | http://arxiv.org/abs/1001.3448v4 | 2011-01-27T18:55:05Z | 2010-01-20T02:57:15Z | The dynamics of message passing on dense graphs, with applications to
compressed sensing | Approximate message passing algorithms proved to be extremely effective in
reconstructing sparse signals from a small number of incoherent linear
measurements. Extensive numerical experiments further showed that their
dynamics is accurately tracked by a simple one-dimensional iteration termed
state evolution. In this paper we provide the first rigorous foundation to
state evolution. We prove that indeed it holds asymptotically in the large
system limit for sensing matrices with independent and identically distributed
gaussian entries.
While our focus is on message passing algorithms for compressed sensing, the
analysis extends beyond this setting, to a general class of algorithms on dense
graphs. In this context, state evolution plays the role that density evolution
has for sparse graphs.
The proof technique is fundamentally different from the standard approach to
density evolution, in that it copes with large number of short loops in the
underlying factor graph. It relies instead on a conditioning technique recently
developed by Erwin Bolthausen in the context of spin glass theory.
| [
"['Mohsen Bayati' 'Andrea Montanari']",
"Mohsen Bayati and Andrea Montanari"
] |
cs.LG | null | 1001.3478 | null | null | http://arxiv.org/pdf/1001.3478v1 | 2010-01-20T07:30:02Z | 2010-01-20T07:30:02Z | Role of Interestingness Measures in CAR Rule Ordering for Associative
Classifier: An Empirical Approach | Associative Classifier is a novel technique which is the integration of
Association Rule Mining and Classification. The difficult task in building
Associative Classifier model is the selection of relevant rules from a large
number of class association rules (CARs). A very popular method of ordering
rules for selection is based on confidence, support and antecedent size (CSA).
Other methods are based on hybrid orderings in which CSA method is combined
with other measures. In the present work, we study the effect of using
different interestingness measures of Association rules in CAR rule ordering
and selection for associative classifier.
| [
"['S. Kannan' 'R. Bhaskaran']",
"S.Kannan, R.Bhaskaran"
] |
cs.CV cs.LG | null | 1001.4140 | null | null | http://arxiv.org/pdf/1001.4140v1 | 2010-01-23T08:53:49Z | 2010-01-23T08:53:49Z | SVM-based Multiview Face Recognition by Generalization of Discriminant
Analysis | Identity verification of authentic persons by their multiview faces is a real
valued problem in machine vision. Multiview faces are having difficulties due
to non-linear representation in the feature space. This paper illustrates the
usability of the generalization of LDA in the form of canonical covariate for
face recognition to multiview faces. In the proposed work, the Gabor filter
bank is used to extract facial features that characterized by spatial
frequency, spatial locality and orientation. Gabor face representation captures
substantial amount of variations of the face instances that often occurs due to
illumination, pose and facial expression changes. Convolution of Gabor filter
bank to face images of rotated profile views produce Gabor faces with high
dimensional features vectors. Canonical covariate is then used to Gabor faces
to reduce the high dimensional feature spaces into low dimensional subspaces.
Finally, support vector machines are trained with canonical sub-spaces that
contain reduced set of features and perform recognition task. The proposed
system is evaluated with UMIST face database. The experiment results
demonstrate the efficiency and robustness of the proposed system with high
recognition rates.
| [
"['Dakshina Ranjan Kisku' 'Hunny Mehrotra' 'Jamuna Kanta Sing'\n 'Phalguni Gupta']",
"Dakshina Ranjan Kisku, Hunny Mehrotra, Jamuna Kanta Sing, Phalguni\n Gupta"
] |
cs.NE cs.LG | null | 1001.4301 | null | null | http://arxiv.org/pdf/1001.4301v1 | 2010-01-25T02:09:42Z | 2010-01-25T02:09:42Z | Probabilistic Approach to Neural Networks Computation Based on Quantum
Probability Model Probabilistic Principal Subspace Analysis Example | In this paper, we introduce elements of probabilistic model that is suitable
for modeling of learning algorithms in biologically plausible artificial neural
networks framework. Model is based on two of the main concepts in quantum
physics - a density matrix and the Born rule. As an example, we will show that
proposed probabilistic interpretation is suitable for modeling of on-line
learning algorithms for PSA, which are preferably realized by a parallel
hardware based on very simple computational units. Proposed concept (model) can
be used in the context of improving algorithm convergence speed, learning
factor choice, or input signal scale robustness. We are going to see how the
Born rule and the Hebbian learning rule are connected
| [
"['Marko V. Jankovic']",
"Marko V. Jankovic"
] |
cs.LG cs.SY math.OC math.ST stat.TH | null | 1001.4475 | null | null | http://arxiv.org/pdf/1001.4475v2 | 2011-04-13T07:03:48Z | 2010-01-25T16:30:15Z | X-Armed Bandits | We consider a generalization of stochastic bandits where the set of arms,
$\cX$, is allowed to be a generic measurable space and the mean-payoff function
is "locally Lipschitz" with respect to a dissimilarity function that is known
to the decision maker. Under this condition we construct an arm selection
policy, called HOO (hierarchical optimistic optimization), with improved regret
bounds compared to previous results for a large class of problems. In
particular, our results imply that if $\cX$ is the unit hypercube in a
Euclidean space and the mean-payoff function has a finite number of global
maxima around which the behavior of the function is locally continuous with a
known smoothness degree, then the expected regret of HOO is bounded up to a
logarithmic factor by $\sqrt{n}$, i.e., the rate of growth of the regret is
independent of the dimension of the space. We also prove the minimax optimality
of our algorithm when the dissimilarity is a metric. Our basic strategy has
quadratic computational complexity as a function of the number of time steps
and does not rely on the doubling trick. We also introduce a modified strategy,
which relies on the doubling trick but runs in linearithmic time. Both results
are improvements with respect to previous approaches.
| [
"['Sébastien Bubeck' 'Rémi Munos' 'Gilles Stoltz' 'Csaba Szepesvari']",
"S\\'ebastien Bubeck (INRIA Futurs), R\\'emi Munos (INRIA Lille - Nord\n Europe), Gilles Stoltz (DMA, GREGH, INRIA Paris - Rocquencourt), Csaba\n Szepesvari"
] |
cs.LG | 10.1016/j.eij.2011.02.007 | 1001.5007 | null | null | http://arxiv.org/abs/1001.5007v2 | 2010-01-27T21:23:03Z | 2010-01-27T19:24:33Z | Trajectory Clustering and an Application to Airspace Monitoring | This paper presents a framework aimed at monitoring the behavior of aircraft
in a given airspace. Nominal trajectories are determined and learned using data
driven methods. Standard procedures are used by air traffic controllers (ATC)
to guide aircraft, ensure the safety of the airspace, and to maximize the
runway occupancy. Even though standard procedures are used by ATC, the control
of the aircraft remains with the pilots, leading to a large variability in the
flight patterns observed. Two methods to identify typical operations and their
variability from recorded radar tracks are presented. This knowledge base is
then used to monitor the conformance of current operations against operations
previously identified as standard. A tool called AirTrajectoryMiner is
presented, aiming at monitoring the instantaneous health of the airspace, in
real time. The airspace is "healthy" when all aircraft are flying according to
the nominal procedures. A measure of complexity is introduced, measuring the
conformance of current flight to nominal flight patterns. When an aircraft does
not conform, the complexity increases as more attention from ATC is required to
ensure a safe separation between aircraft.
| [
"Maxime Gariel, Ashok N. Srivastava, Eric Feron",
"['Maxime Gariel' 'Ashok N. Srivastava' 'Eric Feron']"
] |
cs.NE cs.LG | null | 1001.5348 | null | null | http://arxiv.org/pdf/1001.5348v1 | 2010-01-29T08:10:26Z | 2010-01-29T08:10:26Z | Performance Comparisons of PSO based Clustering | In this paper we have investigated the performance of PSO Particle Swarm
Optimization based clustering on few real world data sets and one artificial
data set. The performances are measured by two metric namely quantization error
and inter-cluster distance. The K means clustering algorithm is first
implemented for all data sets, the results of which form the basis of
comparison of PSO based approaches. We have explored different variants of PSO
such as gbest, lbest ring, lbest vonneumann and Hybrid PSO for comparison
purposes. The results reveal that PSO based clustering algorithms perform
better compared to K means in all data sets.
| [
"Suresh Chandra Satapathy, Gunanidhi Pradhan, Sabyasachi Pattnaik,\n J.V.R. Murthy, P.V.G.D. Prasad Reddy",
"['Suresh Chandra Satapathy' 'Gunanidhi Pradhan' 'Sabyasachi Pattnaik'\n 'J. V. R. Murthy' 'P. V. G. D. Prasad Reddy']"
] |
cs.CV cs.DB cs.LG | null | 1002.0383 | null | null | http://arxiv.org/pdf/1002.0383v1 | 2010-02-02T02:30:22Z | 2010-02-02T02:30:22Z | Feature Level Clustering of Large Biometric Database | This paper proposes an efficient technique for partitioning large biometric
database during identification. In this technique feature vector which
comprises of global and local descriptors extracted from offline signature are
used by fuzzy clustering technique to partition the database. As biometric
features posses no natural order of sorting, thus it is difficult to index them
alphabetically or numerically. Hence, some supervised criteria is required to
partition the search space. At the time of identification the fuzziness
criterion is introduced to find the nearest clusters for declaring the identity
of query sample. The system is tested using bin-miss rate and performs better
in comparison to traditional k-means approach.
| [
"['Hunny Mehrotra' 'Dakshina Ranjan Kisku' 'V. Bhawani Radhika'\n 'Banshidhar Majhi' 'Phalguni Gupta']",
"Hunny Mehrotra, Dakshina Ranjan Kisku, V. Bhawani Radhika, Banshidhar\n Majhi, Phalguni Gupta"
] |
cs.CV cs.LG | null | 1002.0416 | null | null | http://arxiv.org/pdf/1002.0416v1 | 2010-02-02T08:15:20Z | 2010-02-02T08:15:20Z | Fusion of Multiple Matchers using SVM for Offline Signature
Identification | This paper uses Support Vector Machines (SVM) to fuse multiple classifiers
for an offline signature system. From the signature images, global and local
features are extracted and the signatures are verified with the help of
Gaussian empirical rule, Euclidean and Mahalanobis distance based classifiers.
SVM is used to fuse matching scores of these matchers. Finally, recognition of
query signatures is done by comparing it with all signatures of the database.
The proposed system is tested on a signature database contains 5400 offline
signatures of 600 individuals and the results are found to be promising.
| [
"Dakshina Ranjan Kisku, Phalguni Gupta, Jamuna Kanta Sing",
"['Dakshina Ranjan Kisku' 'Phalguni Gupta' 'Jamuna Kanta Sing']"
] |
cs.LG | null | 1002.0709 | null | null | http://arxiv.org/pdf/1002.0709v1 | 2010-02-03T11:31:24Z | 2010-02-03T11:31:24Z | Aggregating Algorithm competing with Banach lattices | The paper deals with on-line regression settings with signals belonging to a
Banach lattice. Our algorithms work in a semi-online setting where all the
inputs are known in advance and outcomes are unknown and given step by step. We
apply the Aggregating Algorithm to construct a prediction method whose
cumulative loss over all the input vectors is comparable with the cumulative
loss of any linear functional on the Banach lattice. As a by-product we get an
algorithm that takes signals from an arbitrary domain. Its cumulative loss is
comparable with the cumulative loss of any predictor function from Besov and
Triebel-Lizorkin spaces. We describe several applications of our setting.
| [
"Fedor Zhdanov, Alexey Chernov and Yuri Kalnishkan",
"['Fedor Zhdanov' 'Alexey Chernov' 'Yuri Kalnishkan']"
] |
stat.AP cs.LG stat.ML | 10.1109/ALLERTON.2016.7852262 | 1002.0747 | null | null | http://arxiv.org/abs/1002.0747v3 | 2016-06-26T01:23:28Z | 2010-02-03T14:11:06Z | Efficient Bayesian Learning in Social Networks with Gaussian Estimators | We consider a group of Bayesian agents who try to estimate a state of the
world $\theta$ through interaction on a social network. Each agent $v$
initially receives a private measurement of $\theta$: a number $S_v$ picked
from a Gaussian distribution with mean $\theta$ and standard deviation one.
Then, in each discrete time iteration, each reveals its estimate of $\theta$ to
its neighbors, and, observing its neighbors' actions, updates its belief using
Bayes' Law.
This process aggregates information efficiently, in the sense that all the
agents converge to the belief that they would have, had they access to all the
private measurements. We show that this process is computationally efficient,
so that each agent's calculation can be easily carried out. We also show that
on any graph the process converges after at most $2N \cdot D$ steps, where $N$
is the number of agents and $D$ is the diameter of the network. Finally, we
show that on trees and on distance transitive-graphs the process converges
after $D$ steps, and that it preserves privacy, so that agents learn very
little about the private signal of most other agents, despite the efficient
aggregation of information. Our results extend those in an unpublished
manuscript of the first and last authors.
| [
"Elchanan Mossel and Noah Olsman and Omer Tamuz",
"['Elchanan Mossel' 'Noah Olsman' 'Omer Tamuz']"
] |
cs.IT cs.LG math.IT math.ST stat.TH | null | 1002.0757 | null | null | http://arxiv.org/pdf/1002.0757v1 | 2010-02-03T15:11:21Z | 2010-02-03T15:11:21Z | Prequential Plug-In Codes that Achieve Optimal Redundancy Rates even if
the Model is Wrong | We analyse the prequential plug-in codes relative to one-parameter
exponential families M. We show that if data are sampled i.i.d. from some
distribution outside M, then the redundancy of any plug-in prequential code
grows at rate larger than 1/2 ln(n) in the worst case. This means that plug-in
codes, such as the Rissanen-Dawid ML code, may behave inferior to other
important universal codes such as the 2-part MDL, Shtarkov and Bayes codes, for
which the redundancy is always 1/2 ln(n) + O(1). However, we also show that a
slight modification of the ML plug-in code, "almost" in the model, does achieve
the optimal redundancy even if the the true distribution is outside M.
| [
"Peter Gr\\\"unwald, Wojciech Kot{\\l}owski",
"['Peter Grünwald' 'Wojciech Kotłowski']"
] |
cs.LG | null | 1002.1144 | null | null | http://arxiv.org/pdf/1002.1144v1 | 2010-02-05T08:27:17Z | 2010-02-05T08:27:17Z | A CHAID Based Performance Prediction Model in Educational Data Mining | The performance in higher secondary school education in India is a turning
point in the academic lives of all students. As this academic performance is
influenced by many factors, it is essential to develop predictive data mining
model for students' performance so as to identify the slow learners and study
the influence of the dominant factors on their academic performance. In the
present investigation, a survey cum experimental methodology was adopted to
generate a database and it was constructed from a primary and a secondary
source. While the primary data was collected from the regular students, the
secondary data was gathered from the school and office of the Chief Educational
Officer (CEO). A total of 1000 datasets of the year 2006 from five different
schools in three different districts of Tamilnadu were collected. The raw data
was preprocessed in terms of filling up missing values, transforming values in
one form into another and relevant attribute/ variable selection. As a result,
we had 772 student records, which were used for CHAID prediction model
construction. A set of prediction rules were extracted from CHIAD prediction
model and the efficiency of the generated CHIAD prediction model was found. The
accuracy of the present model was compared with other model and it has been
found to be satisfactory.
| [
"M. Ramaswami, R. Bhaskaran",
"['M. Ramaswami' 'R. Bhaskaran']"
] |
cs.LG | null | 1002.1156 | null | null | http://arxiv.org/pdf/1002.1156v1 | 2010-02-05T08:59:05Z | 2010-02-05T08:59:05Z | Dimensionality Reduction: An Empirical Study on the Usability of IFE-CF
(Independent Feature Elimination- by C-Correlation and F-Correlation)
Measures | The recent increase in dimensionality of data has thrown a great challenge to
the existing dimensionality reduction methods in terms of their effectiveness.
Dimensionality reduction has emerged as one of the significant preprocessing
steps in machine learning applications and has been effective in removing
inappropriate data, increasing learning accuracy, and improving
comprehensibility. Feature redundancy exercises great influence on the
performance of classification process. Towards the better classification
performance, this paper addresses the usefulness of truncating the highly
correlated and redundant attributes. Here, an effort has been made to verify
the utility of dimensionality reduction by applying LVQ (Learning Vector
Quantization) method on two Benchmark datasets of 'Pima Indian Diabetic
patients' and 'Lung cancer patients'.
| [
"['M. Babu Reddy' 'L. S. S. Reddy']",
"M. Babu Reddy, L. S. S. Reddy"
] |
cs.AI cs.LG cs.RO | null | 1002.1480 | null | null | http://arxiv.org/pdf/1002.1480v1 | 2010-02-07T19:58:46Z | 2010-02-07T19:58:46Z | A Minimum Relative Entropy Controller for Undiscounted Markov Decision
Processes | Adaptive control problems are notoriously difficult to solve even in the
presence of plant-specific controllers. One way to by-pass the intractable
computation of the optimal policy is to restate the adaptive control as the
minimization of the relative entropy of a controller that ignores the true
plant dynamics from an informed controller. The solution is given by the
Bayesian control rule-a set of equations characterizing a stochastic adaptive
controller for the class of possible plant dynamics. Here, the Bayesian control
rule is applied to derive BCR-MDP, a controller to solve undiscounted Markov
decision processes with finite state and action spaces and unknown dynamics. In
particular, we derive a non-parametric conjugate prior distribution over the
policy space that encapsulates the agent's whole relevant history and we
present a Gibbs sampler to draw random policies from this distribution.
Preliminary results show that BCR-MDP successfully avoids sub-optimal limit
cycles due to its built-in mechanism to balance exploration versus
exploitation.
| [
"['Pedro A. Ortega' 'Daniel A. Braun']",
"Pedro A. Ortega, Daniel A. Braun"
] |
cs.LG | null | 1002.1782 | null | null | http://arxiv.org/pdf/1002.1782v3 | 2010-05-13T03:32:05Z | 2010-02-09T07:32:59Z | Online Distributed Sensor Selection | A key problem in sensor networks is to decide which sensors to query when, in
order to obtain the most useful information (e.g., for performing accurate
prediction), subject to constraints (e.g., on power and bandwidth). In many
applications the utility function is not known a priori, must be learned from
data, and can even change over time. Furthermore for large sensor networks
solving a centralized optimization problem to select sensors is not feasible,
and thus we seek a fully distributed solution. In this paper, we present
Distributed Online Greedy (DOG), an efficient, distributed algorithm for
repeatedly selecting sensors online, only receiving feedback about the utility
of the selected sensors. We prove very strong theoretical no-regret guarantees
that apply whenever the (unknown) utility function satisfies a natural
diminishing returns property called submodularity. Our algorithm has extremely
low communication requirements, and scales well to large sensor deployments. We
extend DOG to allow observation-dependent sensor selection. We empirically
demonstrate the effectiveness of our algorithm on several real-world sensing
tasks.
| [
"Daniel Golovin, Matthew Faulkner and Andreas Krause",
"['Daniel Golovin' 'Matthew Faulkner' 'Andreas Krause']"
] |
cs.LG | null | 1002.2044 | null | null | http://arxiv.org/pdf/1002.2044v1 | 2010-02-10T09:08:56Z | 2010-02-10T09:08:56Z | On the Stability of Empirical Risk Minimization in the Presence of
Multiple Risk Minimizers | Recently Kutin and Niyogi investigated several notions of algorithmic
stability--a property of a learning map conceptually similar to
continuity--showing that training-stability is sufficient for consistency of
Empirical Risk Minimization while distribution-free CV-stability is necessary
and sufficient for having finite VC-dimension. This paper concerns a phase
transition in the training stability of ERM, conjectured by the same authors.
Kutin and Niyogi proved that ERM on finite hypothesis spaces containing a
unique risk minimizer has training stability that scales exponentially with
sample size, and conjectured that the existence of multiple risk minimizers
prevents even super-quadratic convergence. We prove this result for the
strictly weaker notion of CV-stability, positively resolving the conjecture.
| [
"Benjamin I. P. Rubinstein, Aleksandr Simma",
"['Benjamin I. P. Rubinstein' 'Aleksandr Simma']"
] |
cs.CV cs.LG | null | 1002.2050 | null | null | http://arxiv.org/pdf/1002.2050v1 | 2010-02-10T10:16:57Z | 2010-02-10T10:16:57Z | Intrinsic dimension estimation of data by principal component analysis | Estimating intrinsic dimensionality of data is a classic problem in pattern
recognition and statistics. Principal Component Analysis (PCA) is a powerful
tool in discovering dimensionality of data sets with a linear structure; it,
however, becomes ineffective when data have a nonlinear structure. In this
paper, we propose a new PCA-based method to estimate intrinsic dimension of
data with nonlinear structures. Our method works by first finding a minimal
cover of the data set, then performing PCA locally on each subset in the cover
and finally giving the estimation result by checking up the data variance on
all small neighborhood regions. The proposed method utilizes the whole data set
to estimate its intrinsic dimension and is convenient for incremental learning.
In addition, our new PCA procedure can filter out noise in data and converge to
a stable estimation with the neighborhood region size increasing. Experiments
on synthetic and real world data sets show effectiveness of the proposed
method.
| [
"['Mingyu Fan' 'Nannan Gu' 'Hong Qiao' 'Bo Zhang']",
"Mingyu Fan, Nannan Gu, Hong Qiao, Bo Zhang"
] |
q-fin.TR cs.LG cs.MA | null | 1002.2171 | null | null | http://arxiv.org/pdf/1002.2171v1 | 2010-02-10T18:48:43Z | 2010-02-10T18:48:43Z | Reverse Engineering Financial Markets with Majority and Minority Games
using Genetic Algorithms | Using virtual stock markets with artificial interacting software investors,
aka agent-based models (ABMs), we present a method to reverse engineer
real-world financial time series. We model financial markets as made of a large
number of interacting boundedly rational agents. By optimizing the similarity
between the actual data and that generated by the reconstructed virtual stock
market, we obtain parameters and strategies, which reveal some of the inner
workings of the target stock market. We validate our approach by out-of-sample
predictions of directional moves of the Nasdaq Composite Index.
| [
"['J. Wiesinger' 'D. Sornette' 'J. Satinover']",
"J. Wiesinger, D. Sornette, J. Satinover"
] |
cs.IT cs.AI cs.LG math.IT | null | 1002.2240 | null | null | http://arxiv.org/pdf/1002.2240v1 | 2010-02-10T23:19:56Z | 2010-02-10T23:19:56Z | A Generalization of the Chow-Liu Algorithm and its Application to
Statistical Learning | We extend the Chow-Liu algorithm for general random variables while the
previous versions only considered finite cases. In particular, this paper
applies the generalization to Suzuki's learning algorithm that generates from
data forests rather than trees based on the minimum description length by
balancing the fitness of the data to the forest and the simplicity of the
forest. As a result, we successfully obtain an algorithm when both of the
Gaussian and finite random variables are present.
| [
"Joe Suzuki",
"['Joe Suzuki']"
] |
cs.LG cs.CY | null | 1002.2425 | null | null | http://arxiv.org/pdf/1002.2425v1 | 2010-02-11T20:41:28Z | 2010-02-11T20:41:28Z | Application of k Means Clustering algorithm for prediction of Students
Academic Performance | The ability to monitor the progress of students academic performance is a
critical issue to the academic community of higher learning. A system for
analyzing students results based on cluster analysis and uses standard
statistical algorithms to arrange their scores data according to the level of
their performance is described. In this paper, we also implemented k mean
clustering algorithm for analyzing students result data. The model was combined
with the deterministic model to analyze the students results of a private
Institution in Nigeria which is a good benchmark to monitor the progression of
academic performance of students in higher Institution for the purpose of
making an effective decision by the academic planners.
| [
"O. J. Oyelade, O. O. Oladipupo, I. C. Obagbuwa",
"['O. J. Oyelade' 'O. O. Oladipupo' 'I. C. Obagbuwa']"
] |
cs.LG | null | 1002.2780 | null | null | http://arxiv.org/pdf/1002.2780v1 | 2010-02-14T16:37:04Z | 2010-02-14T16:37:04Z | Collaborative Filtering in a Non-Uniform World: Learning with the
Weighted Trace Norm | We show that matrix completion with trace-norm regularization can be
significantly hurt when entries of the matrix are sampled non-uniformly. We
introduce a weighted version of the trace-norm regularizer that works well also
with non-uniform sampling. Our experimental results demonstrate that the
weighted trace-norm regularization indeed yields significant gains on the
(highly non-uniformly sampled) Netflix dataset.
| [
"Ruslan Salakhutdinov, Nathan Srebro",
"['Ruslan Salakhutdinov' 'Nathan Srebro']"
] |
cs.AI cs.LG | null | 1002.3086 | null | null | http://arxiv.org/pdf/1002.3086v1 | 2010-02-16T14:14:59Z | 2010-02-16T14:14:59Z | Convergence of Bayesian Control Rule | Recently, new approaches to adaptive control have sought to reformulate the
problem as a minimization of a relative entropy criterion to obtain tractable
solutions. In particular, it has been shown that minimizing the expected
deviation from the causal input-output dependencies of the true plant leads to
a new promising stochastic control rule called the Bayesian control rule. This
work proves the convergence of the Bayesian control rule under two sufficient
assumptions: boundedness, which is an ergodicity condition; and consistency,
which is an instantiation of the sure-thing principle.
| [
"['Pedro A. Ortega' 'Daniel A. Braun']",
"Pedro A. Ortega, Daniel A. Braun"
] |
cs.LG cs.AI | 10.1109/ISCC.2008.4625611 | 1002.3174 | null | null | http://arxiv.org/abs/1002.3174v3 | 2012-03-16T21:31:17Z | 2010-02-17T10:18:07Z | A new approach to content-based file type detection | File type identification and file type clustering may be difficult tasks that
have an increasingly importance in the field of computer and network security.
Classical methods of file type detection including considering file extensions
and magic bytes can be easily spoofed. Content-based file type detection is a
newer way that is taken into account recently. In this paper, a new
content-based method for the purpose of file type detection and file type
clustering is proposed that is based on the PCA and neural networks. The
proposed method has a good accuracy and is fast enough.
| [
"M. C. Amirani, M. Toorani, A. A. Beheshti",
"['M. C. Amirani' 'M. Toorani' 'A. A. Beheshti']"
] |