categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (sequence)
---|---|---|---|---|---|---|---|---|---|---|
stat.ML cs.LG math.OC stat.AP stat.CO | 10.1214/11-AOAS514 | 1005.4717 | null | null | http://arxiv.org/abs/1005.4717v4 | 2012-06-29T05:53:50Z | 2010-05-26T00:50:17Z | Smoothing proximal gradient method for general structured sparse
regression | We study the problem of estimating high-dimensional regression models
regularized by a structured sparsity-inducing penalty that encodes prior
structural information on either the input or output variables. We consider two
widely adopted types of penalties of this kind as motivating examples: (1) the
general overlapping-group-lasso penalty, generalized from the group-lasso
penalty; and (2) the graph-guided-fused-lasso penalty, generalized from the
fused-lasso penalty. For both types of penalties, due to their nonseparability
and nonsmoothness, developing an efficient optimization method remains a
challenging problem. In this paper we propose a general optimization approach,
the smoothing proximal gradient (SPG) method, which can solve structured sparse
regression problems with any smooth convex loss under a wide spectrum of
structured sparsity-inducing penalties. Our approach combines a smoothing
technique with an effective proximal gradient method. It achieves a convergence
rate significantly faster than that of standard first-order methods such as
subgradient methods, and is much more scalable than the most widely used
interior-point
methods. The efficiency and scalability of our method are demonstrated on both
simulation experiments and real genetic data sets.
| [
"['Xi Chen' 'Qihang Lin' 'Seyoung Kim' 'Jaime G. Carbonell' 'Eric P. Xing']",
"Xi Chen, Qihang Lin, Seyoung Kim, Jaime G. Carbonell, Eric P. Xing"
] |
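The SPG recipe above lends itself to a compact illustration. Below is a minimal sketch (my own reconstruction, not the authors' code) assuming a squared loss and an overlapping-group-lasso penalty smoothed Nesterov-style; the helper names, `groups` as a list of index arrays, and the crude Lipschitz estimate are all simplifying assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def smoothed_group_grad(beta, groups, mu, gamma):
    """Gradient of a Nesterov-smoothed overlapping-group penalty: each
    ||beta_g||_2 becomes max_{||a||<=1} a'beta_g - (mu/2)||a||^2, whose
    maximizer is the projection of beta_g/mu onto the l2 unit ball."""
    g = np.zeros_like(beta)
    for idx in groups:                 # groups may overlap
        a = beta[idx] / mu
        norm = np.linalg.norm(a)
        if norm > 1.0:
            a /= norm                  # project onto the unit ball
        g[idx] += gamma * a
    return g

def spg_lasso(X, y, groups, lam=0.1, gamma=0.1, mu=1e-3, iters=500):
    """Proximal-gradient loop: the nonseparable penalty enters only through
    a smooth gradient; the separable l1 term keeps its exact prox."""
    n, p = X.shape
    # crude Lipschitz bound; the paper derives the exact constant
    L = np.linalg.norm(X, 2) ** 2 / n + gamma / mu
    beta = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (X @ beta - y) / n + smoothed_group_grad(beta, groups, mu, gamma)
        beta = soft_threshold(beta - grad / L, lam / L)
    return beta
```

The point to notice is that smoothing moves the structured penalty into the gradient, so the remaining proximal step stays a cheap soft-thresholding.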
cs.LG cs.IR | 10.1109/TNNLS.2014.2333876 | 1005.5141 | null | null | http://arxiv.org/abs/1005.5141v12 | 2014-05-26T06:17:30Z | 2010-05-27T18:11:15Z | On Recursive Edit Distance Kernels with Application to Time Series
Classification | This paper proposes some extensions to the work on kernels dedicated to
string or time series global alignment based on the aggregation of scores
obtained by local alignments. The extensions we propose allow one to construct,
from the classical recursive definitions of elastic distances, recursive edit
distance (or time-warp) kernels that are positive definite if some sufficient
conditions are satisfied. The sufficient conditions we end up with are original
and weaker than those proposed in earlier works, although a recursive
regularizing term is required to get the proof of the positive definiteness as
a direct consequence of Haussler's convolution theorem. The classification
experiment we conducted on three classical time warp distances (two of which
are metrics), using a Support Vector Machine classifier, leads to the
conclusion that, when the pairwise distance matrix obtained from the training
data is \textit{far} from definiteness, the positive definite recursive
elastic kernels generally outperform the distance-substituting kernels for the
classical elastic distances we have tested.
 | [
"Pierre-Fran\\c{c}ois Marteau (IRISA), Sylvie Gibet (IRISA)",
"['Pierre-François Marteau' 'Sylvie Gibet']"
] |
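As context for the experiment described above, here is a minimal sketch of the distance-substituting baseline the paper compares against: a classical DTW distance plugged into $\exp(-d/\sigma)$ and fed to an SVM as a precomputed Gram matrix. The toy data and $\sigma$ are placeholders, and this Gram matrix is in general indefinite, which is precisely the failure mode the paper's recursive kernels address.

```python
import numpy as np
from sklearn.svm import SVC

def dtw(a, b):
    """Classical dynamic-time-warping distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def distance_substituting_gram(series, sigma=1.0):
    """Gram matrix K = exp(-d/sigma); not guaranteed positive definite."""
    n = len(series)
    K = np.empty((n, n))
    for i in range(n):
        for j in range(i, n):
            K[i, j] = K[j, i] = np.exp(-dtw(series[i], series[j]) / sigma)
    return K

# usage: an SVM over a precomputed Gram matrix, on toy variable-length series
train = [np.sin(np.linspace(0, 6, 40 + 2 * k)) for k in range(3)] + \
        [np.sin(2 * np.linspace(0, 6, 40 + 2 * k)) for k in range(3)]
labels = [0, 0, 0, 1, 1, 1]
clf = SVC(kernel="precomputed").fit(distance_substituting_gram(train), labels)
```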
cs.LG math.CV | null | 1005.5170 | null | null | http://arxiv.org/pdf/1005.5170v1 | 2010-05-25T16:07:25Z | 2010-05-25T16:07:25Z | Wirtinger's Calculus in general Hilbert Spaces | The present report has been inspired by the need of the author and his
colleagues to understand the underlying theory of Wirtinger's Calculus and to
further extend it to include the kernel case. The aim of the present manuscript
is twofold: a) it endeavors to provide a more rigorous presentation of the
related material, focusing on aspects that the author finds more insightful and
b) it extends the notions of Wirtinger's calculus to general Hilbert spaces
(such as Reproducing Kernel Hilbert Spaces).
| [
"['P. Bouboulis']",
"P. Bouboulis"
] |
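For orientation (these are standard definitions, not specific to this report): Wirtinger's calculus treats a function $f$ of $z = x + iy$ through the operator pair

$$\frac{\partial f}{\partial z} = \frac{1}{2}\left(\frac{\partial f}{\partial x} - i\,\frac{\partial f}{\partial y}\right), \qquad \frac{\partial f}{\partial \bar{z}} = \frac{1}{2}\left(\frac{\partial f}{\partial x} + i\,\frac{\partial f}{\partial y}\right),$$

so $f$ is holomorphic precisely when $\partial f/\partial\bar{z} = 0$ (the Cauchy-Riemann condition), and for a real-valued cost the direction of steepest ascent is given by $\partial f/\partial\bar{z}$, which is why these operators matter for gradient-type learning algorithms.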
cs.LG cs.DS | null | 1005.5197 | null | null | http://arxiv.org/pdf/1005.5197v2 | 2012-09-01T22:23:31Z | 2010-05-28T00:37:22Z | Ranked bandits in metric spaces: learning optimally diverse rankings
over large document collections | Most learning to rank research has assumed that the utility of different
documents is independent, which results in learned ranking functions that
return redundant results. The few approaches that avoid this have rather
unsatisfyingly lacked theoretical foundations, or do not scale. We present a
learning-to-rank formulation that optimizes the fraction of satisfied users,
with several scalable algorithms that explicitly take document similarity and
ranking context into account. Our formulation is a non-trivial common
generalization of two multi-armed bandit models from the literature: "ranked
bandits" (Radlinski et al., ICML 2008) and "Lipschitz bandits" (Kleinberg et
al., STOC 2008). We present theoretical justifications for this approach, as
well as a near-optimal algorithm. Our evaluation adds optimizations that
improve empirical performance, and shows that our algorithms learn orders of
magnitude more quickly than previous approaches.
| [
"Aleksandrs Slivkins, Filip Radlinski and Sreenivas Gollapudi",
"['Aleksandrs Slivkins' 'Filip Radlinski' 'Sreenivas Gollapudi']"
] |
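For orientation, the "ranked bandits" building block that this paper generalizes can be sketched as one bandit per rank slot, with reward 1 going to the slot whose document attracted the first click. This is a simplified reconstruction of Radlinski et al.'s meta-algorithm, not the paper's metric-space variant; `user_clicks` is a hypothetical feedback callback.

```python
import numpy as np

class UCB1:
    """Standard UCB1 over a fixed arm set (here: document ids)."""
    def __init__(self, n_arms):
        self.counts = np.zeros(n_arms)
        self.values = np.zeros(n_arms)

    def select(self, exclude=()):
        t = self.counts.sum() + 1
        ucb = self.values + np.sqrt(2 * np.log(t) / np.maximum(self.counts, 1e-9))
        ucb[self.counts == 0] = np.inf      # try every arm once
        for a in exclude:
            ucb[a] = -np.inf                # cannot repeat a document
        return int(np.argmax(ucb))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

def ranked_bandits_round(slots, user_clicks):
    """One round: slot k's bandit picks the doc at rank k; the slot whose
    document earned the first click gets reward 1, all others get 0."""
    ranking, chosen = [], set()
    for bandit in slots:
        doc = bandit.select(exclude=chosen)
        ranking.append(doc)
        chosen.add(doc)
    first = user_clicks(ranking)            # index of first clicked rank, or None
    for k, bandit in enumerate(slots):
        bandit.update(ranking[k], 1.0 if first == k else 0.0)
    return ranking
```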
cs.CL cs.AI cs.HC cs.LG | null | 1005.5253 | null | null | http://arxiv.org/pdf/1005.5253v1 | 2010-05-28T09:41:50Z | 2010-05-28T09:41:50Z | Using Soft Constraints To Learn Semantic Models Of Descriptions Of
Shapes | The contribution of this paper is to provide a semantic model (using soft
constraints) of the words used by web-users to describe objects in a language
game; a game in which one user describes a selected object of those composing
the scene, and another user has to guess which object has been described. The
given description needs to be unambiguous and accurate enough to allow other
users to guess the described shape correctly.
To build these semantic models the descriptions need to be analyzed to
extract the syntax and words' classes used. We have modeled the meaning of
these descriptions using soft constraints as a way for grounding the meaning.
The descriptions generated by the system took into account the context of the
object to avoid ambiguous descriptions, and allowed users to guess the
described object correctly 72% of the time.
| [
"Sergio Guadarrama (1) and David P. Pancho (1) ((1) European Centre for\n Soft Computing)",
"['Sergio Guadarrama' 'David P. Pancho']"
] |
cs.LG stat.ML | null | 1005.5337 | null | null | http://arxiv.org/pdf/1005.5337v1 | 2010-05-28T17:25:05Z | 2010-05-28T17:25:05Z | Using a Kernel Adatron for Object Classification with RCS Data | Rapid identification of objects from radar cross section (RCS) signals is
important for many space and military applications. This identification is a
problem in pattern recognition for which either neural networks or support
vector machines should prove to be high-speed solutions. Bayesian networks would also provide
value but require significant preprocessing of the signals. In this paper, we
describe the use of a support vector machine for object identification from
synthesized RCS data. Our best results are from data fusion of X-band and
S-band signals, where we obtained 99.4%, 95.3%, 100% and 95.6% correct
identification for cylinders, frusta, spheres, and polygons, respectively. We
also compare our results with a Bayesian approach and show that the SVM is
three orders of magnitude faster, as measured by the number of floating point
operations.
| [
"['Marten F. Byl' 'James T. Demers' 'Edward A. Rietman']",
"Marten F. Byl, James T. Demers, and Edward A. Rietman"
] |
cs.LG | null | 1005.5462 | null | null | http://arxiv.org/pdf/1005.5462v2 | 2010-06-12T10:40:53Z | 2010-05-29T15:27:16Z | On the clustering aspect of nonnegative matrix factorization | This paper provides a theoretical explanation on the clustering aspect of
nonnegative matrix factorization (NMF). We prove that even without imposing an
orthogonality or sparsity constraint on the basis and/or coefficient matrix,
NMF can still give clustering results, thus providing theoretical support for
many works, e.g., Xu et al. [1] and Kim et al. [2], that show the superiority
of the standard NMF as a clustering method.
| [
"['Andri Mirzal' 'Masashi Furukawa']",
"Andri Mirzal and Masashi Furukawa"
] |
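A quick way to see the clustering usage the abstract refers to: run plain (unconstrained) multiplicative-update NMF and read cluster labels off the coefficient matrix. A minimal sketch, with iteration count and initialization chosen arbitrarily:

```python
import numpy as np

def nmf_cluster(X, k, iters=300, eps=1e-9):
    """Plain multiplicative-update NMF (Lee-Seung), X ~= W H with all factors
    nonnegative. No orthogonality or sparsity constraint is imposed; cluster
    labels are read off as the dominant basis component per sample, the usage
    the paper gives theoretical support for."""
    rng = np.random.default_rng(0)
    n, m = X.shape
    W = rng.random((n, k)) + eps
    H = rng.random((k, m)) + eps
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    labels = H.argmax(axis=0)   # cluster assignment per column (sample)
    return W, H, labels
```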
cs.LG cs.AI cs.NE | null | 1005.5556 | null | null | http://arxiv.org/pdf/1005.5556v2 | 2010-06-03T14:45:54Z | 2010-05-30T19:28:01Z | Empirical learning aided by weak domain knowledge in the form of feature
importance | Standard hybrid learners that use domain knowledge require strong knowledge
that is hard and expensive to acquire. However, weaker domain knowledge can
still deliver the benefits of prior knowledge while being cost-effective. Weak
knowledge in the form of feature relative importance (FRI) is presented and
explained. Feature relative importance is a real-valued approximation of a
feature's importance provided by experts. The advantage of using this
knowledge is demonstrated by IANN, a modified multilayer neural network
algorithm. IANN is a very simple modification of the standard neural network
algorithm but attains significant performance gains. Experimental results in
the field of molecular biology show higher performance over other empirical
learning algorithms, including standard backpropagation and support vector
machines. IANN performance is even comparable to KBANN, a theory refinement
system that uses stronger domain knowledge. This shows that feature relative
importance can significantly improve the performance of existing empirical
learning algorithms with minimal effort.
| [
"['Ridwan Al Iqbal']",
"Ridwan Al Iqbal"
] |
cs.LG | null | 1005.5581 | null | null | http://arxiv.org/pdf/1005.5581v2 | 2010-10-29T09:44:44Z | 2010-05-31T03:59:35Z | Multi-View Active Learning in the Non-Realizable Case | The sample complexity of active learning under the realizability assumption
has been well-studied. The realizability assumption, however, rarely holds in
practice. In this paper, we theoretically characterize the sample complexity of
active learning in the non-realizable case under multi-view setting. We prove
that, with unbounded Tsybakov noise, the sample complexity of multi-view active
learning can be $\widetilde{O}(\log\frac{1}{\epsilon})$, in contrast to the
single-view setting, where a polynomial improvement is the best possible
achievement. We also prove that in the general multi-view setting the sample
complexity of active learning with unbounded Tsybakov noise is
$\widetilde{O}(\frac{1}{\epsilon})$, where the order of $1/\epsilon$ is
independent of the parameter in Tsybakov noise, in contrast to previous
polynomial bounds, where the order of $1/\epsilon$ is related to the parameter
in Tsybakov noise.
| [
"Wei Wang, Zhi-Hua Zhou",
"['Wei Wang' 'Zhi-Hua Zhou']"
] |
cs.LG cs.IT math.IT math.ST stat.TH | null | 1005.5603 | null | null | http://arxiv.org/pdf/1005.5603v3 | 2014-12-27T00:16:49Z | 2010-05-31T06:58:11Z | On the Relation between Realizable and Nonrealizable Cases of the
Sequence Prediction Problem | A sequence $x_1,\dots,x_n,\dots$ of discrete-valued observations is generated
according to some unknown probabilistic law (measure) $\mu$. After observing
each outcome, one is required to give conditional probabilities of the next
observation. The realizable case is when the measure $\mu$ belongs to an
arbitrary but known class $\mathcal C$ of process measures. The non-realizable
case is when $\mu$ is completely arbitrary, but the prediction performance is
measured with respect to a given set $\mathcal C$ of process measures. We are
interested in the relations between these problems and between their solutions,
as well as in characterizing the cases when a solution exists and finding these
solutions. We show that if the quality of prediction is measured using the
total variation distance, then these problems coincide, while if it is measured
using the expected average KL divergence, then they are different. For some of
the formalizations we also show that when a solution exists, it can be obtained
as a Bayes mixture over a countable subset of $\mathcal C$. We also obtain
several characterizations of those sets $\mathcal C$ for which solutions to the
considered problems exist. As an illustration to the general results obtained,
we show that a solution to the non-realizable case of the sequence prediction
problem exists for the set of all finite-memory processes, but does not exist
for the set of all stationary processes.
It should be emphasized that the framework is completely general: the
process measures considered are not required to be i.i.d., mixing,
stationary, or to belong to any parametric family.
| [
"Daniil Ryabko (INRIA Lille)",
"['Daniil Ryabko']"
] |
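The "Bayes mixture over a countable subset of $\mathcal C$" solution form has a simple concrete instance. The sketch below uses a toy countable class of i.i.d. Bernoulli measures, far narrower than the paper's general process setting, but it shows the mechanics: weights updated by likelihood, predictions by weighted averaging.

```python
import numpy as np

def bayes_mixture_predictor(xs, params, prior=None):
    """Sequential next-symbol probabilities from a Bayes mixture over a
    countable class of Bernoulli(p) measures: after each outcome, weights are
    multiplied by the likelihood and renormalized; the prediction is the
    weighted average of the candidate measures."""
    params = np.asarray(params, dtype=float)
    w = np.full(len(params), 1.0 / len(params)) if prior is None \
        else np.asarray(prior, dtype=float)
    preds = []
    for x in xs:
        preds.append(float(w @ params))              # P(next symbol = 1 | past)
        likelihood = params if x == 1 else 1.0 - params
        w = w * likelihood
        w /= w.sum()
    return preds

# usage: a Bernoulli(0.7) stream and a grid of candidate measures
rng = np.random.default_rng(1)
stream = (rng.random(200) < 0.7).astype(int)
print(bayes_mixture_predictor(stream, np.linspace(0.05, 0.95, 19))[-1])
```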
cs.IT cs.LG math.IT stat.ML | null | 1006.0375 | null | null | http://arxiv.org/pdf/1006.0375v1 | 2010-06-02T13:47:12Z | 2010-06-02T13:47:12Z | Information theoretic model validation for clustering | Model selection in clustering requires (i) to specify a suitable clustering
principle and (ii) to control the model order complexity by choosing an
appropriate number of clusters depending on the noise level in the data. We
advocate an information theoretic perspective where the uncertainty in the
measurements quantizes the set of data partitionings and, thereby, induces
uncertainty in the solution space of clusterings. A clustering model, which can
tolerate a higher level of fluctuations in the measurements than alternative
models, is considered to be superior provided that the clustering solution is
equally informative. This tradeoff between \emph{informativeness} and
\emph{robustness} is used as a model selection criterion. The requirement that
data partitionings should generalize from one data set to an equally probable
second data set gives rise to a new notion of structure induced information.
| [
"Joachim M. Buhmann",
"['Joachim M. Buhmann']"
] |
cs.LG | null | 1006.0475 | null | null | http://arxiv.org/pdf/1006.0475v1 | 2010-06-02T19:41:27Z | 2010-06-02T19:41:27Z | Prediction with Advice of Unknown Number of Experts | In the framework of prediction with expert advice, we consider a recently
introduced kind of regret bounds: bounds that depend on the effective rather
than the nominal number of experts. In contrast to the NormalHedge bound,
which mainly depends on the effective number of experts and also weakly depends
on the nominal one, we obtain a bound that does not contain the nominal number
of experts at all. We use the defensive forecasting method and introduce an
application of defensive forecasting to multivalued supermartingales.
| [
"Alexey Chernov and Vladimir Vovk",
"['Alexey Chernov' 'Vladimir Vovk']"
] |
cs.LG | 10.1109/GrC.2010.102 | 1006.1129 | null | null | http://arxiv.org/abs/1006.1129v2 | 2010-08-22T23:26:20Z | 2010-06-06T18:21:06Z | Predictive PAC learnability: a paradigm for learning from exchangeable
input data | Exchangeable random variables form an important and well-studied
generalization of i.i.d. variables; however, simple examples show that no
nontrivial concept or function classes are PAC learnable under general
exchangeable data inputs $X_1,X_2,\ldots$. Inspired by the work of Berti and
Rigo on a Glivenko--Cantelli theorem for exchangeable inputs, we propose a new
paradigm, adequate for learning from exchangeable data: predictive PAC
learnability. A learning rule $\mathcal L$ for a function class $\mathscr F$ is
predictive PAC if for every $\epsilon,\delta>0$ and each function $f\in
{\mathscr F}$, whenever $|\sigma|\geq s(\delta,\epsilon)$, we have with
confidence $1-\delta$ that the expected difference between $f(X_{n+1})$ and
the image of $f\vert\sigma$ under $\mathcal L$ does not exceed $\epsilon$
conditionally on $X_1,X_2,\ldots,X_n$. Thus, instead of learning the function
$f$ as such, we are learning to a given accuracy $\epsilon$ the predictive
behaviour of $f$ at the
future points $X_i(\omega)$, $i>n$ of the sample path. Using de Finetti's
theorem, we show that if a universally separable function class $\mathscr F$ is
distribution-free PAC learnable under i.i.d. inputs, then it is
distribution-free predictive PAC learnable under exchangeable inputs, with a
slightly worse sample complexity.
| [
"['Vladimir Pestov']",
"Vladimir Pestov"
] |
cs.LG stat.ML | null | 1006.1138 | null | null | http://arxiv.org/pdf/1006.1138v3 | 2014-08-12T16:44:00Z | 2010-06-06T21:05:27Z | Online Learning via Sequential Complexities | We consider the problem of sequential prediction and provide tools to study
the minimax value of the associated game. Classical statistical learning theory
provides several useful complexity measures to study learning with i.i.d. data.
Our proposed sequential complexities can be seen as extensions of these
measures to the sequential setting. The developed theory is shown to yield
precise learning guarantees for the problem of sequential prediction. In
particular, we show necessary and sufficient conditions for online learnability
in the setting of supervised learning. Several examples show the utility of our
framework: we can establish learnability without having to exhibit an explicit
online learning algorithm.
| [
"Alexander Rakhlin, Karthik Sridharan, Ambuj Tewari",
"['Alexander Rakhlin' 'Karthik Sridharan' 'Ambuj Tewari']"
] |
cs.LG | null | 1006.1288 | null | null | http://arxiv.org/pdf/1006.1288v2 | 2011-01-31T09:59:44Z | 2010-06-07T16:20:02Z | Regression on fixed-rank positive semidefinite matrices: a Riemannian
approach | The paper addresses the problem of learning a regression model parameterized
by a fixed-rank positive semidefinite matrix. The focus is on the nonlinear
nature of the search space and on scalability to high-dimensional problems. The
mathematical developments rely on the theory of gradient descent algorithms
adapted to the Riemannian geometry that underlies the set of fixed-rank
positive semidefinite matrices. In contrast with previous contributions in the
literature, no restrictions are imposed on the range space of the learned
matrix. The resulting algorithms maintain a linear complexity in the problem
size and enjoy important invariance properties. We apply the proposed
algorithms to the problem of learning a distance function parameterized by a
positive semidefinite matrix. Good performance is observed on classical
benchmarks.
| [
"['Gilles Meyer' 'Silvere Bonnabel' 'Rodolphe Sepulchre']",
"Gilles Meyer, Silvere Bonnabel, Rodolphe Sepulchre"
] |
cs.LG cs.AI stat.AP stat.ML | null | 1006.1328 | null | null | http://arxiv.org/pdf/1006.1328v1 | 2010-06-07T18:45:46Z | 2010-06-07T18:45:46Z | Uncovering the Riffled Independence Structure of Rankings | Representing distributions over permutations can be a daunting task due to
the fact that the number of permutations of $n$ objects scales factorially in
$n$. One recent way to reduce storage complexity has been to
exploit probabilistic independence, but as we argue, full independence
assumptions impose strong sparsity constraints on distributions and are
unsuitable for modeling rankings. We identify a novel class of independence
structures, called \emph{riffled independence}, encompassing a more expressive
family of distributions while retaining many of the properties necessary for
performing efficient inference and reducing sample complexity. In riffled
independence, one draws two permutations independently, then performs the
\emph{riffle shuffle}, common in card games, to combine the two permutations to
form a single permutation. Within the context of ranking, riffled independence
corresponds to ranking disjoint sets of objects independently, then
interleaving those rankings. In this paper, we provide a formal introduction to
riffled independence and present algorithms for using riffled independence
within Fourier-theoretic frameworks which have been explored by a number of
recent papers. Additionally, we propose an automated method for discovering
sets of items which are riffle independent from a training set of rankings. We
show that our clustering-like algorithms can be used to discover meaningful
latent coalitions from real preference ranking datasets and to learn the
structure of hierarchically decomposable models based on riffled independence.
| [
"Jonathan Huang and Carlos Guestrin",
"['Jonathan Huang' 'Carlos Guestrin']"
] |
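The generative story in the abstract is easy to state in code: rank two disjoint item sets independently, then interleave. In this sketch the interleaving is drawn uniformly, a simplifying assumption; the paper allows a general distribution over interleavings.

```python
import numpy as np

def riffled_independent_sample(set_a, set_b, rng):
    """Draw rankings of two disjoint item sets independently, then riffle
    (interleave) them, preserving within-set relative order. The positions
    for set_a's items are chosen uniformly among all interleavings here."""
    a = list(rng.permutation(set_a))     # independent ranking of first set
    b = list(rng.permutation(set_b))     # independent ranking of second set
    n = len(a) + len(b)
    slots = rng.choice(n, size=len(a), replace=False)
    ranking = [None] * n
    for pos in sorted(slots):
        ranking[pos] = a.pop(0)
    for pos in range(n):
        if ranking[pos] is None:
            ranking[pos] = b.pop(0)
    return ranking

rng = np.random.default_rng(0)
print(riffled_independent_sample(["x1", "x2"], ["y1", "y2", "y3"], rng))
```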
cs.GT cs.LG stat.ML | null | 1006.1746 | null | null | http://arxiv.org/pdf/1006.1746v1 | 2010-06-09T09:02:44Z | 2010-06-09T09:02:44Z | Calibration and Internal no-Regret with Partial Monitoring | Calibrated strategies can be obtained by performing strategies that have no
internal regret in some auxiliary game. Such strategies can be constructed
explicitly with the use of Blackwell's approachability theorem, in another
auxiliary game. We establish the converse: a strategy that approaches a convex
$B$-set can be derived from the construction of a calibrated strategy. We
develop these tools in the framework of a game with partial monitoring, where
players do not observe the actions of their opponents but receive random
signals, to define a notion of internal regret and construct strategies that
have no such regret.
| [
"Vianney Perchet (EC)",
"['Vianney Perchet']"
] |
cs.LG | null | 1006.2156 | null | null | http://arxiv.org/pdf/1006.2156v1 | 2010-06-10T21:19:28Z | 2010-06-10T21:19:28Z | Dyadic Prediction Using a Latent Feature Log-Linear Model | In dyadic prediction, labels must be predicted for pairs (dyads) whose
members possess unique identifiers and, sometimes, additional features called
side-information. Special cases of this problem include collaborative filtering
and link prediction. We present the first model for dyadic prediction that
satisfies several important desiderata: (i) labels may be ordinal or nominal,
(ii) side-information can be easily exploited if present, (iii) with or without
side-information, latent features are inferred for dyad members, (iv) it is
resistant to sample-selection bias, (v) it can learn well-calibrated
probabilities, and (vi) it can scale to very large datasets. To our knowledge,
no existing method satisfies all the above criteria. In particular, many
methods assume that the labels are ordinal and ignore side-information when it
is present. Experimental results show that the new method is competitive with
state-of-the-art methods for the special cases of collaborative filtering and
link prediction, and that it makes accurate predictions on nominal data.
| [
"['Aditya Krishna Menon' 'Charles Elkan']",
"Aditya Krishna Menon and Charles Elkan"
] |
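A stripped-down instance of the latent feature log-linear idea for binary labels (ignoring the side-information and ordinal/multiclass machinery the paper handles): logistic matrix factorization trained by SGD on log-loss, which is what yields calibrated probabilities. Hyperparameters and names here are placeholders, not the authors' implementation.

```python
import numpy as np

def lfl_binary(train, n_rows, n_cols, k=8, lr=0.05, reg=0.01, epochs=20, seed=0):
    """Minimal latent-feature log-linear sketch for binary dyadic labels:
    P(y=1 | (i,j)) = sigmoid(u_i . v_j + b), fit by SGD on log-loss.
    `train` is an iterable of (row id, col id, 0/1 label) triples."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n_rows, k))
    V = 0.1 * rng.standard_normal((n_cols, k))
    b = 0.0
    for _ in range(epochs):
        for i, j, y in train:
            p = 1.0 / (1.0 + np.exp(-(U[i] @ V[j] + b)))
            g = p - y                      # gradient of log-loss w.r.t. score
            U[i], V[j] = U[i] - lr * (g * V[j] + reg * U[i]), \
                         V[j] - lr * (g * U[i] + reg * V[j])
            b -= lr * g
    return U, V, b
```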
cs.IT cs.LG math.IT | 10.1109/TSP.2011.2171953 | 1006.2513 | null | null | http://arxiv.org/abs/1006.2513v3 | 2013-08-25T17:12:28Z | 2010-06-13T06:07:09Z | On the Achievability of Cram\'er-Rao Bound In Noisy Compressed Sensing | Recently, it has been proved in Babadi et al. that in noisy compressed
sensing, a joint typical estimator can asymptotically achieve the Cramer-Rao
lower bound of the problem. To prove this result, that paper used a lemma,
which is provided in Akcakaya et al., that comprises the main building block
of the proof. This lemma is based on the assumption of Gaussianity of the
measurement matrix and its randomness in the domain of noise. In this
correspondence, we generalize the results obtained in Babadi et al. by
dropping the Gaussianity assumption on the measurement matrix. In fact, by
considering the measurement matrix as a deterministic matrix in our analysis,
we find a theorem similar to the main theorem of Babadi et al. for a family of
randomly generated (but deterministic in the noise domain) measurement
matrices that satisfy a generalized condition known as the Concentration of
Measures Inequality. By this, we finally show that under our generalized
assumptions, the Cramer-Rao bound of the estimation is achievable by using the
typical estimator introduced in Babadi et al.
| [
"Rad Niazadeh, Masoud Babaie-Zadeh and Christian Jutten",
"['Rad Niazadeh' 'Masoud Babaie-Zadeh' 'Christian Jutten']"
] |
cs.LG | null | 1006.2588 | null | null | http://arxiv.org/pdf/1006.2588v1 | 2010-06-14T02:03:12Z | 2010-06-14T02:03:12Z | Agnostic Active Learning Without Constraints | We present and analyze an agnostic active learning algorithm that works
without keeping a version space. This is unlike all previous approaches where a
restricted set of candidate hypotheses is maintained throughout learning, and
only hypotheses from this set are ever returned. By avoiding this version space
approach, our algorithm sheds the computational burden and brittleness
associated with maintaining version spaces, yet still allows for substantial
improvements over supervised learning for classification.
| [
"Alina Beygelzimer, Daniel Hsu, John Langford, Tong Zhang",
"['Alina Beygelzimer' 'Daniel Hsu' 'John Langford' 'Tong Zhang']"
] |
stat.ME cs.LG stat.CO | null | 1006.2592 | null | null | http://arxiv.org/pdf/1006.2592v3 | 2011-10-17T02:23:15Z | 2010-06-14T02:51:41Z | Outlier Detection Using Nonconvex Penalized Regression | This paper studies the outlier detection problem from the point of view of
penalized regressions. Our regression model adds one mean shift parameter for
each of the $n$ data points. We then apply a regularization favoring a sparse
vector of mean shift parameters. The usual $L_1$ penalty yields a convex
criterion, but we find that it fails to deliver a robust estimator. The $L_1$
penalty corresponds to soft thresholding. We introduce a thresholding (denoted
by $\Theta$) based iterative procedure for outlier detection ($\Theta$-IPOD). A
version based on hard thresholding correctly identifies outliers on some hard
test problems. We find that $\Theta$-IPOD is much faster than iteratively
reweighted least squares for large data because each iteration costs at most
$O(np)$ (and sometimes much less) avoiding an $O(np^2)$ least squares estimate.
We describe the connection between $\Theta$-IPOD and $M$-estimators. Our
proposed method has one tuning parameter with which to both identify outliers
and estimate regression coefficients. A data-dependent choice can be made based
on BIC. The tuned $\Theta$-IPOD shows outstanding performance in identifying
outliers in various situations in comparison to other existing approaches. This
methodology extends to high-dimensional modeling with $p\gg n$, if both the
coefficient vector and the outlier pattern are sparse.
| [
"['Yiyuan She' 'Art B. Owen']",
"Yiyuan She and Art B. Owen"
] |
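A minimal reconstruction of the alternation behind $\Theta$-IPOD with hard thresholding (simplified; the exact update order and the BIC-based tuning follow the paper): fit $\beta$ by least squares on the shift-corrected response, threshold the residuals to update the mean-shift vector $\gamma$, and read outliers off the nonzero entries of $\gamma$.

```python
import numpy as np

def hard_threshold(v, lam):
    """Hard thresholding: zero out entries with |v_i| <= lam."""
    return np.where(np.abs(v) > lam, v, 0.0)

def theta_ipod(X, y, lam, iters=100):
    """Sketch of a Theta-IPOD-style alternation for y = X beta + gamma + noise
    with a sparse mean-shift vector gamma flagging outliers."""
    gamma = np.zeros(len(y))
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        beta, *_ = np.linalg.lstsq(X, y - gamma, rcond=None)
        gamma = hard_threshold(y - X @ beta, lam)
    return beta, gamma

# usage: flag injected outliers on synthetic data
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(100)
y[:5] += 8.0                                # inject outliers
beta, gamma = theta_ipod(X, y, lam=3.0)
print(np.nonzero(gamma)[0])                 # likely [0 1 2 3 4]
```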
cs.LG cs.AI | null | 1006.2899 | null | null | http://arxiv.org/pdf/1006.2899v2 | 2012-07-09T18:22:27Z | 2010-06-15T06:55:03Z | Approximated Structured Prediction for Learning Large Scale Graphical
Models | This manuscript contains the proofs for "A Primal-Dual Message-Passing
Algorithm for Approximated Large Scale Structured Prediction".
| [
"Tamir Hazan, Raquel Urtasun",
"['Tamir Hazan' 'Raquel Urtasun']"
] |
cs.LG | 10.1109/TSP.2010.2096420 | 1006.3033 | null | null | http://arxiv.org/abs/1006.3033v3 | 2010-11-27T09:06:13Z | 2010-06-15T17:09:01Z | Extension of Wirtinger's Calculus to Reproducing Kernel Hilbert Spaces
and the Complex Kernel LMS | Over the last decade, kernel methods for nonlinear processing have
successfully been used in the machine learning community. The primary
mathematical tool employed in these methods is the notion of the Reproducing
Kernel Hilbert Space. However, so far, the emphasis has been on batch
techniques. It is only recently, that online techniques have been considered in
the context of adaptive signal processing tasks. Moreover, these efforts have
focused only on real-valued data sequences. To the best of our knowledge,
no adaptive kernel-based strategy has been developed, so far, for complex
valued signals. Furthermore, although the real reproducing kernels are used in
an increasing number of machine learning problems, complex kernels have not,
yet, been used, in spite of their potential interest in applications that deal
with complex signals, with Communications being a typical example. In this
paper, we present a general framework to attack the problem of adaptive
filtering of complex signals, using either real reproducing kernels, taking
advantage of a technique called \textit{complexification} of real RKHSs, or
complex reproducing kernels, highlighting the use of the complex Gaussian
kernel. In order to derive gradients of operators that need to be defined on
the associated complex RKHSs, we employ the powerful tool of Wirtinger's
Calculus, which has recently attracted attention in the signal processing
community. To this end, in this paper, the notion of Wirtinger's calculus is
extended, for the first time, to include complex RKHSs and use it to derive
several realizations of the Complex Kernel Least-Mean-Square (CKLMS) algorithm.
Experiments verify that the CKLMS offers significant performance improvements
over several linear and nonlinear algorithms, when dealing with nonlinearities.
| [
"Pantelis Bouboulis and Sergios Theodoridis",
"['Pantelis Bouboulis' 'Sergios Theodoridis']"
] |
cs.GT cs.CR cs.LG | 10.1109/CCA.2010.5611248 | 1006.3417 | null | null | http://arxiv.org/abs/1006.3417v1 | 2010-06-17T10:13:22Z | 2010-06-17T10:13:22Z | Fictitious Play with Time-Invariant Frequency Update for Network
Security | We study two-player security games which can be viewed as sequences of
nonzero-sum matrix games played by an Attacker and a Defender. The evolution of
the game is based on a stochastic fictitious play process, where players do not
have access to each other's payoff matrix. Each has to observe the other's
actions up to the present and plays the action generated based on the best response
to these observations. In a regular fictitious play process, each player makes
a maximum likelihood estimate of her opponent's mixed strategy, which results
in a time-varying update based on the previous estimate and current action. In
this paper, we explore an alternative scheme for frequency update, whose mean
dynamic is instead time-invariant. We examine convergence properties of the
mean dynamic of the fictitious play process with such an update scheme, and
establish local stability of the equilibrium point when both players are
restricted to two actions. We also propose an adaptive algorithm based on this
time-invariant frequency update.
| [
"Kien C. Nguyen, Tansu Alpcan, Tamer Ba\\c{s}ar",
"['Kien C. Nguyen' 'Tansu Alpcan' 'Tamer Başar']"
] |
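The distinction between the two frequency updates is just a step-size schedule, which a few lines make concrete (a toy two-action illustration, not the paper's full stochastic fictitious play analysis):

```python
import numpy as np

def fp_frequency_updates(actions, eta=0.05):
    """Contrast of the two empirical-frequency updates for a two-action
    opponent: classical fictitious play uses a 1/t (time-varying) step, which
    gives the maximum-likelihood frequency estimate; the alternative uses a
    constant step eta, making the mean dynamic time-invariant.
    `actions` is a 0/1 sequence of observed plays."""
    q_classic, q_ti = 0.5, 0.5
    for t, a in enumerate(actions, start=1):
        q_classic += (a - q_classic) / t    # time-varying (ML) update
        q_ti += eta * (a - q_ti)            # time-invariant update
    return q_classic, q_ti
```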
cs.CV cs.IT cs.LG math.IT | null | 1006.3679 | null | null | http://arxiv.org/pdf/1006.3679v1 | 2010-06-18T12:37:28Z | 2010-06-18T12:37:28Z | Segmentation of Natural Images by Texture and Boundary Compression | We present a novel algorithm for segmentation of natural images that
harnesses the principle of minimum description length (MDL). Our method is
based on the observations that a homogeneously textured region of a natural
image can be well modeled by a Gaussian distribution and that the region
boundary can be effectively coded by an adaptive chain code. The optimal segmentation of an
image is the one that gives the shortest coding length for encoding all
textures and boundaries in the image, and is obtained via an agglomerative
clustering process applied to a hierarchy of decreasing window sizes as
multi-scale texture features. The optimal segmentation also provides an
accurate estimate of the overall coding length and hence the true entropy of
the image. We test our algorithm on the publicly available Berkeley
Segmentation Dataset. It achieves state-of-the-art segmentation results
compared to other existing methods.
| [
"['Hossein Mobahi' 'Shankar R. Rao' 'Allen Y. Yang' 'Shankar S. Sastry'\n 'Yi Ma']",
"Hossein Mobahi, Shankar R. Rao, Allen Y. Yang, Shankar S. Sastry and\n Yi Ma"
] |
cs.IT cs.LG math.IT math.ST stat.TH | null | 1006.3780 | null | null | http://arxiv.org/pdf/1006.3780v1 | 2010-06-18T19:35:52Z | 2010-06-18T19:35:52Z | Least Squares Superposition Codes of Moderate Dictionary Size, Reliable
at Rates up to Capacity | For the additive white Gaussian noise channel with average codeword power
constraint, new coding methods are devised in which the codewords are sparse
superpositions, that is, linear combinations of subsets of vectors from a given
design, with the possible messages indexed by the choice of subset. Decoding is
by least squares, tailored to the assumed form of linear combination.
Communication is shown to be reliable with error probability exponentially
small for all rates up to the Shannon capacity.
| [
"['Andrew R. Barron' 'Antony Joseph']",
"Andrew R. Barron, Antony Joseph"
] |
cs.IT cs.LG math.IT math.ST stat.TH | null | 1006.3870 | null | null | http://arxiv.org/pdf/1006.3870v1 | 2010-06-19T13:51:27Z | 2010-06-19T13:51:27Z | Toward Fast Reliable Communication at Rates Near Capacity with Gaussian
Noise | For the additive Gaussian noise channel with average codeword power
constraint, sparse superposition codes and adaptive successive decoding are
developed. Codewords are linear combinations of subsets of vectors, with the
message indexed by the choice of subset. A feasible decoding algorithm is
presented. Communication is reliable with error probability exponentially small
for all rates below the Shannon capacity.
| [
"Andrew R Barron, Antony Joseph",
"['Andrew R Barron' 'Antony Joseph']"
] |
cs.LG cs.AI | null | 1006.4039 | null | null | http://arxiv.org/pdf/1006.4039v3 | 2011-02-04T16:06:35Z | 2010-06-21T11:30:06Z | Distributed Autonomous Online Learning: Regrets and Intrinsic
Privacy-Preserving Properties | Online learning has become increasingly popular for handling massive data. The
sequential nature of online learning, however, requires a centralized learner
to store data and update parameters. In this paper, we consider online learning
with {\em distributed} data sources. The autonomous learners update local
parameters based on local data sources and periodically exchange information
with a small subset of neighbors in a communication network. We derive the
regret bound for strongly convex functions that generalizes the work by Ram et
al. (2010) for convex functions. Most importantly, we show that our algorithm
has \emph{intrinsic} privacy-preserving properties, and we prove the sufficient
and necessary conditions for privacy preservation in the network. These
conditions imply that for networks with greater-than-one connectivity, a
malicious learner cannot reconstruct the subgradients (and sensitive raw data)
of other learners, which makes our algorithm appealing in privacy sensitive
applications.
| [
"Feng Yan, Shreyas Sundaram, S. V. N. Vishwanathan, Yuan Qi",
"['Feng Yan' 'Shreyas Sundaram' 'S. V. N. Vishwanathan' 'Yuan Qi']"
] |
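One round of the kind of protocol the abstract describes can be sketched as a local subgradient step followed by neighbor averaging through a doubly stochastic mixing matrix; the ring topology and exchange schedule below are illustrative assumptions, and the paper's privacy analysis goes well beyond this sketch.

```python
import numpy as np

def distributed_online_round(params, grads, W, lr):
    """One decentralized online-learning round: each learner takes a
    subgradient step on its local loss, then averages parameters with its
    neighbors via the mixing matrix W.
    params, grads: arrays of shape (n_nodes, dim); W: (n_nodes, n_nodes)."""
    return W @ (params - lr * grads)

# usage: a 4-node ring with self-weight 1/2 and weight 1/4 to each neighbor
n = 4
P = np.roll(np.eye(n), 1, axis=0)           # cyclic shift (permutation matrix)
W = 0.5 * np.eye(n) + 0.25 * (P + P.T)      # doubly stochastic mixing matrix
```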
cs.PL cs.LG cs.LO | 10.1017/S1471068410000566 | 1006.4442 | null | null | http://arxiv.org/abs/1006.4442v1 | 2010-06-23T08:05:34Z | 2010-06-23T08:05:34Z | On the Implementation of the Probabilistic Logic Programming Language
ProbLog | The past few years have seen a surge of interest in the field of
probabilistic logic learning and statistical relational learning. In this
endeavor, many probabilistic logics have been developed. ProbLog is a recent
probabilistic extension of Prolog motivated by the mining of large biological
networks. In ProbLog, facts can be labeled with probabilities. These facts are
treated as mutually independent random variables that indicate whether these
facts belong to a randomly sampled program. Different kinds of queries can be
posed to ProbLog programs. We introduce algorithms that allow the efficient
execution of these queries, discuss their implementation on top of the
YAP-Prolog system, and evaluate their performance in the context of large
networks of biological entities.
| [
"Angelika Kimmig, Bart Demoen, Luc De Raedt, V\\'itor Santos Costa and\n Ricardo Rocha",
"['Angelika Kimmig' 'Bart Demoen' 'Luc De Raedt' 'Vítor Santos Costa'\n 'Ricardo Rocha']"
] |
cs.LG cs.AI cs.NE | null | 1006.4540 | null | null | http://arxiv.org/pdf/1006.4540v1 | 2010-06-23T14:53:33Z | 2010-06-23T14:53:33Z | A Novel Rough Set Reduct Algorithm for Medical Domain Based on Bee
Colony Optimization | Feature selection refers to the problem of selecting relevant features which
produce the most predictive outcome. In particular, the feature selection task
arises in datasets containing a huge number of features. Rough set theory has
been one of the most successful methods used for feature selection. However,
this method is still not able to find optimal subsets. This paper proposes a
new feature selection method based on Rough set theory hybrid with Bee Colony
Optimization (BCO) in an attempt to combat this. The proposed method is applied
in the medical domain to find the minimal reducts and experimentally compared
with the Quick Reduct, Entropy Based Reduct, and other hybrid Rough Set methods
such as Genetic Algorithm (GA), Ant Colony Optimization (ACO) and Particle
Swarm Optimization (PSO).
| [
"N. Suguna and K. Thanushkodi",
"['N. Suguna' 'K. Thanushkodi']"
] |
cs.LG | null | 1006.4832 | null | null | http://arxiv.org/pdf/1006.4832v1 | 2010-06-24T16:42:38Z | 2010-06-24T16:42:38Z | MINLIP for the Identification of Monotone Wiener Systems | This paper studies the MINLIP estimator for the identification of Wiener
systems consisting of a sequence of a linear FIR dynamical model and a
monotonically increasing (or decreasing) static function. Given $T$
observations, this algorithm boils down to solving a convex quadratic program
with $O(T)$ variables and inequality constraints, implementing an inference
technique which is based entirely on model complexity control. The resulting
estimates of the linear submodel are found to be almost consistent when no
noise is present in the data, under a condition of smoothness of the true
nonlinearity and local Persistency of Excitation (local PE) of the data. This
result is novel as it does not rely on classical tools as a 'linearization'
using a Taylor decomposition, nor exploits stochastic properties of the data.
It is indicated how to extend the method to cope with noisy data, and empirical
evidence contrasts the performance of the estimator against other recently
proposed techniques.
| [
"['Kristiaan Pelckmans']",
"Kristiaan Pelckmans"
] |
cs.LG cs.DC | null | 1006.4990 | null | null | http://arxiv.org/pdf/1006.4990v1 | 2010-06-25T13:23:48Z | 2010-06-25T13:23:48Z | GraphLab: A New Framework for Parallel Machine Learning | Designing and implementing efficient, provably correct parallel machine
learning (ML) algorithms is challenging. Existing high-level parallel
abstractions like MapReduce are insufficiently expressive while low-level tools
like MPI and Pthreads leave ML experts repeatedly solving the same design
challenges. By targeting common patterns in ML, we developed GraphLab, which
improves upon abstractions like MapReduce by compactly expressing asynchronous
iterative algorithms with sparse computational dependencies while ensuring data
consistency and achieving a high degree of parallel performance. We demonstrate
the expressiveness of the GraphLab framework by designing and implementing
parallel versions of belief propagation, Gibbs sampling, Co-EM, Lasso and
Compressed Sensing. We show that using GraphLab we can achieve excellent
parallel performance on large scale real-world problems.
| [
"['Yucheng Low' 'Joseph Gonzalez' 'Aapo Kyrola' 'Danny Bickson'\n 'Carlos Guestrin' 'Joseph M. Hellerstein']",
"Yucheng Low and Joseph Gonzalez and Aapo Kyrola and Danny Bickson and\n Carlos Guestrin and Joseph M. Hellerstein"
] |
cs.LG stat.ML | null | 1006.5051 | null | null | http://arxiv.org/pdf/1006.5051v1 | 2010-06-25T19:48:50Z | 2010-06-25T19:48:50Z | Fast ABC-Boost for Multi-Class Classification | Abc-boost is a new line of boosting algorithms for multi-class
classification, by utilizing the commonly used sum-to-zero constraint. To
implement abc-boost, a base class must be identified at each boosting step.
Prior studies used a very expensive procedure based on exhaustive search for
determining the base class at each boosting step. Good testing performances of
abc-boost (implemented as abc-mart and abc-logitboost) on a variety of datasets
were reported.
For large datasets, however, the exhaustive search strategy adopted in prior
abc-boost algorithms can be prohibitive. To overcome this serious
limitation, this paper suggests a heuristic by introducing Gaps when computing
the base class during training. That is, we update the choice of the base class
only for every $G$ boosting steps (i.e., G=1 in prior studies). We test this
idea on large datasets (Covertype and Poker) as well as datasets of moderate
sizes. Our preliminary results are very encouraging. On the large datasets,
even with G=100 (or larger), there is essentially no loss of test accuracy. On
the moderate datasets, no obvious loss of test accuracy is observed when G<=
20~50. Therefore, aided by this heuristic, it is promising that abc-boost will
be a practical tool for accurate multi-class classification.
| [
"Ping Li",
"['Ping Li']"
] |
stat.ML cs.LG stat.ME | null | 1006.5060 | null | null | http://arxiv.org/pdf/1006.5060v2 | 2010-07-01T05:06:43Z | 2010-06-25T20:27:00Z | Learning sparse gradients for variable selection and dimension reduction | Variable selection and dimension reduction are two commonly adopted
approaches for high-dimensional data analysis, but have traditionally been
treated separately. Here we propose an integrated approach, called sparse
gradient learning (SGL), for variable selection and dimension reduction via
learning the gradients of the prediction function directly from samples. By
imposing a sparsity constraint on the gradients, variable selection is achieved
by selecting variables corresponding to non-zero partial derivatives, and
effective dimensions are extracted based on the eigenvectors of the derived
sparse empirical gradient covariance matrix. An error analysis is given for the
convergence of the estimated gradients to the true ones in both the Euclidean
and the manifold setting. We also develop an efficient forward-backward
splitting algorithm to solve the SGL problem, making the framework practically
scalable for medium or large datasets. The utility of SGL for variable
selection and feature extraction is explicitly given and illustrated on
artificial data as well as real-world examples. The main advantages of our
method include variable selection for both linear and nonlinear predictions,
effective dimension reduction with sparse loadings, and an efficient algorithm
for large p, small n problems.
| [
"Gui-Bo Ye and Xiaohui Xie",
"['Gui-Bo Ye' 'Xiaohui Xie']"
] |
stat.CO cs.LG math.OC | null | 1006.5086 | null | null | http://arxiv.org/pdf/1006.5086v1 | 2010-06-26T00:17:32Z | 2010-06-26T00:17:32Z | Split Bregman method for large scale fused Lasso | Ordering of regression or classification coefficients occurs in many
real-world applications. Fused Lasso exploits this ordering by explicitly
regularizing the differences between neighboring coefficients through an
$\ell_1$ norm regularizer. However, due to nonseparability and nonsmoothness of
the regularization term, solving the fused Lasso problem is computationally
demanding. Existing solvers can only deal with problems of small or medium
size, or a special case of the fused Lasso problem in which the predictor
matrix is the identity matrix. In this paper, we propose an iterative algorithm
based on split Bregman method to solve a class of large-scale fused Lasso
problems, including a generalized fused Lasso and a fused Lasso support vector
classifier. We derive our algorithm using the augmented Lagrangian method and prove
its convergence properties. The performance of our method is tested on both
artificial data and real-world applications including proteomic data from mass
spectrometry and genomic data from array CGH. We demonstrate that our method is
many times faster than the existing solvers, and show that it is especially
efficient for large p, small n problems.
| [
"Gui-Bo Ye and Xiaohui Xie",
"['Gui-Bo Ye' 'Xiaohui Xie']"
] |
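For the special case mentioned above where the predictor matrix is the identity, the split Bregman/ADMM mechanics reduce to a short loop: introduce the split $d = D\beta$, solve a fixed linear system for $\beta$, soft-threshold for $d$, and update the Bregman (dual) variable. A sketch under those simplifications, not the paper's general solver:

```python
import numpy as np

def fused_lasso_1d(y, lam, rho=1.0, iters=300):
    """Split Bregman / ADMM sketch for the simplest fused-lasso instance:
    minimize 0.5*||y - b||^2 + lam*||D b||_1, where D takes successive
    differences. The split d = D b turns the nonseparable penalty into a
    soft-thresholding step; a general predictor matrix X and the fused-lasso
    SVM need the same splitting plus a different linear solve."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)          # (n-1, n) difference matrix
    A = np.eye(n) + rho * D.T @ D           # fixed linear system for b
    b = y.copy()
    d = np.zeros(n - 1)
    u = np.zeros(n - 1)                     # scaled Bregman/dual variable
    for _ in range(iters):
        b = np.linalg.solve(A, y + rho * D.T @ (d - u))
        Db = D @ b
        d = np.sign(Db + u) * np.maximum(np.abs(Db + u) - lam / rho, 0.0)
        u += Db - d
    return b
```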
cs.LG | null | 1006.5090 | null | null | http://arxiv.org/pdf/1006.5090v1 | 2010-06-26T01:44:57Z | 2010-06-26T01:44:57Z | PAC learnability of a concept class under non-atomic measures: a problem
by Vidyasagar | In response to a 1997 problem of M. Vidyasagar, we state a necessary and
sufficient condition for distribution-free PAC learnability of a concept class
$\mathscr C$ under the family of all non-atomic (diffuse) measures on the
domain $\Omega$. Clearly, finiteness of the classical Vapnik-Chervonenkis
dimension of $\mathscr C$ is a sufficient, but no longer necessary, condition.
Besides, learnability of $\mathscr C$ under non-atomic measures does not imply
the uniform Glivenko-Cantelli property with regard to non-atomic measures. Our
learnability criterion is stated in terms of a combinatorial parameter
$\mathrm{VC}({\mathscr C}\,\mathrm{mod}\,\omega_1)$, which we call the VC
dimension of $\mathscr C$ modulo countable sets. The new parameter is obtained
by ``thickening up'' single points in the definition of VC dimension to
uncountable ``clusters''. Equivalently,
$\mathrm{VC}(\mathscr C\,\mathrm{mod}\,\omega_1)\leq d$ if
and only if every countable subclass of $\mathscr C$ has VC dimension $\leq d$
outside a countable subset of $\Omega$. The new parameter can be also expressed
as the classical VC dimension of $\mathscr C$ calculated on a suitable subset
of a compactification of $\Omega$. We do not make any measurability assumptions
on $\mathscr C$, assuming instead the validity of Martin's Axiom (MA).
| [
"['Vladimir Pestov']",
"Vladimir Pestov"
] |
cs.AI cs.LG | null | 1006.5188 | null | null | http://arxiv.org/pdf/1006.5188v1 | 2010-06-27T08:56:11Z | 2010-06-27T08:56:11Z | Feature Construction for Relational Sequence Learning | We tackle the problem of multi-class relational sequence learning using
relevant patterns discovered from a set of labelled sequences. To deal with
this problem, firstly each relational sequence is mapped into a feature vector
using the result of a feature construction method. Since the efficacy of
sequence learning algorithms strongly depends on the features used to represent
the sequences, the second step is to find an optimal subset of the constructed
features leading to high classification accuracy. This feature selection task
has been solved adopting a wrapper approach that uses a stochastic local search
algorithm embedding a naive Bayes classifier. The performance of the proposed
method applied to a real-world dataset shows an improvement when compared to
other established methods, such as hidden Markov models, Fisher kernels and
conditional random fields for relational sequences.
| [
"['Nicola Di Mauro' 'Teresa M. A. Basile' 'Stefano Ferilli'\n 'Floriana Esposito']",
"Nicola Di Mauro and Teresa M.A. Basile and Stefano Ferilli and\n Floriana Esposito"
] |
cs.DB cs.LG | null | 1006.5261 | null | null | http://arxiv.org/pdf/1006.5261v1 | 2010-06-28T04:02:17Z | 2010-06-28T04:02:17Z | Data Stream Clustering: Challenges and Issues | Very large databases are required to store massive amounts of data that are
continuously inserted and queried. Analyzing huge data sets and extracting
valuable patterns in many applications is of interest to researchers. We can
identify two main groups of techniques for huge database mining. One group
refers to streaming data and applies mining techniques, whereas the second
group attempts to solve this problem directly with efficient algorithms.
Recently many researchers have focused on data streams as an efficient
strategy for mining huge databases instead of mining the entire database. The
main problem in data stream mining is that evolving data is more difficult to
detect with these techniques, so unsupervised methods should be applied.
However, clustering techniques can lead us to discover hidden information. In
this survey, we try to clarify: first, the different problem definitions
related to data stream clustering in general; second, the specific
difficulties encountered in this field of research; third, the varying
assumptions, heuristics, and intuitions forming the basis of different
approaches; and how several prominent solutions tackle different problems.
Index Terms- Data Stream, Clustering, K-Means, Concept drift
| [
"Madjid Khalilian, Norwati Mustapha",
"['Madjid Khalilian' 'Norwati Mustapha']"
] |
cs.IR cs.LG | null | 1006.5278 | null | null | http://arxiv.org/pdf/1006.5278v4 | 2010-12-24T07:22:48Z | 2010-06-28T07:20:28Z | A Survey Paper on Recommender Systems | Recommender systems apply data mining techniques and prediction algorithms to
predict users' interest in information, products and services among the
tremendous number of available items. The vast growth of information on the
Internet, as well as the number of visitors to websites, adds some key
challenges to recommender systems. These are: producing accurate
recommendations, handling many recommendations efficiently and coping with the
vast growth of the number of participants in the system. Therefore, new
recommender system technologies are needed that can quickly produce high
quality recommendations even for huge data sets.
To address these issues we have explored several collaborative filtering
techniques such as the item based approach, which identifies relationships
between items and indirectly computes recommendations for users based on these
relationships. The user based approach was also studied; it identifies
relationships between users of similar tastes and computes recommendations
based on these relationships.
In this paper, we introduce the topic of recommender systems. We provide ways
to evaluate the efficiency, scalability and accuracy of recommender systems.
The paper also analyzes different algorithms of user based and item based
techniques for recommendation generation. Moreover, a simple experiment was
conducted using a data mining application (Weka) to apply data mining
algorithms to recommender systems. We conclude by proposing our approach that
might enhance the quality of recommender systems.
| [
"['Dhoha Almazro' 'Ghadeer Shahatah' 'Lamia Albdulkarim' 'Mona Kherees'\n 'Romy Martinez' 'William Nzoukou']",
"Dhoha Almazro and Ghadeer Shahatah and Lamia Albdulkarim and Mona\n Kherees and Romy Martinez and William Nzoukou"
] |
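The item-based approach described above is compact enough to sketch end to end: cosine similarities between item columns of the ratings matrix, then similarity-weighted scoring of a user's unrated items. Toy data below; real systems add rating normalization, neighborhood truncation, and so on.

```python
import numpy as np

def item_based_scores(R, target_user):
    """Minimal item-based collaborative filtering: cosine similarity between
    item columns of a user-item matrix R (0 = unrated), then score unrated
    items for one user by a similarity-weighted average of their ratings."""
    norms = np.linalg.norm(R, axis=0) + 1e-12
    S = (R.T @ R) / np.outer(norms, norms)      # item-item cosine similarities
    np.fill_diagonal(S, 0.0)
    r = R[target_user]
    rated = r > 0
    denom = np.abs(S[:, rated]).sum(axis=1) + 1e-12
    scores = (S[:, rated] @ r[rated]) / denom   # predicted preference per item
    scores[rated] = -np.inf                     # do not re-recommend
    return scores

R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [1, 0, 5, 4]], dtype=float)       # toy user-item ratings
print(item_based_scores(R, target_user=0).argmax())  # best unseen item for user 0
```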
cs.LG physics.soc-ph | null | 1006.5367 | null | null | http://arxiv.org/pdf/1006.5367v1 | 2010-06-28T14:40:37Z | 2010-06-28T14:40:37Z | The Link Prediction Problem in Bipartite Networks | We define and study the link prediction problem in bipartite networks,
specializing general link prediction algorithms to the bipartite case. In a
graph, a link prediction function of two vertices denotes the similarity or
proximity of the vertices. Common link prediction functions for general graphs
are defined using paths of length two between two nodes. Since in a bipartite
graph adjacent vertices can only be connected by paths of odd lengths, these
functions do not apply to bipartite graphs. Instead, a certain class of graph
kernels (spectral transformation kernels) can be generalized to bipartite
graphs when the positive-semidefinite kernel constraint is relaxed. This
generalization is realized by the odd component of the underlying spectral
transformation. This construction leads to several new link prediction
pseudokernels such as the matrix hyperbolic sine, which we examine for rating
graphs, authorship graphs, folksonomies, document--feature networks and other
types of bipartite networks.
| [
"['Jérôme Kunegis' 'Ernesto W. De Luca' 'Sahin Albayrak']",
"J\\'er\\^ome Kunegis and Ernesto W. De Luca and Sahin Albayrak"
] |
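The matrix hyperbolic sine mentioned at the end has a direct SVD implementation: for a biadjacency matrix $B = USV^\top$, the odd component of the exponential kernel of the full bipartite adjacency matrix has off-diagonal block $U\sinh(S)V^\top$, since odd powers of the adjacency matrix are exactly the ones connecting the two vertex sets. A small sketch with toy data:

```python
import numpy as np

def sinh_pseudokernel_scores(B):
    """Bipartite link-prediction scores via the odd spectral transformation:
    apply sinh to the singular values of the biadjacency matrix B, i.e.
    score = U sinh(S) V^T. The result is generally indefinite, hence
    'pseudokernel'."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (U * np.sinh(s)) @ Vt

B = np.array([[1, 1, 0],
              [1, 0, 0],
              [0, 1, 1]], dtype=float)     # e.g. users x items adjacency
scores = sinh_pseudokernel_scores(B)
print(np.round(scores, 2))                 # high entries at absent edges = predicted links
```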
math.ST cs.LG math.PR stat.TH | null | 1007.0296 | null | null | http://arxiv.org/pdf/1007.0296v2 | 2012-02-15T21:56:08Z | 2010-07-02T05:10:49Z | A Bayesian View of the Poisson-Dirichlet Process | The two parameter Poisson-Dirichlet Process (PDP), a generalisation of the
Dirichlet Process, is increasingly being used for probabilistic modelling in
discrete areas such as language technology, bioinformatics, and image analysis.
There is a rich literature about the PDP and its derivative distributions such
as the Chinese Restaurant Process (CRP). This article reviews some of the basic
theory and then the major results needed for Bayesian modelling of discrete
problems including details of priors, posteriors and computation.
The PDP allows one to build distributions over countable partitions. The PDP
has two other remarkable properties: first it is partially conjugate to itself,
which allows one to build hierarchies of PDPs, and second, using a marginalised
relative, the CRP, one gets fragmentation and clustering properties that let
one layer partitions to build trees. This article presents the basic theory for
understanding the notion of partitions and distributions over them, the PDP and
the CRP, and the important properties of conjugacy, fragmentation and
clustering, as well as some key related properties such as consistency and
convergence. This article also presents a Bayesian interpretation of the
Poisson-Dirichlet process based on an improper and infinite dimensional
Dirichlet distribution. This means we can understand the process as just
another Dirichlet and thus all its sampling properties emerge naturally.
The theory of PDPs is usually presented for continuous distributions (more
generally referred to as non-atomic distributions), however, when applied to
discrete distributions its remarkable conjugacy property emerges. This context
and basic results are also presented, as well as techniques for computing the
second order Stirling numbers that occur in the posteriors for discrete
distributions.
| [
"Wray Buntine and Marcus Hutter",
"['Wray Buntine' 'Marcus Hutter']"
] |
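The CRP relative of the two-parameter PDP mentioned above is easy to sample from, which makes its clustering behaviour concrete. A sketch of two-parameter (Pitman-Yor) CRP seating, assuming $0 \le \text{discount} < 1$ and concentration $> -\text{discount}$:

```python
import numpy as np

def crp_sample(n, discount, concentration, rng):
    """Seat n customers by the two-parameter Chinese Restaurant Process:
    customer i+1 joins occupied table k with probability proportional to
    (n_k - discount), or opens a new table with probability proportional to
    (concentration + discount * #tables). Returns the table sizes, one draw
    of the random partition the PDP induces."""
    tables = []                                  # occupancy counts
    for i in range(n):
        weights = [nk - discount for nk in tables]
        weights.append(concentration + discount * len(tables))
        weights = np.array(weights) / (i + concentration)   # sums to 1
        k = rng.choice(len(weights), p=weights)
        if k == len(tables):
            tables.append(1)                     # open a new table
        else:
            tables[k] += 1
    return tables

rng = np.random.default_rng(42)
print(crp_sample(100, discount=0.5, concentration=1.0, rng=rng))
```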
cs.NA cs.LG | null | 1007.0380 | null | null | http://arxiv.org/pdf/1007.0380v1 | 2010-07-01T17:40:01Z | 2010-07-01T17:40:01Z | Additive Non-negative Matrix Factorization for Missing Data | Non-negative matrix factorization (NMF) has previously been shown to be a
useful decomposition for multivariate data. We interpret the factorization in a
new way and use it to generate missing attributes from test data. We provide a
joint optimization scheme for the missing attributes as well as the NMF
factors. We prove the monotonic convergence of our algorithms. We present
classification results for cases with missing attributes.
| [
"['Mithun Das Gupta']",
"Mithun Das Gupta"
] |
cs.IT cs.LG math.IT | null | 1007.0481 | null | null | http://arxiv.org/pdf/1007.0481v1 | 2010-07-03T08:36:57Z | 2010-07-03T08:36:57Z | IMP: A Message-Passing Algorithm for Matrix Completion | A new message-passing (MP) method is considered for the matrix completion
problem associated with recommender systems. We attack the problem using a
(generative) factor graph model that is related to a probabilistic low-rank
matrix factorization. Based on the model, we propose a new algorithm, termed
IMP, for the recovery of a data matrix from incomplete observations. The
algorithm is based on a clustering followed by inference via MP (IMP). The
algorithm is compared with a number of other matrix completion algorithms on
real collaborative filtering (e.g., Netflix) data matrices. Our results show
that, while many methods perform similarly with a large number of revealed
entries, the IMP algorithm outperforms all others when the fraction of observed
entries is small. This is helpful because it reduces the well-known cold-start
problem associated with collaborative filtering (CF) systems in practice.
| [
"['Byung-Hak Kim' 'Arvind Yedla' 'Henry D. Pfister']",
"Byung-Hak Kim, Arvind Yedla, and Henry D. Pfister"
] |
cs.LG cs.CR cs.GT | null | 1007.0484 | null | null | http://arxiv.org/pdf/1007.0484v1 | 2010-07-03T09:04:44Z | 2010-07-03T09:04:44Z | Query Strategies for Evading Convex-Inducing Classifiers | Classifiers are often used to detect miscreant activities. We study how an
adversary can systematically query a classifier to elicit information that
allows the adversary to evade detection while incurring a near-minimal cost of
modifying their intended malfeasance. We generalize the theory of Lowd and Meek
(2005) to the family of convex-inducing classifiers that partition input space
into two sets, one of which is convex. We present query algorithms for this
family that construct undetected instances of approximately minimal cost using
only polynomially-many queries in the dimension of the space and in the level
of approximation. Our results demonstrate that near-optimal evasion can be
accomplished without reverse-engineering the classifier's decision boundary. We
also consider general lp costs and show that near-optimal evasion on the family
of convex-inducing classifiers is generally efficient for both positive and
negative convexity for all levels of approximation if p=1.
| [
"['Blaine Nelson' 'Benjamin I. P. Rubinstein' 'Ling Huang'\n 'Anthony D. Joseph' 'Steven J. Lee' 'Satish Rao' 'J. D. Tygar']",
"Blaine Nelson and Benjamin I. P. Rubinstein and Ling Huang and Anthony\n D. Joseph and Steven J. Lee and Satish Rao and J. D. Tygar"
] |
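A core primitive behind query-based evasion of the kind studied above is a line search that brackets the decision boundary using membership queries only. The sketch below is a generic bisection along the segment between a detected target instance and a known undetected instance, assuming binary query access; it is a simplification for illustration, not the paper's algorithm.

```python
import numpy as np

def evade_by_bisection(classify, x_target, x_benign, tol=1e-4):
    """Find a near-boundary undetected point on the segment [x_target, x_benign].

    classify(x) -> True if x is detected (positive class).
    Assumes x_target is detected and x_benign is not, so the segment
    crosses the decision boundary.
    """
    lo, hi = 0.0, 1.0   # fraction of the way from x_target to x_benign
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        x = (1 - mid) * x_target + mid * x_benign
        if classify(x):
            lo = mid            # still detected: move toward x_benign
        else:
            hi = mid            # evading: tighten back toward the target
    return (1 - hi) * x_target + hi * x_benign
```

Each query halves the interval, so locating a boundary crossing to accuracy tol costs O(log(1/tol)) queries, in the spirit of the polynomial query bounds discussed above.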
cs.AI cs.LG cs.NE math.OC | null | 1007.0546 | null | null | http://arxiv.org/pdf/1007.0546v4 | 2013-07-13T22:59:26Z | 2010-07-04T12:18:56Z | Computational Model of Music Sight Reading: A Reinforcement Learning
Approach | Although the music sight reading process has been studied from cognitive
psychology viewpoints, computational learning methods such as reinforcement
learning have not yet been used to model such processes. In this paper, with
regard to the essential properties of our specific problem, we consider the
value function concept and show that the optimal policy can be obtained by the
method we offer without getting involved in computing complex value functions.
We also offer a normative behavioral model for the interaction of the agent
with the musical pitch environment, and by using a slightly different version
of partially observable Markov decision processes we show that our method
enables faster learning of state-action pairs in our implemented agents.
| [
"Keyvan Yahya, Pouyan Rafiei Fard",
"['Keyvan Yahya' 'Pouyan Rafiei Fard']"
] |
cs.LG cs.NE | null | 1007.0548 | null | null | http://arxiv.org/pdf/1007.0548v3 | 2011-11-18T20:11:34Z | 2010-07-04T12:37:13Z | A Reinforcement Learning Model Using Neural Networks for Music Sight
Reading Learning Problem | Music sight reading is a complex process: when it occurs in the brain, certain
learning attributes emerge. Besides giving a model based on the actor-critic
method in reinforcement learning, the agent is considered to have a neural
network structure. We study where the sight reading process happens and also a
serious problem, namely how the synaptic weights are adjusted through the
learning process. The model we offer here is a computational model accompanied
by an update equation for the weights.
| [
"Keyvan Yahya, Pouyan Rafiei Fard",
"['Keyvan Yahya' 'Pouyan Rafiei Fard']"
] |
stat.ML cs.LG math.ST stat.TH | null | 1007.0549 | null | null | http://arxiv.org/pdf/1007.0549v3 | 2011-09-28T18:14:13Z | 2010-07-04T13:11:40Z | Minimax Manifold Estimation | We find the minimax rate of convergence in Hausdorff distance for estimating
a manifold M of dimension d embedded in R^D given a noisy sample from the
manifold. We assume that the manifold satisfies a smoothness condition and that
the noise distribution has compact support. We show that the optimal rate of
convergence is n^{-2/(2+d)}. Thus, the minimax rate depends only on the
dimension of the manifold, not on the dimension of the space in which M is
embedded.
| [
"Christopher Genovese, Marco Perone-Pacifico, Isabella Verdinelli and\n Larry Wasserman",
"['Christopher Genovese' 'Marco Perone-Pacifico' 'Isabella Verdinelli'\n 'Larry Wasserman']"
] |
cs.LG | null | 1007.0660 | null | null | http://arxiv.org/pdf/1007.0660v1 | 2010-07-05T11:46:35Z | 2010-07-05T11:46:35Z | The Latent Bernoulli-Gauss Model for Data Analysis | We present a new latent-variable model employing a Gaussian mixture
integrated with a feature selection procedure (the Bernoulli part of the model)
which together form a "Latent Bernoulli-Gauss" distribution. The model is
applied to MAP estimation, clustering, feature selection and collaborative
filtering, and compares favorably with state-of-the-art latent-variable models.
| [
"Amnon Shashua, Gabi Pragier",
"['Amnon Shashua' 'Gabi Pragier']"
] |
cs.LG | null | 1007.0824 | null | null | http://arxiv.org/pdf/1007.0824v1 | 2010-07-06T07:47:00Z | 2010-07-06T07:47:00Z | Large Margin Filtering for Kernel-Based Sequential Labeling of
Signals | We address in this paper the problem of multi-channel signal sequence
labeling. In particular, we consider the problem where the signals are
contaminated by noise or may present some dephasing with respect to their
labels. For that, we propose to jointly learn an SVM sample classifier with a
temporal filtering of the channels. This will lead to a large margin filtering
that is adapted to the specificity of each channel (noise and time-lag). We
derive algorithms to solve the optimization problem and we discuss different
filter regularizations for automated scaling or selection of channels. Our
approach is tested on a non-linear toy example and on a BCI dataset. Results
show that the classification performance on these problems can be improved by
learning a large margin filtering.
| [
"['Rémi Flamary' 'Benjamin Labbé' 'Alain Rakotomamonjy']",
"R\\'emi Flamary (LITIS), Benjamin Labb\\'e (LITIS), Alain Rakotomamonjy\n (LITIS)"
] |
cs.LG | 10.1109/SBRN.2010.10 | 1007.1282 | null | null | http://arxiv.org/abs/1007.1282v1 | 2010-07-08T03:58:25Z | 2010-07-08T03:58:25Z | A note on sample complexity of learning binary output neural networks
under fixed input distributions | We show that the learning sample complexity of a sigmoidal neural network
constructed by Sontag (1992) required to achieve a given misclassification
error under a fixed purely atomic distribution can grow arbitrarily fast: for
any prescribed rate of growth there is an input distribution having this rate
as the sample complexity, and the bound is asymptotically tight. The rate can
be superexponential, a non-recursive function, etc. We further observe that
Sontag's ANN is not Glivenko-Cantelli under any input distribution having a
non-atomic part.
| [
"['Vladimir Pestov']",
"Vladimir Pestov"
] |
cs.LG | null | 1007.2049 | null | null | http://arxiv.org/pdf/1007.2049v1 | 2010-07-13T08:48:18Z | 2010-07-13T08:48:18Z | Reinforcement Learning via AIXI Approximation | This paper introduces a principled approach for the design of a scalable
general reinforcement learning agent. This approach is based on a direct
approximation of AIXI, a Bayesian optimality notion for general reinforcement
learning agents. Previously, it has been unclear whether the theory of AIXI
could motivate the design of practical algorithms. We answer this hitherto open
question in the affirmative, by providing the first computationally feasible
approximation to the AIXI agent. To develop our approximation, we introduce a
Monte Carlo Tree Search algorithm along with an agent-specific extension of the
Context Tree Weighting algorithm. Empirically, we present a set of encouraging
results on a number of stochastic, unknown, and partially observable domains.
| [
"Joel Veness, Kee Siong Ng, Marcus Hutter and David Silver",
"['Joel Veness' 'Kee Siong Ng' 'Marcus Hutter' 'David Silver']"
] |
cs.LG cs.IT math.IT | null | 1007.2075 | null | null | http://arxiv.org/pdf/1007.2075v1 | 2010-07-13T10:54:14Z | 2010-07-13T10:54:14Z | Consistency of Feature Markov Processes | We are studying long term sequence prediction (forecasting). We approach this
by investigating criteria for choosing a compact useful state representation.
The state is supposed to summarize useful information from the history. We want
a method that is asymptotically consistent in the sense that it will provably
eventually only choose between alternatives that satisfy an optimality property
related to the used criterion. We extend our work to the case where there is
side information that one can take advantage of and, furthermore, we briefly
discuss the active setting where an agent takes actions to achieve desirable
outcomes.
| [
"Peter Sunehag and Marcus Hutter",
"['Peter Sunehag' 'Marcus Hutter']"
] |
math.OC cs.LG | null | 1007.2238 | null | null | null | null | null | Online Algorithms for the Multi-Armed Bandit Problem with Markovian
Rewards | We consider the classical multi-armed bandit problem with Markovian rewards.
When played, an arm changes its state in a Markovian fashion, while it remains
frozen when not played. The player receives a state-dependent reward each time
it plays an arm. The number of states and the state transition probabilities of
an arm are unknown to the player. The player's objective is to maximize its
long-term total reward by learning the best arm over time. We show that under
certain conditions on the state transition probabilities of the arms, a sample
mean based index policy achieves logarithmic regret uniformly over the total
number of trials. The result shows that sample mean based index policies can be
applied to learning problems under the rested Markovian bandit model without
loss of optimality in the order. Moreover, comparison between Anantharam's
index policy and UCB shows that by choosing a small exploration parameter UCB
can have a smaller regret than Anantharam's index policy.
| [
"Cem Tekin, Mingyan Liu"
] |
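For a concrete picture of a sample-mean-based index policy, the sketch below implements UCB1 in the simpler i.i.d.-reward setting; the rested Markovian setting of the abstract requires a suitably chosen exploration constant, which is the point of the comparison above. All names are illustrative.

```python
import math

def ucb_play(arms, horizon):
    """Sample-mean index policy (UCB1).

    arms: list of zero-argument callables returning a random reward in [0, 1].
    """
    n = len(arms)
    counts = [0] * n
    means = [0.0] * n
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= n:
            i = t - 1            # play each arm once to initialize
        else:
            # Index = sample mean + exploration bonus.
            i = max(range(n), key=lambda k: means[k] +
                    math.sqrt(2.0 * math.log(t) / counts[k]))
        r = arms[i]()
        counts[i] += 1
        means[i] += (r - means[i]) / counts[i]   # incremental mean update
        total += r
    return total, counts
```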
cs.LG cs.AI | null | 1007.2449 | null | null | http://arxiv.org/pdf/1007.2449v1 | 2010-07-14T22:41:30Z | 2010-07-14T22:41:30Z | A Brief Introduction to Temporality and Causality | Causality is a non-obvious concept that is often considered to be related to
temporality. In this paper we present a number of past and present approaches
to the definition of temporality and causality from philosophical, physical,
and computational points of view. We note that time is an important ingredient
in many relationships and phenomena. The topic is then divided into the two
main areas of temporal discovery, which is concerned with finding relations
that are stretched over time, and causal discovery, where a claim is made as to
the causal influence of certain events on others. We present a number of
computational tools used for attempting to automatically discover temporal and
causal relations in data.
| [
"Kamran Karimi",
"['Kamran Karimi']"
] |
cs.CV cs.LG | null | 1007.2958 | null | null | http://arxiv.org/pdf/1007.2958v1 | 2010-07-17T19:59:11Z | 2010-07-17T19:59:11Z | A Machine Learning Approach to Recovery of Scene Geometry from Images | Recovering the 3D structure of the scene from images yields useful
information for tasks such as shape and scene recognition, object detection, or
motion planning and object grasping in robotics. In this thesis, we introduce a
general machine learning approach called unsupervised CRF learning based on
maximizing the conditional likelihood. We apply our approach to computer vision
systems that recover the 3-D scene geometry from images. We focus on recovering
3D geometry from single images, stereo pairs and video sequences. Building
these systems requires algorithms for doing inference as well as learning the
parameters of conditional Markov random fields (MRF). Our system is trained
in an unsupervised manner, without using ground-truth labeled data. We employ a
slanted-plane stereo vision model in which we use a fixed over-segmentation to
segment the left image into coherent regions called superpixels, then assign a
disparity plane for each superpixel. Plane parameters are estimated by solving
an MRF labelling problem, through minimizing an energy function. We demonstrate
the use of our unsupervised CRF learning algorithm for a parameterized
slanted-plane stereo vision model involving shape from texture cues. Our stereo
model with texture cues, only by unsupervised training, outperforms the results
in related work on the same stereo dataset. In this thesis, we also formulate
structure and motion estimation as an energy minimization problem, in which the
model is an extension of our slanted-plane stereo vision model that also
handles surface velocity. Velocity estimation is achieved by solving an MRF
labeling problem using Loopy BP. Performance analysis is done using our novel
evaluation metrics based on the notion of view prediction error. Experiments on
road-driving stereo sequences show encouraging results.
| [
"['Hoang Trinh']",
"Hoang Trinh"
] |
cs.LG stat.ML | 10.1007/s10618-010-0182-x | 1007.3564 | null | null | http://arxiv.org/abs/1007.3564v3 | 2010-07-27T03:01:09Z | 2010-07-21T05:50:47Z | Manifold Elastic Net: A Unified Framework for Sparse Dimension Reduction | It is difficult to find the optimal sparse solution of a manifold learning
based dimensionality reduction algorithm. The lasso or the elastic net
penalized manifold learning based dimensionality reduction is not directly a
lasso penalized least square problem and thus the least angle regression (LARS)
(Efron et al. \cite{LARS}), one of the most popular algorithms in sparse
learning, cannot be applied. Therefore, most current approaches take indirect
ways or have strict settings, which can be inconvenient for applications. In
this paper, we propose the manifold elastic net, or MEN for short. MEN
incorporates the merits of both the manifold learning based dimensionality
reduction and the sparse learning based dimensionality reduction. By using a
series of equivalent transformations, we show MEN is equivalent to the lasso
penalized least square problem and thus LARS is adopted to obtain the optimal
sparse solution of MEN. In particular, MEN has the following advantages for
subsequent classification: 1) the local geometry of samples is well preserved
for low dimensional data representation, 2) both the margin maximization and
the classification error minimization are considered for sparse projection
calculation, 3) the projection matrix of MEN improves the parsimony in
computation, 4) the elastic net penalty reduces the over-fitting problem, and
5) the projection matrix of MEN can be interpreted psychologically and
physiologically. Experimental evidence on face recognition over various popular
datasets suggests that MEN is superior to top level dimensionality reduction
algorithms.
| [
"Tianyi Zhou, Dacheng Tao, Xindong Wu",
"['Tianyi Zhou' 'Dacheng Tao' 'Xindong Wu']"
] |
stat.ML cs.LG stat.CO | null | 1007.3622 | null | null | http://arxiv.org/pdf/1007.3622v4 | 2013-04-16T11:58:17Z | 2010-07-21T11:44:30Z | A generalized risk approach to path inference based on hidden Markov
models | Motivated by the unceasing interest in hidden Markov models (HMMs), this
paper re-examines hidden path inference in these models, using primarily a
risk-based framework. While the most common maximum a posteriori (MAP), or
Viterbi, path estimator and the minimum error, or Posterior Decoder (PD), have
long been around, other path estimators, or decoders, have been either only
hinted at or applied more recently and in dedicated applications generally
unfamiliar to the statistical learning community. Over a decade ago, however, a
family of algorithmically defined decoders aiming to hybridize the two standard
ones was proposed (Brushe et al., 1998). The present paper gives a careful
analysis of this hybridization approach, identifies several problems and issues
with it and other previously proposed approaches, and proposes practical
resolutions of those. Furthermore, simple modifications of the classical
criteria for hidden path recognition are shown to lead to a new class of
decoders. Dynamic programming algorithms to compute these decoders in the usual
forward-backward manner are presented. A particularly interesting subclass of
such estimators can be also viewed as hybrids of the MAP and PD estimators.
Similar to previously proposed MAP-PD hybrids, the new class is parameterized
by a small number of tunable parameters. Unlike their algorithmic predecessors,
the new risk-based decoders are more clearly interpretable, and, most
importantly, work "out of the box" in practice, which is demonstrated on some
real bioinformatics tasks and data. Some further generalizations and
applications are discussed in conclusion.
| [
"['Jüri Lember' 'Alexey A. Koloydenko']",
"J\\\"uri Lember and Alexey A. Koloydenko"
] |
cs.LG | null | 1007.3799 | null | null | http://arxiv.org/pdf/1007.3799v1 | 2010-07-22T04:58:24Z | 2010-07-22T04:58:24Z | Adapting to the Shifting Intent of Search Queries | Search engines today present results that are often oblivious to abrupt
shifts in intent. For example, the query `independence day' usually refers to a
US holiday, but the intent of this query abruptly changed during the release of
a major film by that name. While no studies exactly quantify the magnitude of
intent-shifting traffic, studies suggest that news events, seasonal topics, pop
culture, etc. account for 50% of all search queries. This paper shows that the
signals a search engine receives can be used to both determine that a shift in
intent has happened, as well as find a result that is now more relevant. We
present a meta-algorithm that marries a classifier with a bandit algorithm to
achieve regret that depends logarithmically on the number of query impressions,
under certain assumptions. We provide strong evidence that this regret is close
to the best achievable. Finally, via a series of experiments, we demonstrate
that our algorithm outperforms prior approaches, particularly as the amount of
intent-shifting traffic increases.
| [
"['Umar Syed' 'Aleksandrs Slivkins' 'Nina Mishra']",
"Umar Syed and Aleksandrs Slivkins and Nina Mishra"
] |
cs.PL cs.AI cs.LG cs.LO | 10.1017/S1471068410000207 | 1007.3858 | null | null | http://arxiv.org/abs/1007.3858v1 | 2010-07-22T11:32:21Z | 2010-07-22T11:32:21Z | CHR(PRISM)-based Probabilistic Logic Learning | PRISM is an extension of Prolog with probabilistic predicates and built-in
support for expectation-maximization learning. Constraint Handling Rules (CHR)
is a high-level programming language based on multi-headed multiset rewrite
rules.
In this paper, we introduce a new probabilistic logic formalism, called
CHRiSM, based on a combination of CHR and PRISM. It can be used for high-level
rapid prototyping of complex statistical models by means of "chance rules". The
underlying PRISM system can then be used for several probabilistic inference
tasks, including probability computation and parameter learning. We define the
CHRiSM language in terms of syntax and operational semantics, and illustrate it
with examples. We define the notion of ambiguous programs and define a
distribution semantics for unambiguous programs. Next, we describe an
implementation of CHRiSM, based on CHR(PRISM). We discuss the relation between
CHRiSM and other probabilistic logic programming languages, in particular PCHR.
Finally we identify potential application domains.
| [
"Jon Sneyers, Wannes Meert, Joost Vennekens, Yoshitaka Kameya and\n Taisuke Sato",
"['Jon Sneyers' 'Wannes Meert' 'Joost Vennekens' 'Yoshitaka Kameya'\n 'Taisuke Sato']"
] |
cs.LG | 10.5121/ijaia.2010.1303 | 1007.5133 | null | null | http://arxiv.org/abs/1007.5133v1 | 2010-07-29T07:36:49Z | 2010-07-29T07:36:49Z | Comparison of Support Vector Machine and Back Propagation Neural Network
in Evaluating the Enterprise Financial Distress | Recently, applying novel data mining techniques to evaluating enterprise
financial distress has received much research attention. Support Vector
Machines (SVM) and back propagation neural (BPN) networks have been applied
successfully in many areas with excellent generalization results, such as rule
extraction, classification and evaluation. In this paper, a model based on SVM
with a Gaussian RBF kernel is proposed for enterprise financial distress
evaluation. The BPN network is considered one of the simplest and most general
methods used for supervised training of multilayered neural networks. The
comparative results show that, though the difference between the performance
measures is marginal, SVM gives higher precision and lower error rates.
| [
"Ming-Chang Lee (1) and Chang To (2) ((1) Fooyin University, Taiwan and\n (2) Shu-Te University, Taiwan)",
"['Ming-Chang Lee' 'Chang To']"
] |
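A minimal version of the comparison described above can be run with scikit-learn, assuming a synthetic stand-in for the financial-distress data: an RBF-kernel SVM against a small back-propagation network. This is a sketch of the experimental setup, not the paper's data or configuration.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic two-class data standing in for financial-distress records.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

svm = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
bpn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                    random_state=0).fit(X_tr, y_tr)

print("SVM accuracy:", svm.score(X_te, y_te))
print("BPN accuracy:", bpn.score(X_te, y_te))
```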
cs.LG | null | 1008.0336 | null | null | http://arxiv.org/pdf/1008.0336v1 | 2010-08-02T16:30:02Z | 2010-08-02T16:30:02Z | Close Clustering Based Automated Color Image Annotation | Most image-search approaches today are based on the text based tags
associated with the images, which are mostly human generated and subject to
various kinds of errors. The results of a query to the image database thus can
often be misleading and may not satisfy the requirements of the user. In this
work we propose our approach to automate this tagging process of images, where
image results generated can be fine filtered based on a probabilistic tagging
mechanism. We implement a tool which helps to automate the tagging process by
maintaining a training database, wherein the system is trained to identify
certain set of input images, the results generated from which are used to
create a probabilistic tagging mechanism. Given a certain set of segments in an
image it calculates the probability of presence of particular keywords. This
probability table is further used to generate the candidate tags for input
images.
| [
"Ankit Garg, Rahul Dwivedi, Krishna Asawa",
"['Ankit Garg' 'Rahul Dwivedi' 'Krishna Asawa']"
] |
cs.LG | null | 1008.0528 | null | null | http://arxiv.org/pdf/1008.0528v1 | 2010-08-03T12:10:40Z | 2010-08-03T12:10:40Z | Bounded Coordinate-Descent for Biological Sequence Classification in
High Dimensional Predictor Space | We present a framework for discriminative sequence classification where the
learner works directly in the high dimensional predictor space of all
subsequences in the training set. This is possible by employing a new
coordinate-descent algorithm coupled with bounding the magnitude of the
gradient for selecting discriminative subsequences fast. We characterize the
loss functions for which our generic learning algorithm can be applied and
present concrete implementations for logistic regression (binomial
log-likelihood loss) and support vector machines (squared hinge loss).
Application of our algorithm to protein remote homology detection and remote
fold recognition results in performance comparable to that of state-of-the-art
methods (e.g., kernel support vector machines). Unlike state-of-the-art
classifiers, the resulting classification models are simply lists of weighted
discriminative subsequences and can thus be interpreted and related to the
biological problem.
| [
"Georgiana Ifrim and Carsten Wiuf",
"['Georgiana Ifrim' 'Carsten Wiuf']"
] |
cs.LG | null | 1008.1398 | null | null | http://arxiv.org/pdf/1008.1398v1 | 2010-08-08T11:25:12Z | 2010-08-08T11:25:12Z | Semi-Supervised Kernel PCA | We present three generalisations of Kernel Principal Components Analysis
(KPCA) which incorporate knowledge of the class labels of a subset of the data
points. The first, MV-KPCA, penalises within class variances similar to Fisher
discriminant analysis. The second, LSKPCA is a hybrid of least squares
regression and kernel PCA. The final LR-KPCA is an iteratively reweighted
version of the previous which achieves a sigmoid loss function on the labeled
points. We provide a theoretical risk bound as well as illustrative experiments
on real and toy data sets.
| [
"['Christian Walder' 'Ricardo Henao' 'Morten Mørup' 'Lars Kai Hansen']",
"Christian Walder, Ricardo Henao, Morten M{\\o}rup, Lars Kai Hansen"
] |
cs.LG cs.AI | null | 1008.1566 | null | null | http://arxiv.org/pdf/1008.1566v5 | 2012-12-04T09:50:03Z | 2010-08-09T19:02:04Z | Separate Training for Conditional Random Fields Using Co-occurrence Rate
Factorization | The standard training method of Conditional Random Fields (CRFs) is very slow
for large-scale applications. As an alternative, piecewise training divides the
full graph into pieces, trains them independently, and combines the learned
weights at test time. In this paper, we present \emph{separate} training for
undirected models based on the novel Co-occurrence Rate Factorization (CR-F).
Separate training is a local training method. In contrast to MEMMs, separate
training is unaffected by the label bias problem. Experiments show that
separate training (i) is unaffected by the label bias problem; (ii) reduces the
training time from weeks to seconds; and (iii) obtains competitive results to
the standard and piecewise training on linear-chain CRFs.
| [
"Zhemin Zhu, Djoerd Hiemstra, Peter Apers, Andreas Wombacher",
"['Zhemin Zhu' 'Djoerd Hiemstra' 'Peter Apers' 'Andreas Wombacher']"
] |
cs.AI cs.LG | null | 1008.1643 | null | null | http://arxiv.org/pdf/1008.1643v2 | 2010-12-12T06:13:31Z | 2010-08-10T07:44:08Z | A Learning Algorithm based on High School Teaching Wisdom | A learning algorithm based on primary school teaching and learning is
presented. The methodology is to continuously evaluate a student and to give
them training on the examples for which they repeatedly fail, until they can
correctly answer all types of questions. This incremental learning procedure
produces better learning curves by demanding the student to optimally dedicate
their learning time on the failed examples. When used in machine learning, the
algorithm is found to train a machine on a data with maximum variance in the
feature space so that the generalization ability of the network improves. The
algorithm has interesting applications in data mining, model evaluations and
rare objects discovery.
| [
"Ninan Sajeeth Philip",
"['Ninan Sajeeth Philip']"
] |
cs.DS cs.DM cs.LG | null | 1008.2159 | null | null | http://arxiv.org/pdf/1008.2159v3 | 2012-08-22T02:04:42Z | 2010-08-12T16:15:47Z | Submodular Functions: Learnability, Structure, and Optimization | Submodular functions are discrete functions that model laws of diminishing
returns and enjoy numerous algorithmic applications. They have been used in
many areas, including combinatorial optimization, machine learning, and
economics. In this work we study submodular functions from a learning theoretic
angle. We provide algorithms for learning submodular functions, as well as
lower bounds on their learnability. In doing so, we uncover several novel
structural results revealing ways in which submodular functions can be both
surprisingly structured and surprisingly unstructured. We provide several
concrete implications of our work in other domains including algorithmic game
theory and combinatorial optimization.
At a technical level, this research combines ideas from many areas, including
learning theory (distributional learning and PAC-style analyses), combinatorics
and optimization (matroids and submodular functions), and pseudorandomness
(lossless expander graphs).
| [
"['Maria-Florina Balcan' 'Nicholas J. A. Harvey']",
"Maria-Florina Balcan and Nicholas J. A. Harvey"
] |
math.NA cs.CC cs.LG stat.ML | null | 1008.3043 | null | null | http://arxiv.org/pdf/1008.3043v2 | 2012-01-17T18:52:44Z | 2010-08-18T08:36:21Z | Learning Functions of Few Arbitrary Linear Parameters in High Dimensions | Let us assume that $f$ is a continuous function defined on the unit ball of
$\mathbb R^d$, of the form $f(x) = g (A x)$, where $A$ is a $k \times d$ matrix
and $g$ is a function of $k$ variables for $k \ll d$. We are given a budget $m
\in \mathbb N$ of possible point evaluations $f(x_i)$, $i=1,...,m$, of $f$,
which we are allowed to query in order to construct a uniform approximating
function. Under certain smoothness and variation assumptions on the function
$g$, and an {\it arbitrary} choice of the matrix $A$, we present in this paper
1. a sampling choice of the points $\{x_i\}$ drawn at random for each
function approximation;
2. algorithms (Algorithm 1 and Algorithm 2) for computing the approximating
function, whose complexity is at most polynomial in the dimension $d$ and in
the number $m$ of points.
Due to the arbitrariness of $A$, the choice of the sampling points will be
according to suitable random distributions and our results hold with
overwhelming probability. Our approach uses tools taken from the {\it
compressed sensing} framework, recent Chernoff bounds for sums of
positive-semidefinite matrices, and classical stability bounds for invariant
subspaces of singular value decompositions.
| [
"Massimo Fornasier, Karin Schnass, Jan Vybiral",
"['Massimo Fornasier' 'Karin Schnass' 'Jan Vybiral']"
] |
cs.DS cs.CC cs.LG | null | 1008.3187 | null | null | http://arxiv.org/pdf/1008.3187v1 | 2010-08-18T23:45:28Z | 2010-08-18T23:45:28Z | Polynomial-Time Approximation Schemes for Knapsack and Related Counting
Problems using Branching Programs | We give a deterministic, polynomial-time algorithm for approximately counting
the number of {0,1}-solutions to any instance of the knapsack problem. On an
instance of length n with total weight W and accuracy parameter eps, our
algorithm produces a (1 + eps)-multiplicative approximation in time poly(n,log
W,1/eps). We also give algorithms with identical guarantees for general integer
knapsack, the multidimensional knapsack problem (with a constant number of
constraints) and for contingency tables (with a constant number of rows).
Previously, only randomized approximation schemes were known for these problems
due to work by Morris and Sinclair and work by Dyer.
Our algorithms work by constructing small-width, read-once branching programs
for approximating the underlying solution space under a carefully chosen
distribution. As a byproduct of this approach, we obtain new query algorithms
for learning functions of k halfspaces with respect to the uniform distribution
on {0,1}^n. The running time of our algorithm is polynomial in the accuracy
parameter eps. Previously even for the case of k=2, only algorithms with an
exponential dependence on eps were known.
| [
"Parikshit Gopalan, Adam Klivans, Raghu Meka",
"['Parikshit Gopalan' 'Adam Klivans' 'Raghu Meka']"
] |
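For contrast with the approximation scheme above, exact counting of knapsack solutions is straightforward but only pseudo-polynomial: the classic dynamic program below runs in O(nW) time, which blows up when W is exponentially large in the input length; that gap is what a poly(n, log W, 1/eps) scheme closes. A small sanity check follows the function.

```python
def count_knapsack_solutions(weights, W):
    """Exact DP count of 0/1 vectors x with sum(w_i * x_i) <= W."""
    dp = [0] * (W + 1)   # dp[c] = number of subsets of total weight exactly c
    dp[0] = 1
    for w in weights:
        for c in range(W, w - 1, -1):   # iterate downward: each item used once
            dp[c] += dp[c - w]
    return sum(dp)

# Subsets of {1, 2, 3} with weight <= 3: {}, {1}, {2}, {3}, {1,2}.
assert count_knapsack_solutions([1, 2, 3], 3) == 5
```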
cs.LO cs.LG stat.ML | null | 1008.3585 | null | null | http://arxiv.org/pdf/1008.3585v1 | 2010-08-20T23:07:54Z | 2010-08-20T23:07:54Z | Ultrametric and Generalized Ultrametric in Computational Logic and in
Data Analysis | Following a review of metric, ultrametric and generalized ultrametric, we
review their application in data analysis. We show how they allow us to explore
both geometry and topology of information, starting with measured data. Some
themes are then developed based on the use of metric, ultrametric and
generalized ultrametric in logic. In particular we study approximation chains
in an ultrametric or generalized ultrametric context. Our aim in this work is
to extend the scope of data analysis by facilitating reasoning based on the
data analysis; and to show how quantitative and qualitative data analysis can
be incorporated into logic programming.
| [
"Fionn Murtagh",
"['Fionn Murtagh']"
] |
q-fin.PM cond-mat.stat-mech cs.LG math.OC q-fin.RM | 10.1371/journal.pone.0134968 | 1008.3746 | null | null | http://arxiv.org/abs/1008.3746v2 | 2010-09-09T04:00:01Z | 2010-08-23T04:20:37Z | Belief Propagation Algorithm for Portfolio Optimization Problems | The typical behavior of optimal solutions to portfolio optimization problems
with absolute deviation and expected shortfall models using replica analysis
was first estimated by S. Ciliberti and M. M\'ezard [Eur. Phys. B. 57,
175 (2007)]; however, they have not yet developed an approximate derivation
method for finding the optimal portfolio with respect to a given return set. In
this study, an approximation algorithm based on belief propagation for the
portfolio optimization problem is presented using the Bethe free energy
formalism, and the consistency of the numerical experimental results of the
proposed algorithm with those of replica analysis is confirmed. Furthermore,
the conjecture of H. Konno and H. Yamazaki, that the optimal solutions with the
absolute deviation model and with the mean-variance model have the same typical
behavior, is verified using replica analysis and the belief propagation
algorithm.
| [
"Takashi Shinzato and Muneki Yasuda",
"['Takashi Shinzato' 'Muneki Yasuda']"
] |
cs.GT cs.AI cs.LG | 10.1007/s10472-013-9358-6 | 1008.3829 | null | null | http://arxiv.org/abs/1008.3829v3 | 2011-11-22T20:58:26Z | 2010-08-23T14:26:46Z | Approximate Judgement Aggregation | In this paper we analyze judgement aggregation problems in which a group of
agents independently votes on a set of complex propositions that has some
interdependency constraint between them (e.g., transitivity when describing
preferences). We consider the issue of judgement aggregation from the
perspective of approximation. That is, we generalize the previous results by
studying approximate judgement aggregation. We relax the main two constraints
assumed in the current literature, Consistency and Independence and consider
mechanisms that only approximately satisfy these constraints, that is, satisfy
them up to a small portion of the inputs. The main question we raise is whether
the relaxation of these notions significantly alters the class of satisfying
aggregation mechanisms. The recent works for preference aggregation of Kalai,
Mossel, and Keller fit into this framework. The main result of this paper is
that, as in the case of preference aggregation, in the case of a subclass of a
natural class of aggregation problems termed `truth-functional agendas', the
set of satisfying aggregation mechanisms does not extend non-trivially when
relaxing the constraints. Our proof techniques involve Boolean Fourier
transform and analysis of voter influences for voting protocols. The question
we raise for Approximate Aggregation can be stated in terms of Property
Testing. For instance, as a corollary from our result we get a generalization
of the classic result for property testing of linearity of Boolean functions.
An updated version (RePEc:huj:dispap:dp574R) is available at
http://www.ratio.huji.ac.il/dp_files/dp574R.pdf
| [
"['Ilan Nehama']",
"Ilan Nehama"
] |
cs.LG stat.ML | null | 1008.4000 | null | null | http://arxiv.org/pdf/1008.4000v1 | 2010-08-24T10:02:01Z | 2010-08-24T10:02:01Z | NESVM: a Fast Gradient Method for Support Vector Machines | Support vector machines (SVMs) are invaluable tools for many practical
applications in artificial intelligence, e.g., classification and event
recognition. However, popular SVM solvers are not sufficiently efficient for
applications with a large number of samples as well as a large number of
features. In this paper, thus, we present NESVM, a fast gradient SVM solver
that can optimize various SVM models, e.g., classical SVM, linear programming
SVM and least square SVM. Compared against SVM-Perf
\cite{SVM_Perf}\cite{PerfML} (its convergence rate in solving the dual SVM is
upper bounded by $\mathcal O(1/\sqrt{k})$, wherein $k$ is the number of
iterations.) and Pegasos \cite{Pegasos} (online SVM that converges at rate
$\mathcal O(1/k)$ for the primal SVM), NESVM achieves the optimal convergence
rate at $\mathcal O(1/k^{2})$ and a linear time complexity. In particular,
NESVM smoothes the non-differentiable hinge loss and $\ell_1$-norm in the
primal SVM. Then the optimal gradient method without any line search is adopted
to solve the optimization. In each iteration round, the current gradient and
historical gradients are combined to determine the descent direction, while the
Lipschitz constant determines the step size. Only two matrix-vector
multiplications are required in each iteration round. Therefore, NESVM is more
efficient than existing SVM solvers. In addition, NESVM is available for both
linear and nonlinear kernels. We also propose "homotopy NESVM" to accelerate
NESVM by dynamically decreasing the smooth parameter and using the continuation
method. Our experiments on census income categorization, indoor/outdoor scene
classification, event recognition and scene recognition suggest the efficiency
and the effectiveness of NESVM. The MATLAB code of NESVM will be available on
our website for further assessment.
| [
"Tianyi Zhou, Dacheng Tao, Xindong Wu",
"['Tianyi Zhou' 'Dacheng Tao' 'Xindong Wu']"
] |
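The two ingredients named in the abstract, smoothing the non-differentiable hinge loss and an optimal gradient method with momentum, can be sketched as follows. This is a simplified illustration with a Huber-style smoothed hinge and plain Nesterov acceleration, not the NESVM code; the constants and step sizes are illustrative.

```python
import numpy as np

def smoothed_hinge_grad(w, X, y, mu):
    """Mean value and gradient of the smoothed hinge loss (margin z = y * Xw)."""
    z = y * (X @ w)
    loss = np.where(z >= 1, 0.0,
           np.where(z <= 1 - mu, 1 - z - mu / 2, (1 - z) ** 2 / (2 * mu)))
    dz = np.where(z >= 1, 0.0, np.where(z <= 1 - mu, -1.0, (z - 1) / mu))
    grad = X.T @ (dz * y) / len(y)
    return loss.mean(), grad

def nesterov_svm(X, y, mu=0.1, lam=0.01, iters=300):
    """Accelerated gradient descent on smoothed hinge + (lam/2)||w||^2."""
    n, d = X.shape
    L = np.linalg.norm(X, 2) ** 2 / (mu * n) + lam   # Lipschitz bound on grad
    w = v = np.zeros(d)
    for t in range(iters):
        _, g = smoothed_hinge_grad(v, X, y, mu)
        w_next = v - (g + lam * v) / L               # gradient step, no line search
        v = w_next + t / (t + 3) * (w_next - w)      # momentum extrapolation
        w = w_next
    return w
```

Smoothing makes the objective differentiable with a 1/mu-Lipschitz gradient, which is what allows the accelerated O(1/k^2) rate cited above.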
cs.LG math.OC stat.ML | null | 1008.4220 | null | null | http://arxiv.org/pdf/1008.4220v3 | 2010-11-12T14:51:23Z | 2010-08-25T07:28:08Z | Structured sparsity-inducing norms through submodular functions | Sparse methods for supervised learning aim at finding good linear predictors
from as few variables as possible, i.e., with small cardinality of their
supports. This combinatorial selection problem is often turned into a convex
optimization problem by replacing the cardinality function by its convex
envelope (tightest convex lower bound), in this case the L1-norm. In this
paper, we investigate more general set-functions than the cardinality, that may
incorporate prior knowledge or structural constraints which are common in many
applications: namely, we show that for nondecreasing submodular set-functions,
the corresponding convex envelope can be obtained from its Lov\'asz extension, a
common tool in submodular analysis. This defines a family of polyhedral norms,
for which we provide generic algorithmic tools (subgradients and proximal
operators) and theoretical results (conditions for support recovery or
high-dimensional inference). By selecting specific submodular functions, we can
give a new interpretation to known norms, such as those based on
rank-statistics or grouped norms with potentially overlapping groups; we also
define new norms, in particular ones that can be used as non-factorial priors
for supervised learning.
| [
"['Francis Bach']",
"Francis Bach (INRIA Rocquencourt, LIENS)"
] |
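The Lov\'asz extension underlying the norms above has a simple closed form: sort the coordinates in decreasing order and charge each one the marginal gain of the set function. The sketch below computes it for an arbitrary set function with F(empty) = 0; applying it to |w| for a nondecreasing submodular F yields the associated norm, and the cardinality function recovers the l1-norm, as in the example.

```python
import numpy as np

def lovasz_extension(F, w):
    """Lovász extension of a set function F at point w (assumes F(set()) == 0)."""
    order = np.argsort(-w)          # coordinates in decreasing order
    value, prev = 0.0, 0.0
    S = set()
    for j in order:
        S.add(int(j))
        FS = F(S)
        value += w[j] * (FS - prev)  # charge w[j] the marginal gain of adding j
        prev = FS
    return value

# Cardinality F(S) = |S| applied to |w| gives the l1-norm.
card = lambda S: len(S)
w = np.array([0.5, -2.0, 1.5])
print(lovasz_extension(card, np.abs(w)))  # = ||w||_1 = 4.0
```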
cs.LG | null | 1008.4232 | null | null | http://arxiv.org/pdf/1008.4232v1 | 2010-08-25T09:09:29Z | 2010-08-25T09:09:29Z | Online Learning in Case of Unbounded Losses Using the Follow Perturbed
Leader Algorithm | In this paper the sequential prediction problem with expert advice is
considered for the case where losses of experts suffered at each step cannot be
bounded in advance. We present a modification of the Kalai and Vempala
algorithm of following the perturbed leader, where weights depend on past losses of the
experts. New notions of a volume and a scaled fluctuation of a game are
introduced. We present a probabilistic algorithm protected from unrestrictedly
large one-step losses. This algorithm has the optimal performance in the case
when the scaled fluctuations of one-step losses of experts of the pool tend to
zero.
| [
"[\"Vladimir V. V'yugin\"]",
"Vladimir V. V'yugin"
] |
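As a baseline for the modification described above, here is the basic follow-the-perturbed-leader scheme with a fixed perturbation scale; the paper's point is precisely that this scale should instead adapt to the volume and scaled fluctuation of past losses when one-step losses are unbounded. A minimal sketch with illustrative names.

```python
import random

def follow_perturbed_leader(loss_rounds, n_experts, epsilon=1.0):
    """Kalai-Vempala FPL: follow the leader on cumulative losses
    perturbed by fresh exponential noise each round.

    loss_rounds: iterable of per-round loss vectors (one loss per expert).
    """
    cum = [0.0] * n_experts
    total = 0.0
    for losses in loss_rounds:
        perturbed = [cum[i] - random.expovariate(epsilon)
                     for i in range(n_experts)]
        leader = min(range(n_experts), key=lambda i: perturbed[i])
        total += losses[leader]
        for i in range(n_experts):
            cum[i] += losses[i]
        # The paper's variant would adapt epsilon here using the observed
        # volume of past losses, to guard against huge one-step losses.
    return total
```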
cs.MM cs.LG cs.SY | null | 1008.4406 | null | null | http://arxiv.org/pdf/1008.4406v1 | 2010-08-25T23:06:39Z | 2010-08-25T23:06:39Z | Structural Solutions to Dynamic Scheduling for Multimedia Transmission
in Unknown Wireless Environments | In this paper, we propose a systematic solution to the problem of scheduling
delay-sensitive media data for transmission over time-varying wireless
channels. We first formulate the dynamic scheduling problem as a Markov
decision process (MDP) that explicitly considers the users' heterogeneous
multimedia data characteristics (e.g. delay deadlines, distortion impacts and
dependencies etc.) and time-varying channel conditions, which are not
simultaneously considered in state-of-the-art packet scheduling algorithms.
This formulation allows us to perform foresighted decisions to schedule
multiple data units for transmission at each time in order to optimize the
long-term utilities of the multimedia applications. The heterogeneity of the
media data enables us to express the transmission priorities between the
different data units as a priority graph, which is a directed acyclic graph
(DAG). This priority graph provides us with an elegant structure to decompose
the multi-data unit foresighted decision at each time into multiple single-data
unit foresighted decisions which can be performed sequentially, from the high
priority data units to the low priority data units, thereby significantly
reducing the computation complexity. When the statistical knowledge of the
multimedia data characteristics and channel conditions is unknown a priori, we
develop a low-complexity online learning algorithm to update the value
functions which capture the impact of the current decision on the future
utility. The simulation results show that the proposed solution significantly
outperforms existing state-of-the-art scheduling solutions.
| [
"Fangwen Fu, and Mihaela van der Schaar",
"['Fangwen Fu' 'Mihaela van der Schaar']"
] |
cs.LG | null | 1008.4532 | null | null | http://arxiv.org/pdf/1008.4532v1 | 2010-08-26T15:36:22Z | 2010-08-26T15:36:22Z | Switching between Hidden Markov Models using Fixed Share | In prediction with expert advice the goal is to design online prediction
algorithms that achieve small regret (additional loss on the whole data)
compared to a reference scheme. In the simplest such scheme one compares to the
loss of the best expert in hindsight. A more ambitious goal is to split the
data into segments and compare to the best expert on each segment. This is
appropriate if the nature of the data changes between segments. The standard
fixed-share algorithm is fast and achieves small regret compared to this
scheme.
Fixed share treats the experts as black boxes: there are no assumptions about
how they generate their predictions. But if the experts are learning, the
following question arises: should the experts learn from all data or only from
data in their own segment? The original algorithm naturally addresses the first
case. Here we consider the second option, which is more appropriate exactly
when the nature of the data changes between segments. In general extending
fixed share to this second case will slow it down by a factor of T on T
outcomes. We show, however, that no such slowdown is necessary if the experts
are hidden Markov models.
| [
"['Wouter M. Koolen' 'Tim van Erven']",
"Wouter M. Koolen and Tim van Erven"
] |
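The standard fixed-share update that the abstract takes as its starting point interleaves an exponential-weights loss update with a redistribution step that moves a fraction alpha of the mass uniformly across experts. The sketch below treats experts as black boxes, i.e., the first of the two interpretations discussed above; parameter values are illustrative.

```python
import numpy as np

def fixed_share(predictions, outcomes, loss, alpha=0.05, eta=1.0):
    """Fixed-share mixture over experts with switching rate alpha.

    predictions: (T, n) array, predictions[t, i] = expert i's prediction
    outcomes:    length-T sequence of observed outcomes
    loss(p, y):  per-round loss function
    """
    T, n = predictions.shape
    w = np.full(n, 1.0 / n)
    total = 0.0
    for t in range(T):
        pred = w @ predictions[t]           # weighted-average prediction
        total += loss(pred, outcomes[t])
        # Exponential-weights loss update ...
        w *= np.exp(-eta * np.array([loss(predictions[t, i], outcomes[t])
                                     for i in range(n)]))
        w /= w.sum()
        # ... followed by the fixed-share redistribution step.
        w = (1 - alpha) * w + alpha / n
    return total
```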
cs.LG | null | 1008.4654 | null | null | http://arxiv.org/pdf/1008.4654v1 | 2010-08-27T06:53:28Z | 2010-08-27T06:53:28Z | Freezing and Sleeping: Tracking Experts that Learn by Evolving Past
Posteriors | A problem posed by Freund is how to efficiently track a small pool of experts
out of a much larger set. This problem was solved when Bousquet and Warmuth
introduced their mixing past posteriors (MPP) algorithm in 2001.
In Freund's problem the experts would normally be considered black boxes.
However, in this paper we re-examine Freund's problem in case the experts have
internal structure that enables them to learn. In this case the problem has two
possible interpretations: should the experts learn from all data or only from
the subsequence on which they are being tracked? The MPP algorithm solves the
first case. Our contribution is to generalise MPP to address the second option.
The results we obtain apply to any expert structure that can be formalised
using (expert) hidden Markov models. Curiously enough, for our interpretation
there are \emph{two} natural reference schemes: freezing and sleeping. For each
scheme, we provide an efficient prediction strategy and prove the relevant loss
bound.
| [
"['Wouter M. Koolen' 'Tim van Erven']",
"Wouter M. Koolen and Tim van Erven"
] |
cs.IR cs.LG | null | 1008.4669 | null | null | http://arxiv.org/pdf/1008.4669v1 | 2010-08-27T09:06:29Z | 2010-08-27T09:06:29Z | An Architecture of Active Learning SVMs with Relevance Feedback for
Classifying E-mail | In this paper, we have proposed an architecture of active learning SVMs with
relevance feedback (RF) for classifying e-mail. This architecture combines
active learning, where instead of using a randomly selected training set the
learner has access to a pool of unlabeled instances and can request the labels
of some number of them, with relevance feedback, where if any mail is
misclassified then the next set of support vectors differs from the present
set, and otherwise the next set is unchanged. Our proposed architecture
ensures that a legitimate e-mail will not be dropped in the event of an
overflowing mailbox. The architecture also exhibits dynamic updating
characteristics, making life as difficult for the spammer as possible.
| [
"['Md. Saiful Islam' 'Md. Iftekharul Amin']",
"Md. Saiful Islam and Md. Iftekharul Amin"
] |
stat.ML cs.LG physics.comp-ph physics.data-an | 10.1063/1.3573612 | 1008.4973 | null | null | http://arxiv.org/abs/1008.4973v1 | 2010-08-29T23:37:19Z | 2010-08-29T23:37:19Z | Entropy-Based Search Algorithm for Experimental Design | The scientific method relies on the iterated processes of inference and
inquiry. The inference phase consists of selecting the most probable models
based on the available data; whereas the inquiry phase consists of using what
is known about the models to select the most relevant experiment. Optimizing
inquiry involves searching the parameterized space of experiments to select the
experiment that promises, on average, to be maximally informative. In the case
where it is important to learn about each of the model parameters, the
relevance of an experiment is quantified by Shannon entropy of the distribution
of experimental outcomes predicted by a probable set of models. If the set of
potential experiments is described by many parameters, we must search this
high-dimensional entropy space. Brute force search methods will be slow and
computationally expensive. We present an entropy-based search algorithm, called
nested entropy sampling, to select the most informative experiment for
efficient experimental design. This algorithm is inspired by Skilling's nested
sampling algorithm used in inference and borrows the concept of a rising
threshold while a set of experiment samples are maintained. We demonstrate that
this algorithm not only selects highly relevant experiments, but also is more
efficient than brute force search. Such entropic search techniques promise to
greatly benefit autonomous experimental design.
| [
"N. K. Malakar and K. H. Knuth",
"['N. K. Malakar' 'K. H. Knuth']"
] |
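The quantity being searched over, the Shannon entropy of the outcome distribution predicted by a posterior-weighted set of models, can be evaluated directly; the brute-force selector below is the slow baseline that nested entropy sampling is designed to beat. Function names and the discrete-outcome assumption are illustrative.

```python
import numpy as np

def most_informative_experiment(experiments, models, weights, predict):
    """Pick the experiment whose predicted outcome distribution has
    maximum Shannon entropy under the current posterior over models.

    predict(model, e) -> hashable outcome that model predicts for e;
    weights are (normalized) posterior model weights.
    """
    best_e, best_h = None, -np.inf
    for e in experiments:
        # Posterior-weighted distribution over predicted outcomes.
        probs = {}
        for model, wgt in zip(models, weights):
            o = predict(model, e)
            probs[o] = probs.get(o, 0.0) + wgt
        p = np.array(list(probs.values()))
        h = -np.sum(p * np.log(p + 1e-12))
        if h > best_h:
            best_e, best_h = e, h
    return best_e, best_h
```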
cs.IT cs.AI cs.LG math.IT | null | 1008.5078 | null | null | http://arxiv.org/pdf/1008.5078v1 | 2010-08-30T13:21:49Z | 2010-08-30T13:21:49Z | Prediction by Compression | It is well known that text compression can be achieved by predicting the next
symbol in the stream of text data based on the history seen up to the current
symbol. The better the prediction the more skewed the conditional probability
distribution of the next symbol and the shorter the codeword that needs to be
assigned to represent this next symbol. What about the opposite direction?
Suppose we have a black box that can compress a text stream. Can it be used to
predict the next symbol in the stream? We introduce a criterion based on the
length of the compressed data and use it to predict the next symbol. We examine
empirically the prediction error rate and its dependency on some compression
parameters.
| [
"Joel Ratsaby",
"['Joel Ratsaby']"
] |
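The criterion in the abstract, predicting the symbol whose continuation compresses best, can be tried in a few lines with an off-the-shelf compressor. A minimal sketch using zlib, noting that byte-granular codeword lengths make predictions from very short histories unreliable.

```python
import zlib

def predict_next(history: bytes, alphabet=bytes(range(256))) -> int:
    """Predict the next byte as the one minimizing the compressed length
    of history + candidate (ties resolved by the min over the alphabet)."""
    return min(alphabet,
               key=lambda s: len(zlib.compress(history + bytes([s]), 9)))

# On a repetitive stream the compressor should tend to favor continuing
# the pattern (here 'c'), though coarse codeword lengths can blur
# predictions when the history is short.
print(chr(predict_next(b"abc" * 20 + b"ab")))
```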
cs.LG math.OC stat.CO stat.ML | 10.1109/TNN.2011.2164096 | 1008.5090 | null | null | http://arxiv.org/abs/1008.5090v1 | 2010-08-30T14:39:57Z | 2010-08-30T14:39:57Z | Fixed-point and coordinate descent algorithms for regularized kernel
methods | In this paper, we study two general classes of optimization algorithms for
kernel methods with convex loss function and quadratic norm regularization, and
analyze their convergence. The first approach, based on fixed-point iterations,
is simple to implement and analyze, and can be easily parallelized. The second,
based on coordinate descent, exploits the structure of additively separable
loss functions to compute solutions of line searches in closed form. Instances
of these general classes of algorithms are already incorporated into state of
the art machine learning software for large scale problems. We start from a
solution characterization of the regularized problem, obtained using
sub-differential calculus and resolvents of monotone operators, that holds for
general convex loss functions regardless of differentiability. The two
methodologies described in the paper can be regarded as instances of non-linear
Jacobi and Gauss-Seidel algorithms, and are both well-suited to solve large
scale problems.
| [
"['Francesco Dinuzzo']",
"Francesco Dinuzzo"
] |
cs.DS cs.LG | 10.1016/j.jda.2011.10.002 | 1008.5105 | null | null | http://arxiv.org/abs/1008.5105v5 | 2011-05-21T20:48:26Z | 2010-08-30T16:09:24Z | Indexability, concentration, and VC theory | Degrading performance of indexing schemes for exact similarity search in high
dimensions has long since been linked to histograms of distributions of
distances and other 1-Lipschitz functions getting concentrated. We discuss this
observation in the framework of the phenomenon of concentration of measure on
the structures of high dimension and the Vapnik-Chervonenkis theory of
statistical learning.
| [
"['Vladimir Pestov']",
"Vladimir Pestov"
] |
cs.LG cs.AI cs.AR | 10.1109/TFUZZ.2011.2160024 | 1008.5133 | null | null | http://arxiv.org/abs/1008.5133v2 | 2010-09-02T15:56:15Z | 2010-08-22T16:44:23Z | Memristor Crossbar-based Hardware Implementation of IDS Method | Ink Drop Spread (IDS) is the engine of Active Learning Method (ALM), which is
the methodology of soft computing. IDS, as a pattern-based processing unit,
extracts useful information from a system subjected to modeling. In spite of
its excellent potential in solving problems such as classification and modeling
compared to other soft computing tools, finding its simple and fast hardware
implementation is still a challenge. This paper describes a new hardware
implementation of IDS method based on the memristor crossbar structure. In
addition of simplicity, being completely real-time, having low latency and the
ability to continue working after the occurrence of power breakdown are some of
the advantages of our proposed circuit.
| [
"Farnood Merrikh-Bayat, Saeed Bagheri-Shouraki, and Ali Rohani",
"['Farnood Merrikh-Bayat' 'Saeed Bagheri-Shouraki' 'Ali Rohani']"
] |
math.OC cs.LG | null | 1008.5204 | null | null | http://arxiv.org/pdf/1008.5204v2 | 2011-06-30T18:14:40Z | 2010-08-31T02:42:32Z | A Smoothing Stochastic Gradient Method for Composite Optimization | We consider the unconstrained optimization problem whose objective function
is composed of a smooth and a non-smooth component, where the smooth component
is the expectation of a random function. This type of problem arises in some
interesting applications in machine learning. We propose a stochastic gradient
descent algorithm for this class of optimization problem. When the non-smooth
component has a particular structure, we propose another stochastic gradient
descent algorithm by incorporating a smoothing method into our first algorithm.
The proofs of the convergence rates of these two algorithms are given and we
show the numerical performance of our algorithm by applying them to regularized
linear regression problems with different sets of synthetic data.
| [
"['Qihang Lin' 'Xi Chen' 'Javier Pena']",
"Qihang Lin, Xi Chen and Javier Pena"
] |
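The paper's second algorithm smooths a structured non-smooth component, but the simplest instance of the setting above, l1-regularized least squares, already illustrates the stochastic composite scheme: a stochastic gradient step on the smooth part followed by the proximal map of the non-smooth part. A minimal sketch with illustrative step sizes, not the paper's algorithm.

```python
import numpy as np

def prox_sgd_lasso(A, b, lam=0.1, step0=1.0, epochs=20, seed=0):
    """Stochastic proximal gradient for min_x (1/2) E (a_i.x - b_i)^2 + lam*||x||_1.

    One random row per step; the non-smooth l1 term is handled by its
    proximal map (soft thresholding) rather than by smoothing.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            step = step0 / np.sqrt(t)                 # decaying step size
            g = (A[i] @ x - b[i]) * A[i]              # stochastic gradient
            z = x - step * g
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox
    return x
```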
cs.LG stat.ML | null | 1008.5209 | null | null | http://arxiv.org/pdf/1008.5209v1 | 2010-08-31T03:39:49Z | 2010-08-31T03:39:49Z | Network Flow Algorithms for Structured Sparsity | We consider a class of learning problems that involve a structured
sparsity-inducing norm defined as the sum of $\ell_\infty$-norms over groups of
variables. Whereas a lot of effort has been put in developing fast optimization
methods when the groups are disjoint or embedded in a specific hierarchical
structure, we address here the case of general overlapping groups. To this end,
we show that the corresponding optimization problem is related to network flow
optimization. More precisely, the proximal problem associated with the norm we
consider is dual to a quadratic min-cost flow problem. We propose an efficient
procedure which computes its solution exactly in polynomial time. Our algorithm
scales up to millions of variables, and opens up a whole new range of
applications for structured sparse models. We present several experiments on
image and video data, demonstrating the applicability and scalability of our
approach for various problems.
| [
"Julien Mairal (INRIA Rocquencourt, LIENS), Rodolphe Jenatton (INRIA\n Rocquencourt, LIENS), Guillaume Obozinski (INRIA Rocquencourt, LIENS),\n Francis Bach (INRIA Rocquencourt, LIENS)",
"['Julien Mairal' 'Rodolphe Jenatton' 'Guillaume Obozinski' 'Francis Bach']"
] |
math.OC cs.IT cs.LG math.IT | null | 1008.5231 | null | null | http://arxiv.org/pdf/1008.5231v3 | 2011-08-05T16:03:03Z | 2010-08-31T07:07:27Z | The adaptive projected subgradient method constrained by families of
quasi-nonexpansive mappings and its application to online learning | Many online, i.e., time-adaptive, inverse problems in signal processing and
machine learning fall under the wide umbrella of the asymptotic minimization of
a sequence of non-negative, convex, and continuous functions. To incorporate
a-priori knowledge into the design, the asymptotic minimization task is usually
constrained on a fixed closed convex set, which is dictated by the available
a-priori information. To increase versatility towards the usage of the
available information, the present manuscript extends the Adaptive Projected
Subgradient Method (APSM) by introducing an algorithmic scheme which
incorporates a-priori knowledge in the design via a sequence of strongly
attracting quasi-nonexpansive mappings in a real Hilbert space. In such a way,
the benefits offered to online learning tasks by the proposed method unfold in
two ways: 1) the rich class of quasi-nonexpansive mappings provides a plethora
of ways to cast a-priori knowledge, and 2) by introducing a sequence of such
mappings, the proposed scheme is able to capture the time-varying nature of
a-priori information. The convergence properties of the algorithm are studied,
several special cases of the method with wide applicability are shown, and the
potential of the proposed scheme is demonstrated by considering an increasingly
important, nowadays, online sparse system/signal recovery task.
| [
"Konstantinos Slavakis and Isao Yamada",
"['Konstantinos Slavakis' 'Isao Yamada']"
] |
cs.LG cs.IT math.IT | null | 1008.5325 | null | null | http://arxiv.org/pdf/1008.5325v4 | 2011-03-21T15:54:54Z | 2010-08-31T14:31:57Z | Inference with Multivariate Heavy-Tails in Linear Models | Heavy-tailed distributions naturally occur in many real life problems.
Unfortunately, it is typically not possible to compute inference in closed-form
in graphical models which involve such heavy-tailed distributions.
In this work, we propose a novel simple linear graphical model for
independent latent random variables, called linear characteristic model (LCM),
defined in the characteristic function domain. Using stable distributions, a
heavy-tailed family of distributions which is a generalization of Cauchy,
L\'evy and Gaussian distributions, we show for the first time, how to compute
both exact and approximate inference in such a linear multivariate graphical
model. LCMs are not limited to stable distributions, in fact LCMs are always
defined for any random variables (discrete, continuous or a mixture of both).
We provide a realistic problem from the field of computer networks to
demonstrate the applicability of our construction. Another potential application
is iterative decoding of linear channels with non-Gaussian noise.
| [
"Danny Bickson and Carlos Guestrin",
"['Danny Bickson' 'Carlos Guestrin']"
] |
math.OC cs.CV cs.IT cs.LG cs.NA math.IT stat.ME | null | 1008.5372 | null | null | http://arxiv.org/pdf/1008.5372v2 | 2012-05-11T17:12:02Z | 2010-08-31T17:24:31Z | Penalty Decomposition Methods for $L0$-Norm Minimization | In this paper we consider general l0-norm minimization problems, that is, the
problems with the l0-norm appearing in either the objective function or a constraint. In
particular, we first reformulate the l0-norm constrained problem as an
equivalent rank minimization problem and then apply the penalty decomposition
(PD) method proposed in [33] to solve the latter problem. By utilizing the
special structures, we then transform all matrix operations of this method to
vector operations and obtain a PD method that only involves vector operations.
Under some suitable assumptions, we establish that any accumulation point of
the sequence generated by the PD method satisfies a first-order optimality
condition that is generally stronger than one natural optimality condition. We
further extend the PD method to solve the problem with the l0-norm appearing
in the objective function. Finally, we test the performance of our PD methods by
applying them to compressed sensing, sparse logistic regression and sparse
inverse covariance selection. The computational results demonstrate that our
methods generally outperform the existing methods in terms of solution quality
and/or speed.
| [
"['Zhaosong Lu' 'Yong Zhang']",
"Zhaosong Lu and Yong Zhang"
] |
math.OC cs.LG cs.NA cs.SY q-fin.CP q-fin.ST | null | 1008.5373 | null | null | http://arxiv.org/pdf/1008.5373v4 | 2012-05-29T16:08:51Z | 2010-08-31T17:25:01Z | Penalty Decomposition Methods for Rank Minimization | In this paper we consider general rank minimization problems with rank
appearing in either the objective function or a constraint. We first establish
that a
class of special rank minimization problems has closed-form solutions. Using
this result, we then propose penalty decomposition methods for general rank
minimization problems in which each subproblem is solved by a block coordinate
descent method. Under some suitable assumptions, we show that any accumulation
point of the sequence generated by the penalty decomposition methods satisfies
the first-order optimality conditions of a nonlinear reformulation of the
problems. Finally, we test the performance of our methods by applying them to
the matrix completion and nearest low-rank correlation matrix problems. The
computational results demonstrate that our methods are generally comparable or
superior to the existing methods in terms of solution quality.
| [
"['Zhaosong Lu' 'Yong Zhang']",
"Zhaosong Lu and Yong Zhang"
] |
stat.ML cs.LG | null | 1008.5386 | null | null | http://arxiv.org/pdf/1008.5386v1 | 2010-08-31T18:51:43Z | 2010-08-31T18:51:43Z | Mixed Cumulative Distribution Networks | Directed acyclic graphs (DAGs) are a popular framework to express
multivariate probability distributions. Acyclic directed mixed graphs (ADMGs)
are generalizations of DAGs that can succinctly capture much richer sets of
conditional independencies, and are especially useful in modeling the effects
of latent variables implicitly. Unfortunately there are currently no good
parameterizations of general ADMGs. In this paper, we apply recent work on
cumulative distribution networks and copulas to propose one general
construction for ADMG models. We consider a simple parameter estimation
approach, and report some encouraging experimental results.
| [
"Ricardo Silva and Charles Blundell and Yee Whye Teh",
"['Ricardo Silva' 'Charles Blundell' 'Yee Whye Teh']"
] |
q-bio.QM cs.CE cs.LG q-bio.GN | null | 1008.5390 | null | null | http://arxiv.org/pdf/1008.5390v1 | 2010-08-31T18:54:33Z | 2010-08-31T18:54:33Z | Applications of Machine Learning Methods to Quantifying Phenotypic
Traits that Distinguish the Wild Type from the Mutant Arabidopsis Thaliana
Seedlings during Root Gravitropism | Post-genomic research deals with challenging problems in screening genomes of
organisms for particular functions or for their potential as targets of
genetic engineering for desirable biological features. 'Phenotyping' of the
wild type and mutants is a time-consuming and costly effort undertaken by many
individuals.
This article is a preliminary progress report in research on large-scale
automation of phenotyping steps (imaging, informatics and data analysis) needed
to study plant gene-protein networks that influence the growth and development
of plants. Our results underline the significance of phenotypic traits that
are implicit in the dynamics of the plant root's response to sudden changes in
its environmental conditions, such as a sudden re-orientation of the root tip
against the gravity vector. Including dynamic features besides the common
morphological ones has paid off in the design of robust and accurate machine
learning methods that automate a typical phenotyping scenario, i.e.,
distinguishing the wild type from the mutants.
| [
"Hesam T. Dashti, Jernej Tonejc, Adel Ardalan, Alireza F. Siahpirani,\n Sabrina Guettes, Zohreh Sharif, Liya Wang, Amir H. Assadi",
"['Hesam T. Dashti' 'Jernej Tonejc' 'Adel Ardalan' 'Alireza F. Siahpirani'\n 'Sabrina Guettes' 'Zohreh Sharif' 'Liya Wang' 'Amir H. Assadi']"
] |
cs.LG | null | 1009.0117 | null | null | http://arxiv.org/pdf/1009.0117v1 | 2010-09-01T08:29:49Z | 2010-09-01T08:29:49Z | Exploring Language-Independent Emotional Acoustic Features via Feature
Selection | We propose a novel feature selection strategy to discover
language-independent acoustic features that tend to be responsible for emotions
regardless of languages, linguistics and other factors. Experimental results
suggest that the discovered language-independent feature subset yields
performance comparable to the full feature set on various emotional speech
corpora.
| [
"['Arslan Shaukat' 'Ke Chen']",
"Arslan Shaukat and Ke Chen"
] |
cs.LG | null | 1009.0306 | null | null | http://arxiv.org/pdf/1009.0306v1 | 2010-09-02T00:25:58Z | 2010-09-02T00:25:58Z | Fast Overlapping Group Lasso | The group Lasso is an extension of the Lasso for feature selection on
(predefined) non-overlapping groups of features. The non-overlapping group
structure limits its applicability in practice. There have been several recent
attempts to study a more general formulation, where groups of features are
given, potentially with overlaps between the groups. The resulting optimization
is, however, much more challenging to solve due to the group overlaps. In this
paper, we consider the efficient optimization of the overlapping group Lasso
penalized problem. We reveal several key properties of the proximal operator
associated with the overlapping group Lasso, and compute the proximal operator
by solving the smooth and convex dual problem, which allows the use of
gradient-descent-type algorithms for the optimization. We have performed
empirical evaluations using the breast cancer gene expression data set, which
consists of 8,141 genes organized into (overlapping) gene sets. Experimental
results demonstrate the efficiency and effectiveness of the proposed algorithm.
| [
"['Jun Liu' 'Jieping Ye']",
"Jun Liu and Jieping Ye"
] |
cs.LG cs.DS stat.ML | null | 1009.0499 | null | null | http://arxiv.org/pdf/1009.0499v1 | 2010-09-02T18:28:22Z | 2010-09-02T18:28:22Z | A PAC-Bayesian Analysis of Graph Clustering and Pairwise Clustering | We formulate weighted graph clustering as a prediction problem: given a
subset of edge weights we analyze the ability of graph clustering to predict
the remaining edge weights. This formulation enables practical and theoretical
comparison of different approaches to graph clustering as well as comparison of
graph clustering with other possible ways to model the graph. We adapt the
PAC-Bayesian analysis of co-clustering (Seldin and Tishby, 2008; Seldin, 2009)
to derive a PAC-Bayesian generalization bound for graph clustering. The bound
shows that graph clustering should optimize a trade-off between empirical data
fit and the mutual information that clusters preserve on the graph nodes. A
similar trade-off derived from information-theoretic considerations was already
shown to produce state-of-the-art results in practice (Slonim et al., 2005;
Yom-Tov and Slonim, 2009). This paper supports the empirical evidence by
providing a better theoretical foundation, suggesting formal generalization
guarantees, and offering a more accurate way to deal with finite sample issues.
We derive a bound minimization algorithm and show that it provides good results
in real-life problems and that the derived PAC-Bayesian bound is reasonably
tight.
| [
"['Yevgeny Seldin']",
"Yevgeny Seldin"
] |
cs.LG cs.AI | null | 1009.0605 | null | null | http://arxiv.org/pdf/1009.0605v2 | 2011-01-15T15:34:21Z | 2010-09-03T08:36:07Z | Gaussian Process Bandits for Tree Search: Theory and Application to
Planning in Discounted MDPs | We motivate and analyse a new Tree Search algorithm, GPTS, based on recent
theoretical advances in the use of Gaussian Processes for Bandit problems. We
consider tree paths as arms and we assume the target/reward function is drawn
from a GP distribution. The posterior mean and variance, after observing data,
are used to define confidence intervals for the function values, and we
sequentially play arms with highest upper confidence bounds. We give an
efficient implementation of GPTS and we adapt previous regret bounds by
determining the decay rate of the eigenvalues of the kernel matrix on the whole
set of tree paths. We consider two kernels in the feature space of binary
vectors indexed by the nodes of the tree: linear and Gaussian. The regret
grows as the square root of the number of iterations T, up to a logarithmic
factor, with a constant that improves with larger Gaussian kernel widths. We
focus on
practical values of T, smaller than the number of arms. Finally, we apply GPTS
to Open Loop Planning in discounted Markov Decision Processes by modelling the
reward as a discounted sum of independent Gaussian Processes. We report regret
bounds similar to those of the OLOP algorithm.
| [
"['Louis Dorard' 'John Shawe-Taylor']",
"Louis Dorard and John Shawe-Taylor"
] |
stat.ML cs.AI cs.LG | null | 1009.0861 | null | null | http://arxiv.org/pdf/1009.0861v1 | 2010-09-04T19:18:54Z | 2010-09-04T19:18:54Z | On the Estimation of Coherence | Low-rank matrix approximations are often used to help scale standard machine
learning algorithms to large-scale problems. Recently, matrix coherence has
been used to characterize the ability to extract global information from a
subset of matrix entries in the context of these low-rank approximations and
other sampling-based algorithms, e.g., matrix completion, robust PCA. Since
coherence is defined in terms of the singular vectors of a matrix and is
expensive to compute, the practical significance of these results largely
hinges on the following question: Can we efficiently and accurately estimate
the coherence of a matrix? In this paper we address this question. We propose a
novel algorithm for estimating coherence from a small number of columns,
formally analyze its behavior, and derive a new coherence-based matrix
approximation bound based on this analysis. We then present extensive
experimental results on synthetic and real datasets that corroborate our
worst-case theoretical analysis, yet provide strong support for the use of our
proposed algorithm whenever low-rank approximation is being considered. Our
algorithm efficiently and accurately estimates matrix coherence across a wide
range of datasets, and these coherence estimates are excellent predictors of
the effectiveness of sampling-based matrix approximation on a case-by-case
basis.
| [
"Mehryar Mohri, Ameet Talwalkar",
"['Mehryar Mohri' 'Ameet Talwalkar']"
] |
cs.CR cs.LG cs.NI | 10.1109/INFCOM.2011.5934995 | 1009.2275 | null | null | http://arxiv.org/abs/1009.2275v1 | 2010-09-12T23:55:00Z | 2010-09-12T23:55:00Z | PhishDef: URL Names Say It All | Phishing is an increasingly sophisticated method to steal personal user
information using sites that pretend to be legitimate. In this paper, we take
the following steps to identify phishing URLs. First, we carefully select
lexical features of the URLs that are resistant to obfuscation techniques used
by attackers. Second, we evaluate the classification accuracy when using only
lexical features, both automatically and hand-selected, vs. when using
additional features. We show that lexical features are sufficient for all
practical purposes. Third, we thoroughly compare several classification
algorithms, and we propose to use an online method (AROW) that is able to
overcome noisy training data. Based on the insights gained from our analysis,
we propose PhishDef, a phishing detection system that uses only URL names and
combines the above three elements. PhishDef is a highly accurate method (when
compared to state-of-the-art approaches over real datasets), lightweight (thus
appropriate for online and client-side deployment), proactive (based on online
classification rather than blacklists), and resilient to training data
inaccuracies (thus enabling the use of large noisy training data).
| [
"['Anh Le' 'Athina Markopoulou' 'Michalis Faloutsos']",
"Anh Le, Athina Markopoulou, Michalis Faloutsos"
] |
cs.LG | null | 1009.2566 | null | null | http://arxiv.org/pdf/1009.2566v1 | 2010-09-14T03:53:11Z | 2010-09-14T03:53:11Z | Reinforcement Learning by Comparing Immediate Reward | This paper introduces a reinforcement learning approach that compares
immediate rewards using a variation of the Q-Learning algorithm. Unlike
conventional Q-Learning, the proposed algorithm compares the current reward
with the immediate reward of the past move and acts accordingly.
Relative-reward-based Q-learning is an approach towards interactive learning.
Q-Learning is a model-free reinforcement learning method used to train agents.
It is observed that under normal circumstances the algorithm takes more
episodes to reach the optimal Q-value because of its normal, or sometimes
negative, reward. In this new form of the algorithm, agents select only those
actions whose immediate reward signal is higher than that of the previous
move. The contribution of this article is a new Q-Learning algorithm that
maximizes performance and reduces the number of episodes required to reach the
optimal Q-value. The effectiveness of the proposed algorithm is demonstrated
by simulation in a 20 x 20 deterministic grid-world environment, and results
for the two forms of the Q-Learning algorithm are given.
| [
"['Punit Pandey' 'Deepshikha Pandey' 'Shishir Kumar']",
"Punit Pandey, Deepshikha Pandey, Shishir Kumar"
] |
cs.LG | null | 1009.3240 | null | null | http://arxiv.org/pdf/1009.3240v2 | 2011-09-20T18:38:13Z | 2010-09-16T18:40:32Z | A Unified View of Regularized Dual Averaging and Mirror Descent with
Implicit Updates | We study three families of online convex optimization algorithms:
follow-the-proximally-regularized-leader (FTRL-Proximal), regularized dual
averaging (RDA), and composite-objective mirror descent. We first prove
equivalence theorems that show all of these algorithms are instantiations of a
general FTRL update. This provides theoretical insight into previous experimental
observations. In particular, even though the FOBOS composite mirror descent
algorithm handles L1 regularization explicitly, it has been observed that RDA
is even more effective at producing sparsity. Our results demonstrate that
FOBOS uses subgradient approximations to the L1 penalty from previous rounds,
leading to less sparsity than RDA, which handles the cumulative penalty in
closed form. The FTRL-Proximal algorithm can be seen as a hybrid of these two,
and outperforms both on a large, real-world dataset.
Our second contribution is a unified analysis which produces regret bounds
that match (up to logarithmic terms) or improve the best previously known
bounds. This analysis also extends these algorithms in two important ways: we
support a more general type of composite objective and we analyze implicit
updates, which replace the subgradient approximation of the current loss
function with an exact optimization.
| [
"['H. Brendan McMahan']",
"H. Brendan McMahan"
] |