Unnamed: 0 (int64, 0–41k) | title (string, 4–274 chars) | category (string, 5–18 chars) | summary (string, 22–3.66k chars) | theme (string, 8 classes) |
---|---|---|---|---|
40,510 | Self-Organizing Time Map: An Abstraction of Temporal Multivariate
Patterns | cs.LG | This paper adopts and adapts Kohonen's standard Self-Organizing Map (SOM) for
exploratory temporal structure analysis. The Self-Organizing Time Map (SOTM)
applies SOM-type learning to one-dimensional arrays for individual time
units, preserves the orientation with short-term memory, and arranges the arrays
in ascending order of time. The two-dimensional representation of the SOTM
thus attempts twofold topology preservation, where the horizontal direction
preserves time topology and the vertical direction data topology. This enables
discovering the occurrence and exploring the properties of temporal structural
changes in data. For representing qualities and properties of SOTMs, we adapt
measures and visualizations from the standard SOM paradigm, as well as
introduce a measure of temporal structural changes. The functioning of the
SOTM, and its visualizations and quality and property measures, are illustrated
on artificial toy data. The usefulness of the SOTM in a real-world setting is
shown on poverty, welfare and development indicators. | computer science |
40,511 | Scaling Multiple-Source Entity Resolution using Statistically Efficient
Transfer Learning | cs.DB | We consider a serious, previously-unexplored challenge facing almost all
approaches to scaling up entity resolution (ER) to multiple data sources: the
prohibitive cost of labeling training data for supervised learning of
similarity scores for each pair of sources. While there exists a rich
literature describing almost all aspects of pairwise ER, this new challenge is
arising now due to the unprecedented ability to acquire and store data from
online sources, features driven by ER such as enriched search verticals, and
the uniqueness of noisy and missing data characteristics for each source. We
show on real-world and synthetic data that for state-of-the-art techniques, the
reality of heterogeneous sources means that the amount of labeled training data
must scale quadratically in the number of sources, just to maintain constant
precision/recall. We address this challenge with a brand new transfer learning
algorithm which requires far less training data (or equivalently, achieves
superior accuracy with the same data) and is trained using fast convex
optimization. The intuition behind our approach is to adaptively share
structure learned about one scoring problem with all other scoring problems
sharing a data source in common. We demonstrate that our theoretically
motivated approach incurs no runtime cost while it can maintain constant
precision/recall with the cost of labeling increasing only linearly with the
number of sources. | computer science |
40,512 | Analysis of a Statistical Hypothesis Based Learning Mechanism for Faster
crawling | cs.LG | The growth of the world-wide-web (WWW) has expanded from an
intangible collection of web pages into a gigantic hub of web information,
which gradually increases the complexity of the crawling process in a search
engine. A search engine handles a large number of queries from various parts
of the world, and the quality of its answers depends solely on the knowledge
it gathers by means of crawling. Information sharing has become a common habit
of society, carried out by publishing structured, semi-structured and
unstructured resources on the web. This social practice leads to an
exponential growth of web resources, and hence it has become essential to
crawl continuously to keep web knowledge up to date and to track modifications
of existing resources. In this paper, a statistical-hypothesis-based learning
mechanism is incorporated to learn the behavior of crawling speed in different
network environments and to intelligently control the speed of the crawler. A
scaling technique is used to compare the performance of the proposed method
with a standard crawler. High-speed performance is observed after scaling, and
the retrieval of relevant web resources at such high speed is analyzed. | computer science |
40,513 | Metric distances derived from cosine similarity and Pearson and Spearman
correlations | stat.ME | We investigate two classes of transformations of cosine similarity and
Pearson and Spearman correlations into metric distances, utilising the simple
tool of metric-preserving functions. The first class puts anti-correlated
objects maximally far apart. Previously known transforms fall within this
class. The second class collates correlated and anti-correlated objects. An
example of such a transformation that yields a metric distance is the sine
function when applied to centered data. | computer science |
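As a concrete illustration of the two classes described above, the sketch below converts a Pearson correlation into both kinds of distance. Function names are ours, and the specific formulas are the standard metric-preserving transforms d = sqrt(2(1 - r)) and sqrt(1 - r^2), not necessarily the paper's exact constructions:

```python
import math

def pearson(x, y):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    # clamp against floating-point rounding just outside [-1, 1]
    return max(-1.0, min(1.0, cov / (sx * sy)))

def dist_separating(r):
    """First-class style transform: anti-correlated objects (r = -1)
    end up maximally far apart."""
    return math.sqrt(2.0 * (1.0 - r))

def dist_collating(r):
    """Second-class (sine-like) transform: correlated and anti-correlated
    objects are collated, both mapped to distance 0."""
    return math.sqrt(1.0 - r * r)

x = [1.0, 2.0, 3.0, 4.0]
r_anti = pearson(x, [-v for v in x])      # perfectly anti-correlated, r = -1
print(round(dist_separating(r_anti), 6))  # ≈ 2.0 (maximal separation)
print(round(dist_collating(r_anti), 6))   # ≈ 0.0 (collated with r = +1)
```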
40,514 | A Learning Theoretic Approach to Energy Harvesting Communication System
Optimization | cs.LG | A point-to-point wireless communication system in which the transmitter is
equipped with an energy harvesting device and a rechargeable battery, is
studied. Both the energy and the data arrivals at the transmitter are modeled
as Markov processes. Delay-limited communication is considered assuming that
the underlying channel is block fading with memory, and the instantaneous
channel state information is available at both the transmitter and the
receiver. The expected total transmitted data during the transmitter's
activation time is maximized under three different sets of assumptions
regarding the information available at the transmitter about the underlying
stochastic processes. A learning theoretic approach is introduced, which does
not assume any a priori information on the Markov processes governing the
communication system. In addition, online and offline optimization problems are
studied for the same setting. Full statistical knowledge and causal information
on the realizations of the underlying stochastic processes are assumed in the
online optimization problem, while the offline optimization problem assumes
non-causal knowledge of the realizations in advance. Comparing the optimal
solutions in all three frameworks, the performance loss due to the lack of the
transmitter's information regarding the behaviors of the underlying Markov
processes is quantified. | computer science |
40,515 | Identification of Probabilities of Languages | cs.LG | We consider the problem of inferring the probability distribution associated
with a language, given data consisting of an infinite sequence of elements of
the language. We do this under two assumptions on the algorithms concerned: (i)
like a real-life algorithm it has round-off errors, and (ii) it has no
round-off errors. Assuming (i) we (a) consider a probability mass function of
the elements of the language if the data are drawn independent identically
distributed (i.i.d.), provided the probability mass function is computable and
has a finite expectation. We give an effective procedure to almost surely
identify in the limit the target probability mass function using the Strong Law
of Large Numbers. Second (b) we treat the case of possibly incomputable
probability mass functions in the above setting. In this case we can only
converge pointwise to the target probability mass function almost surely.
Third (c) we consider the case where the data are dependent assuming they are
typical for at least one computable measure and the language is finite. There
is an effective procedure to identify by infinite recurrence a nonempty subset
of the computable measures according to which the data is typical. Here we use
the theory of Kolmogorov complexity. Assuming (ii) we obtain the weaker result
for (a) that the target distribution is identified by infinite recurrence
almost surely; (b) stays the same as under assumption (i). We consider the
associated predictions. | computer science |
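The i.i.d. case (a) rests on the Strong Law of Large Numbers: relative frequencies converge almost surely to the target probability mass function. A toy sketch of that frequency-based estimate follows; this illustrates only the statistical intuition, not the paper's effective identification procedure:

```python
import random

def empirical_pmf(draws):
    """Relative frequencies of an i.i.d. sample; by the Strong Law of Large
    Numbers these converge almost surely to the true probability mass
    function as the sequence grows."""
    counts = {}
    for w in draws:
        counts[w] = counts.get(w, 0) + 1
    n = len(draws)
    return {w: c / n for w, c in counts.items()}

true_pmf = {"a": 0.5, "b": 0.3, "c": 0.2}
rng = random.Random(0)
words, weights = zip(*true_pmf.items())
sample = rng.choices(words, weights=weights, k=100_000)
est = empirical_pmf(sample)
for w in words:
    print(w, round(est[w], 2))  # close to 0.5, 0.3, 0.2
```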
40,516 | Fixed-rank matrix factorizations and Riemannian low-rank optimization | cs.LG | Motivated by the problem of learning a linear regression model whose
parameter is a large fixed-rank non-symmetric matrix, we consider the
optimization of a smooth cost function defined on the set of fixed-rank
matrices. We adopt the geometric framework of optimization on Riemannian
quotient manifolds. We study the underlying geometries of several well-known
fixed-rank matrix factorizations and then exploit the Riemannian quotient
geometry of the search space in the design of a class of gradient descent and
trust-region algorithms. The proposed algorithms generalize our previous
results on fixed-rank symmetric positive semidefinite matrices, apply to a
broad range of applications, scale to high-dimensional problems and confer a
geometric basis to recent contributions on the learning of fixed-rank
non-symmetric matrices. We make connections with existing algorithms in the
context of low-rank matrix completion and discuss relative usefulness of the
proposed framework. Numerical experiments suggest that the proposed algorithms
compete with the state-of-the-art and that manifold optimization offers an
effective and versatile framework for the design of machine learning algorithms
that learn a fixed-rank matrix. | computer science |
40,517 | Design of Spectrum Sensing Policy for Multi-user Multi-band Cognitive
Radio Network | cs.LG | Finding an optimal sensing policy for a particular access policy and sensing
scheme is a laborious combinatorial problem that requires the system model
parameters to be known. In practice, the parameters or the model itself may not
be completely known making reinforcement learning methods appealing. In this
paper a non-parametric reinforcement learning-based method is developed for
sensing and accessing multi-band radio spectrum in multi-user cognitive radio
networks. A suboptimal sensing policy search algorithm is proposed for a
particular multi-user multi-band access policy and the randomized
Chair-Varshney rule. The randomized Chair-Varshney rule is used to reduce the
probability of false alarms under a constraint on the probability of detection
that protects the primary user. The simulation results show that the proposed
method achieves a sum profit (e.g. data rate) close to the optimal sensing
policy while achieving the desired probability of detection. | computer science |
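For background, the (deterministic) Chair-Varshney fusion rule weights each local binary decision by the log-likelihood ratio of that sensor's detection and false-alarm probabilities; the paper uses a randomized variant. A minimal sketch, with our own function and parameter names:

```python
import math

def chair_varshney(decisions, pd, pf, prior_ratio=1.0):
    """Likelihood-ratio fusion of binary local decisions u_i, given each
    sensor's detection probability pd_i and false-alarm probability pf_i.
    Declares H1 (returns 1) when the weighted sum exceeds the log prior
    ratio P(H0)/P(H1)."""
    stat = 0.0
    for u, d, f in zip(decisions, pd, pf):
        if u == 1:
            stat += math.log(d / f)
        else:
            stat += math.log((1.0 - d) / (1.0 - f))
    return 1 if stat > math.log(prior_ratio) else 0

# Three reliable sensors all reporting "signal present":
print(chair_varshney([1, 1, 1], pd=[0.9, 0.9, 0.9], pf=[0.1, 0.1, 0.1]))  # 1
```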
40,518 | Securing Your Transactions: Detecting Anomalous Patterns In XML
Documents | cs.CR | XML transactions are used in many information systems to store data and
interact with other systems. Abnormal transactions, the result of either an
on-going cyber attack or the actions of a benign user, can potentially harm the
interacting systems and therefore they are regarded as a threat. In this paper
we address the problem of anomaly detection and localization in XML
transactions using machine learning techniques. We present a new XML anomaly
detection framework, XML-AD. Within this framework, an automatic method for
extracting features from XML transactions was developed as well as a practical
method for transforming XML features into vectors of fixed dimensionality. With
these two methods in place, the XML-AD framework makes it possible to utilize
general learning algorithms for anomaly detection. Central to the functioning
of the framework is a novel multi-univariate anomaly detection algorithm,
ADIFA. The framework was evaluated on four XML transactions datasets, captured
from real information systems, in which it achieved over 89% true positive
detection rate with less than a 0.2% false positive rate. | computer science |
40,519 | Active Learning for Crowd-Sourced Databases | cs.LG | Crowd-sourcing has become a popular means of acquiring labeled data for a
wide variety of tasks where humans are more accurate than computers, e.g.,
labeling images, matching objects, or analyzing sentiment. However, relying
solely on the crowd is often impractical even for data sets with thousands of
items, due to time and cost constraints of acquiring human input (which costs
pennies and minutes per label). In this paper, we propose algorithms for
integrating machine learning into crowd-sourced databases, with the goal of
allowing crowd-sourcing applications to scale, i.e., to handle larger datasets
at lower costs. The key observation is that, in many of the above tasks, humans
and machine learning algorithms can be complementary, as humans are often more
accurate but slow and expensive, while algorithms are usually less accurate,
but faster and cheaper.
Based on this observation, we present two new active learning algorithms to
combine humans and algorithms together in a crowd-sourced database. Our
algorithms are based on the theory of non-parametric bootstrap, which makes our
results applicable to a broad class of machine learning models. Our results, on
three real-life datasets collected with Amazon's Mechanical Turk, and on 15
well-known UCI data sets, show that our methods on average ask humans to label
one to two orders of magnitude fewer items to achieve the same accuracy as a
baseline that labels random images, and two to eight times fewer questions than
previous active learning schemes. | computer science |
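The bootstrap-based idea can be sketched as follows: train many models on resamples of the labeled data, and query the items on which the resampled models disagree most. A toy one-dimensional version; the threshold classifier and all names are illustrative, not the paper's algorithms:

```python
import random

def train_threshold(points):
    """Fit a 1-D threshold classifier: midpoint between the class means."""
    pos = [x for x, y in points if y == 1]
    neg = [x for x, y in points if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2.0

def bootstrap_uncertainty(labeled, unlabeled, B=50, seed=0):
    """Train B models on bootstrap resamples of the labeled set; an item's
    uncertainty is the variance of the resampled models' predictions."""
    rng = random.Random(seed)
    votes, models = [0] * len(unlabeled), 0
    for _ in range(B):
        sample = [rng.choice(labeled) for _ in labeled]
        if len({y for _, y in sample}) < 2:
            continue  # resample missed a class; skip this model
        models += 1
        t = train_threshold(sample)
        for i, x in enumerate(unlabeled):
            votes[i] += int(x > t)
    p = [v / models for v in votes]    # fraction of models voting "1"
    return [q * (1.0 - q) for q in p]  # Bernoulli variance per item

labeled = [(0.0, 0), (0.1, 0), (0.2, 0), (0.9, 1), (1.0, 1), (1.1, 1)]
unlabeled = [0.05, 0.55, 1.05]
u = bootstrap_uncertainty(labeled, unlabeled)
# 0.55 sits at the decision boundary, so it is the item to query first
print(max(range(len(u)), key=lambda i: u[i]))
```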
40,520 | On the Sensitivity of Shape Fitting Problems | cs.CG | In this article, we study shape fitting problems, $\epsilon$-coresets, and
total sensitivity. We focus on the $(j,k)$-projective clustering problems,
including $k$-median/$k$-means, $k$-line clustering, $j$-subspace
approximation, and the integer $(j,k)$-projective clustering problem. We derive
upper bounds of total sensitivities for these problems, and obtain
$\epsilon$-coresets using these upper bounds. Using a dimension-reduction type
argument, we are able to greatly simplify earlier results on total sensitivity
for the $k$-median/$k$-means clustering problems, and obtain
positively-weighted $\epsilon$-coresets for several variants of the
$(j,k)$-projective clustering problem. We also extend an earlier result on
$\epsilon$-coresets for the integer $(j,k)$-projective clustering problem in
fixed dimension to the case of high dimension. | computer science |
40,521 | Locality-Sensitive Hashing with Margin Based Feature Selection | cs.LG | We propose a learning method with feature selection for Locality-Sensitive
Hashing. Locality-Sensitive Hashing converts feature vectors into bit arrays.
These bit arrays can be used to perform similarity searches and personal
authentication. The proposed method starts with bit arrays longer than those
ultimately used for similarity and other searches, and selects by learning the
bits that will be kept. We demonstrate that this method can effectively perform
optimization
for cases such as fingerprint images with a large number of labels and
extremely few data that share the same labels, as well as verifying that it is
also effective for natural images, handwritten digits, and speech features. | computer science |
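As background, the classical random-hyperplane construction shows how Locality-Sensitive Hashing turns feature vectors into bit arrays (the paper's learning step would then select a subset of these bits). A minimal sketch with illustrative names:

```python
import random

def make_hyperplanes(dim, n_bits, seed=0):
    """Draw n_bits random Gaussian hyperplanes."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(n_bits)]

def lsh_bits(vec, planes):
    """One bit per hyperplane: the sign of the projection onto it."""
    return [int(sum(p * v for p, v in zip(plane, vec)) >= 0.0)
            for plane in planes]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

planes = make_hyperplanes(dim=4, n_bits=16)
x = [1.0, 0.5, -0.2, 0.3]
near = [1.0, 0.5, -0.2, 0.31]    # almost identical vector
far = [-v for v in x]            # opposite direction
print(hamming(lsh_bits(x, planes), lsh_bits(near, planes)))  # small
print(hamming(lsh_bits(x, planes), lsh_bits(far, planes)))   # large
```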
40,522 | Learning Robust Low-Rank Representations | cs.LG | In this paper we present a comprehensive framework for learning robust
low-rank representations by combining and extending recent ideas for learning
fast sparse coding regressors with structured non-convex optimization
techniques. This approach connects robust principal component analysis (RPCA)
with dictionary learning techniques and allows its approximation via trainable
encoders. We propose an efficient feed-forward architecture derived from an
optimization algorithm designed to exactly solve robust low dimensional
projections. This architecture, in combination with different training
objective functions, allows the regressors to be used as online approximants of
the exact offline RPCA problem or as RPCA-based neural networks. Simple
modifications of these encoders can handle challenging extensions, such as the
inclusion of geometric data transformations. We present several examples with
real data from image, audio, and video processing. When used to approximate
RPCA, our basic implementation shows several orders of magnitude speedup
compared to the exact solvers with almost no performance degradation. We show
the strength of the inclusion of learning to the RPCA approach on a music
source separation application, where the encoders outperform the exact RPCA
algorithms, which are already reported to produce state-of-the-art results on a
benchmark database. Our preliminary implementation on an iPad shows
faster-than-real-time performance with minimal latency. | computer science |
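For context, the exact RPCA problem that the encoders approximate decomposes a matrix into a low-rank part plus a sparse part. Below is a minimal numpy sketch of the classical alternating proximal steps (singular-value thresholding and soft-thresholding); parameters and names are ours, and this is not the paper's trainable architecture:

```python
import numpy as np

def soft(x, lam):
    """Elementwise soft-thresholding (proximal step for the l1 penalty)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def svt(x, tau):
    """Singular-value thresholding (proximal step for the nuclear norm)."""
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    return u @ np.diag(soft(s, tau)) @ vt

def rpca_alternate(m, tau=1.0, lam=0.1, iters=50):
    """Alternately estimate a low-rank part L and a sparse part S of M."""
    s_mat = np.zeros_like(m)
    for _ in range(iters):
        l_mat = svt(m - s_mat, tau)
        s_mat = soft(m - l_mat, lam)
    return l_mat, s_mat

rng = np.random.default_rng(0)
low = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 30))  # rank-2 component
outliers = np.zeros((30, 30))
outliers[rng.random((30, 30)) < 0.05] = 5.0                # sparse corruption
l_hat, s_hat = rpca_alternate(low + outliers)
# the soft-threshold step bounds the residual entrywise by lam:
print(np.abs(low + outliers - l_hat - s_hat).max() <= 0.1 + 1e-9)  # True
```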
40,523 | Gene selection with guided regularized random forest | cs.LG | The regularized random forest (RRF) was recently proposed for feature
selection by building only one ensemble. In RRF the features are evaluated on a
part of the training data at each tree node. We derive an upper bound for the
number of distinct Gini information gain values in a node, and show that many
features can share the same information gain at a node with a small number of
instances and a large number of features. Therefore, in a node with a small
number of instances, RRF is likely to select a feature not strongly relevant.
Here an enhanced RRF, referred to as the guided RRF (GRRF), is proposed. In
GRRF, the importance scores from an ordinary random forest (RF) are used to
guide the feature selection process in RRF. Experiments on 10 gene data sets
show that the accuracy performance of GRRF is, in general, more robust than RRF
when their parameters change. GRRF is computationally efficient, can select
compact feature subsets, and has competitive accuracy performance, compared to
RRF, varSelRF and LASSO logistic regression (with evaluations from an RF
classifier). Also, RF applied to the features selected by RRF with the minimal
regularization outperforms RF applied to all the features for most of the data
sets considered here. Therefore, if accuracy is considered more important than
the size of the feature subset, RRF with the minimal regularization may be
considered. We use the accuracy performance of RF, a strong classifier, to
evaluate feature selection methods, and illustrate that weak classifiers are
less capable of capturing the information contained in a feature subset. Both
RRF and GRRF were implemented in the "RRF" R package available at CRAN, the
official R package archive. | computer science |
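The guidance step can be sketched as a penalized gain comparison: features not yet selected are penalized, and the penalty is relaxed in proportion to the ordinary RF importance score. The coefficient below is our paraphrase of the GRRF idea, not the paper's exact formula:

```python
def guided_select(gains, importances, selected, gamma=0.5):
    """Choose the split feature by penalized gain: a feature already in the
    selected set pays no penalty; a new feature's gain is shrunk, less so
    when an ordinary random forest rated it important (the guidance)."""
    max_imp = max(importances) or 1.0
    best, best_score = None, float("-inf")
    for i, gain in enumerate(gains):
        if i in selected:
            score = gain
        else:
            score = ((1.0 - gamma) + gamma * importances[i] / max_imp) * gain
        if score > best_score:
            best, best_score = i, score
    return best

# Equal raw gains: the RF importance breaks the tie toward feature 0.
print(guided_select([0.30, 0.30], importances=[0.9, 0.1], selected=set()))  # 0
# A feature already selected wins over a slightly better new one.
print(guided_select([0.30, 0.32], importances=[0.5, 0.2], selected={0}))    # 0
```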
40,524 | CloudSVM : Training an SVM Classifier in Cloud Computing Systems | cs.LG | In the conventional approach, distributed support vector machine (SVM)
algorithms are trained over pre-configured intranet/internet environments to
find an optimal classifier. These methods are complicated and costly for large
datasets. Hence, we propose a method, referred to as the Cloud SVM training
mechanism (CloudSVM), that works in a cloud computing environment with the
MapReduce technique for distributed machine learning applications.
Accordingly, (i) the SVM algorithm is trained in distributed cloud storage
servers that work concurrently; (ii) all support vectors from every trained
cloud node are merged; and (iii) these two steps are iterated until the SVM
converges to the optimal classifier function. Large-scale data sets cannot be
trained with the SVM algorithm on a single computer. The results of this study
are important for training large-scale data sets for machine learning
applications. We show that iterative training of the split data set in a cloud
computing environment using SVM converges to a global optimal classifier in a
finite number of iterations. | computer science |
40,525 | An Efficient Algorithm for Upper Bound on the Partition Function of
Nucleic Acids | cs.LG | It has been shown that the minimum free energy structure for RNAs and RNA-RNA
interaction is often incorrect due to inaccuracies in the energy parameters and
inherent limitations of the energy model. In contrast, ensemble based
quantities such as melting temperature and equilibrium concentrations can be
more reliably predicted. Even structure prediction by sampling from the
ensemble and clustering those structures by Sfold [7] has proven to be more
reliable than minimum free energy structure prediction. The main obstacle for
ensemble based approaches is the computational complexity of the partition
function and base pairing probabilities. For instance, the space complexity of
the partition function for RNA-RNA interaction is $O(n^4)$ and the time
complexity is $O(n^6)$ which are prohibitively large [4,12]. Our goal in this
paper is to give a fast algorithm, based on sparse folding, to calculate an
upper bound on the partition function. Our work is based on the recent
algorithm of Hazan and Jaakkola [10]. The space complexity of our algorithm is
the same as that of sparse folding algorithms, and the time complexity of our
algorithm is $O(MFE(n)\ell)$ for single RNA and $O(MFE(m, n)\ell)$ for RNA-RNA
interaction in practice, in which $MFE$ is the running time of sparse folding
and $\ell \leq n$ ($\ell \leq n + m$) is a sequence dependent parameter. | computer science |
40,526 | Symmetric Collaborative Filtering Using the Noisy Sensor Model | cs.IR | Collaborative filtering is the process of making recommendations regarding
the potential preference of a user, for example shopping on the Internet, based
on the preference ratings of the user and a number of other users for various
items. This paper considers collaborative filtering based on
explicit multi-valued ratings. To evaluate the algorithms, we consider only
"pure" collaborative filtering, using ratings exclusively, and no other
information about the people or items. Our approach is to predict a user's
preferences regarding a particular item by using other people who rated that
item, and other items rated by the user, as noisy sensors. The noisy sensor
model uses Bayes' theorem to compute the probability distribution for the
user's rating of a new item. We give two variant models: in one, we learn a
classical normal linear regression model of how users rate items; in another,
we assume different users rate items the same, but the accuracy of the sensors
needs to be learned. We compare these variant models with state-of-the-art
techniques and show how they are significantly better, whether a user has
rated only two items or many. We report empirical results using the EachMovie
database (http://research.compaq.com/SRC/eachmovie/) of movie ratings. We also
show that by considering item similarity along with user similarity, the
accuracy of the prediction increases. | computer science |
40,527 | A comparison of SVM and RVM for Document Classification | cs.IR | Document classification is a task of assigning a new unclassified document to
one of a predefined set of classes. Content-based document classification
uses the content of the document with some weighting criteria to assign it to
one of the predefined classes. It is a major task in library science,
electronic document management systems and information sciences. This paper
investigates document classification by using two different classification
techniques (1) Support Vector Machine (SVM) and (2) Relevance Vector Machine
(RVM). SVM is a supervised machine learning technique that can be used for
classification task. In its basic form, SVM represents the instances of the
data into space and tries to separate the distinct classes by a maximum
possible wide gap (hyperplane) that separates the classes. On the other hand,
RVM uses probabilistic measure to define this separation space. RVM uses
Bayesian inference to obtain succinct solution, thus RVM uses significantly
fewer basis functions. Experimental studies on three standard text
classification datasets reveal that although RVM takes more training time, its
classification is much better as compared to SVM. | computer science |
40,528 | The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization | cs.NA | Non-negative matrix factorization (NMF) has become a popular machine learning
approach to many problems in text mining, speech and image processing,
bio-informatics and seismic data analysis to name a few. In NMF, a matrix of
non-negative data is approximated by the low-rank product of two matrices with
non-negative entries. In this paper, the approximation quality is measured by
the Kullback-Leibler divergence between the data and its low-rank
reconstruction. The existence of the simple multiplicative update (MU)
algorithm for computing the matrix factors has contributed to the success of
NMF. Despite the availability of algorithms showing faster convergence, MU
remains popular due to its simplicity. In this paper, a diagonalized Newton
algorithm (DNA) is proposed showing faster convergence while the implementation
remains simple and suitable for high-rank problems. The DNA algorithm is
applied to various publicly available data sets, showing a substantial speed-up
on modern hardware. | computer science |
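For reference, the multiplicative updates that the DNA algorithm is compared against can be sketched in a few lines of numpy; each round of MU provably does not increase the Kullback-Leibler divergence. Names and dimensions here are illustrative:

```python
import numpy as np

def kl_div(v, wh):
    """Generalized Kullback-Leibler divergence D(V || WH)."""
    return np.sum(v * np.log(v / wh) - v + wh)

def mu_round(v, w, h):
    """One round of the classical multiplicative updates for KL-NMF;
    each round provably does not increase the divergence."""
    w *= ((v / (w @ h)) @ h.T) / h.sum(axis=1)
    h *= (w.T @ (v / (w @ h))) / w.sum(axis=0)[:, None]
    return w, h

rng = np.random.default_rng(0)
v = rng.random((20, 15)) + 0.1   # strictly positive data matrix
w = rng.random((20, 4)) + 0.1    # non-negative factors, inner rank 4
h = rng.random((4, 15)) + 0.1
before = kl_div(v, w @ h)
for _ in range(30):
    w, h = mu_round(v, w, h)
after = kl_div(v, w @ h)
print(after <= before)  # True: the objective decreases monotonically
```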
40,529 | Block Coordinate Descent for Sparse NMF | cs.LG | Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data
analysis. An important variant is the sparse NMF problem which arises when we
explicitly require the learnt features to be sparse. A natural measure of
sparsity is the L$_0$ norm; however, its optimization is NP-hard. Mixed norms,
such as the L$_1$/L$_2$ measure, have been shown to model sparsity robustly, based
on intuitive attributes that such measures need to satisfy. This is in contrast
to computationally cheaper alternatives such as the plain L$_1$ norm. However,
present algorithms designed for optimizing the mixed norm L$_1$/L$_2$ are slow,
and other formulations for sparse NMF have been proposed, such as those based
on the L$_1$ and L$_0$ norms. Our proposed algorithm allows us to solve the mixed norm
sparsity constraints while not sacrificing computation time. We present
experimental evidence on real-world datasets that shows our new algorithm
performs an order of magnitude faster compared to the current state-of-the-art
solvers optimizing the mixed norm and is suitable for large-scale datasets. | computer science |
40,530 | Revisiting Natural Gradient for Deep Networks | cs.LG | We evaluate natural gradient, an algorithm originally proposed in Amari
(1997), for learning deep models. The contributions of this paper are as
follows. We show the connection between natural gradient and three other
recently proposed methods for training deep models: Hessian-Free (Martens,
2010), Krylov Subspace Descent (Vinyals and Povey, 2012) and TONGA (Le Roux et
al., 2008). We describe how one can use unlabeled data to improve the
generalization error obtained by natural gradient and empirically evaluate the
robustness of the algorithm to the ordering of the training set compared to
stochastic gradient descent. Finally we extend natural gradient to incorporate
second order information alongside the manifold information and provide a
benchmark of the new algorithm using a truncated Newton approach for inverting
the metric matrix instead of using a diagonal approximation of it. | computer science |
40,531 | Empirical Analysis of Predictive Algorithms for Collaborative Filtering | cs.IR | Collaborative filtering or recommender systems use a database about user
preferences to predict additional topics or products a new user might like. In
this paper we describe several algorithms designed for this task, including
techniques based on correlation coefficients, vector-based similarity
calculations, and statistical Bayesian methods. We compare the predictive
accuracy of the various methods in a set of representative problem domains. We
use two basic classes of evaluation metrics. The first characterizes accuracy
over a set of individual predictions in terms of average absolute deviation.
The second estimates the utility of a ranked list of suggested items. This
metric uses an estimate of the probability that a user will see a
recommendation in an ordered list. Experiments were run for datasets associated
with 3 application areas, 4 experimental protocols, and the 2 evaluation
metrics for the various algorithms. Results indicate that for a wide range of
conditions, Bayesian networks with decision trees at each node and correlation
methods outperform Bayesian-clustering and vector-similarity methods. Between
correlation and Bayesian networks, the preferred method depends on the nature
of the dataset, nature of the application (ranked versus one-by-one
presentation), and the availability of votes with which to make predictions.
Other considerations include the size of database, speed of predictions, and
learning time. | computer science |
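A minimal sketch of the correlation-based class of predictors discussed above (a GroupLens-style weighted-deviation formula; function names are ours, and details differ from the paper's exact algorithms):

```python
import math

def pearson_on_common(a, b):
    """Pearson correlation over the items two users both rated."""
    common = [i for i in a if i in b]
    if len(common) < 2:
        return 0.0
    xs, ys = [a[i] for i in common], [b[i] for i in common]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs)
                    * sum((y - my) ** 2 for y in ys))
    return cov / den if den else 0.0

def predict(target, others, item):
    """Predict the target user's rating of an item: the target's mean rating
    plus the correlation-weighted deviation of the neighbors on this item."""
    mean_t = sum(target.values()) / len(target)
    num = den = 0.0
    for other in others:
        if item not in other:
            continue
        w = pearson_on_common(target, other)
        mean_o = sum(other.values()) / len(other)
        num += w * (other[item] - mean_o)
        den += abs(w)
    return mean_t + num / den if den else mean_t

alice = {"m1": 5, "m2": 1}
bob = {"m1": 5, "m2": 1, "m3": 4}   # perfectly correlated with Alice
print(round(predict(alice, [bob], "m3"), 3))  # 3.667
```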
40,532 | Generalization Bounds for Domain Adaptation | cs.LG | In this paper, we provide a new framework to obtain the generalization bounds
of the learning process for domain adaptation, and then apply the derived
bounds to analyze the asymptotical convergence of the learning process. Without
loss of generality, we consider two kinds of representative domain adaptation:
one is with multiple sources and the other is combining source and target data.
In particular, we use the integral probability metric to measure the
difference between two domains. For either kind of domain adaptation, we
develop a related Hoeffding-type deviation inequality and a symmetrization
inequality to achieve the corresponding generalization bound based on the
uniform entropy number. We also generalize the classical McDiarmid's
inequality to a more general setting where independent random variables can
take values from different domains. By using this inequality, we then obtain
generalization bounds based on the Rademacher complexity. Afterwards, we
analyze the asymptotic convergence and the rate of convergence of the learning
process for both kinds of domain adaptation. We also discuss the factors
that affect the asymptotic behavior of the learning process; the numerical
experiments support our theoretical findings as well. | computer science |
40,533 | Image Retrieval based on Bag-of-Words model | cs.IR | This article gives a survey of the bag-of-words (BoW), or bag-of-features, model
in image retrieval systems. In recent years, large-scale image retrieval has shown
significant potential in both industry applications and research problems. As
local descriptors like SIFT demonstrate great discriminative power in solving
vision problems like object recognition, image classification and annotation,
more and more state-of-the-art large scale image retrieval systems are trying
to rely on them. A common way to achieve this is first quantizing local
descriptors into visual words, and then applying scalable textual indexing and
retrieval schemes. We call this the bag-of-words or bag-of-features model.
The goal of this survey is to give an overview of this model and introduce
different strategies when building the system based on this model. | computer science |
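The quantization step described above can be sketched as nearest-centroid assignment followed by a term-frequency histogram. A real system would build the vocabulary by k-means over SIFT descriptors; here the two visual words are hard-coded toy points:

```python
def quantize(descriptor, vocabulary):
    """Map a local descriptor to the index of its nearest visual word."""
    def dist2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(range(len(vocabulary)),
               key=lambda k: dist2(descriptor, vocabulary[k]))

def bow_histogram(descriptors, vocabulary):
    """Bag-of-words vector: term frequency of each visual word in the image."""
    hist = [0] * len(vocabulary)
    for d in descriptors:
        hist[quantize(d, vocabulary)] += 1
    return hist

# A toy 2-word vocabulary standing in for k-means cluster centers
vocab = [(0.0, 0.0), (1.0, 1.0)]
descs = [(0.1, 0.0), (0.9, 1.1), (1.0, 0.9)]
print(bow_histogram(descs, vocab))  # [1, 2]
```

The resulting histograms can then be fed to standard inverted-index text-retrieval machinery, which is the scalability point the survey makes.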
40,534 | North Atlantic Right Whale Contact Call Detection | cs.LG | The North Atlantic right whale (Eubalaena glacialis) is an endangered
species. These whales continuously suffer from deadly vessel impacts alongside
the eastern coast of North America. There have been countless efforts to save
the remaining 350 - 400 of them. One of the most prominent works is done by
Marinexplore and Cornell University. A system of hydrophones linked to
satellite-connected buoys has been deployed in the whales' habitat. These
hydrophones record and transmit live sounds to a base station. These recordings
might contain the right whale contact call as well as many other noises. The
noise rate increases rapidly in vessel-busy areas such as by the Boston harbor.
This paper presents and studies the problem of detecting the North Atlantic
right whale contact call in the presence of noise and other marine life
sounds. A novel algorithm was developed to preprocess the sound waves before a
tree based hierarchical classifier is used to classify the data and provide a
score. The developed model was trained with 30,000 data points made available
through the Cornell University Whale Detection Challenge program. Results
showed that the developed algorithm had a success rate close to 85% in detecting
the presence of the North Atlantic right whale. | computer science |
40,535 | Dynamic Ad Allocation: Bandits with Budgets | cs.LG | We consider an application of multi-armed bandits to internet advertising
(specifically, to dynamic ad allocation in the pay-per-click model, with
uncertainty on the click probabilities). We focus on an important practical
issue: advertisers are constrained in how much money they can spend on
their ad campaigns. To the best of our knowledge, this issue has not been
considered in prior work on bandit-based approaches to ad allocation.
We define a simple, stylized model where an algorithm picks one ad to display
in each round, and each ad has a \emph{budget}: the maximal amount of money
that can be spent on this ad. This model admits a natural variant of UCB1, a
well-known algorithm for multi-armed bandits with stochastic rewards. We derive
strong provable guarantees for this algorithm. | computer science |
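A minimal sketch of the kind of UCB1 variant this abstract mentions, assuming Bernoulli clicks and a fixed cost per click; the function names, the per-click cost model, and the rule that drops an arm once its budget cannot cover another click are illustrative assumptions, not the paper's exact algorithm.

```python
import math
import random

def ucb_with_budgets(click_prob, budgets, cost_per_click, horizon, seed=0):
    """UCB1-style ad allocation where each ad (arm) has a budget: an arm
    is no longer played once its remaining budget cannot cover a click."""
    rng = random.Random(seed)
    k = len(click_prob)
    counts = [0] * k           # plays per arm
    means = [0.0] * k          # empirical click-through rates
    remaining = list(budgets)
    spend = [0.0] * k
    for t in range(1, horizon + 1):
        active = [i for i in range(k) if remaining[i] >= cost_per_click[i]]
        if not active:
            break
        untried = [i for i in active if counts[i] == 0]
        if untried:
            arm = untried[0]   # play every active arm once first
        else:
            # Standard UCB1 index, restricted to arms with budget left.
            arm = max(active,
                      key=lambda i: means[i] + math.sqrt(2 * math.log(t) / counts[i]))
        clicked = rng.random() < click_prob[arm]
        counts[arm] += 1
        means[arm] += (clicked - means[arm]) / counts[arm]
        if clicked:
            remaining[arm] -= cost_per_click[arm]
            spend[arm] += cost_per_click[arm]
    return spend, counts
```

By construction the spend on each ad never exceeds its budget, which is the constraint the abstract highlights.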
40,536 | KERT: Automatic Extraction and Ranking of Topical Keyphrases from
Content-Representative Document Titles | cs.LG | We introduce KERT (Keyphrase Extraction and Ranking by Topic), a framework
for topical keyphrase generation and ranking. By shifting from the
unigram-centric traditional methods of unsupervised keyphrase extraction to a
phrase-centric approach, we are able to directly compare and rank phrases of
different lengths. We construct a topical keyphrase ranking function which
implements the four criteria that represent high quality topical keyphrases
(coverage, purity, phraseness, and completeness). The effectiveness of our
approach is demonstrated on two collections of content-representative titles in
the domains of Computer Science and Physics. | computer science |
40,537 | Identifying Pairs in Simulated Bio-Medical Time-Series | cs.LG | The paper presents a time-series-based classification approach to identify
similarities in pairs of simulated human-generated patterns. An example for a
pattern is a time-series representing a heart rate during a specific
time-range, wherein the time-series is a sequence of data points that represent
the changes in the heart rate values. A bio-medical simulator system was
developed to acquire a collection of 7,871 price patterns of financial
instruments. The financial instruments, traded in real time on three American
stock exchanges (NASDAQ, NYSE, and AMEX), simulate bio-medical measurements. The
system simulates a human in which each price pattern represents one bio-medical
sensor. Data provided during trading hours from the stock exchanges allowed
real-time classification. Classification is based on two new machine learning
techniques: self-labeling, which allows the application of supervised learning
methods to unlabeled time-series, and similarity ranking, which is applied with
a decision tree learning algorithm to classify time-series regardless of type
and quantity. | computer science |
40,538 | Highly Scalable, Parallel and Distributed AdaBoost Algorithm using Light
Weight Threads and Web Services on a Network of Multi-Core Machines | cs.DC | AdaBoost is an important algorithm in machine learning and is being widely
used in object detection. AdaBoost works by iteratively selecting the best
amongst weak classifiers, and then combines several weak classifiers to obtain
a strong classifier. Even though AdaBoost has proven to be very effective, its
learning execution time can be quite large depending upon the application e.g.,
in face detection, the learning time can be several days. Due to its increasing
use in computer vision applications, the learning time needs to be drastically
reduced so that an adaptive near real time object detection system can be
incorporated. In this paper, we develop a hybrid parallel and distributed
AdaBoost algorithm that exploits the multiple cores in a CPU via lightweight
threads, and also uses multiple machines via a web service software
architecture to achieve high scalability. We present a novel hierarchical web
services based distributed architecture and achieve nearly linear speedup up to
the number of processors available to us. In comparison with the previously
published work, which used a single level master-slave parallel and distributed
implementation [1] and only achieved a speedup of 2.66 on four nodes, we
achieve a speedup of 95.1 on 31 workstations each having a quad-core processor,
resulting in a learning time of only 4.8 seconds per feature. | computer science |
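The weak-learner selection loop that dominates AdaBoost's learning time (the step the paper distributes across threads and web-service workers) can be sketched serially as below. The `weak_learners` callable interface and the threshold stumps in the usage are hypothetical simplifications, not the paper's face-detection features.

```python
import math

def adaboost(examples, labels, weak_learners, rounds):
    """Serial AdaBoost with labels in {-1, +1}: each round selects the
    weak classifier with lowest weighted error and reweights examples."""
    n = len(examples)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        # Evaluating every weak learner's weighted error is the part
        # that can run in parallel across cores and machines.
        errs = [sum(wi for wi, x, y in zip(w, examples, labels) if h(x) != y)
                for h in weak_learners]
        best = min(range(len(weak_learners)), key=errs.__getitem__)
        err = min(max(errs[best], 1e-10), 1 - 1e-10)  # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        h = weak_learners[best]
        ensemble.append((alpha, h))
        # Up-weight misclassified examples, then renormalize.
        w = [wi * math.exp(-alpha * y * h(x)) for wi, x, y in zip(w, examples, labels)]
        s = sum(w)
        w = [wi / s for wi in w]
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1
```

For example, with threshold stumps `lambda x, t=t: 1 if x > t else -1` as the weak pool, a few rounds suffice to separate simple 1-D data.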
40,539 | Predicting Risk-of-Readmission for Congestive Heart Failure Patients: A
Multi-Layer Approach | cs.LG | Mitigating risk-of-readmission of Congestive Heart Failure (CHF) patients
within 30 days of discharge is important because such readmissions are not only
expensive but also a critical indicator of provider care and quality of
treatment. Accurately predicting the risk-of-readmission may allow hospitals to
identify high-risk patients and eventually improve quality of care by
identifying factors that contribute to such readmissions in many scenarios. In
this paper, we investigate the problem of predicting risk-of-readmission as a
supervised learning problem, using a multi-layer classification approach.
Earlier contributions inadequately attempted to assess a risk value for 30-day
readmission by building a direct predictive model, in contrast to our approach.
We first split the problem into various stages, (a) at risk in general (b) risk
within 60 days (c) risk within 30 days, and then build suitable classifiers for
each stage, thereby increasing the ability to accurately predict the risk using
multiple layers of decision. The advantage of our approach is that we can use
different classification models for the subtasks that are more suited for the
respective problems. Moreover, each of the subtasks can be solved using
different features and training data leading to a highly confident diagnosis or
risk compared to a one-shot single layer approach. An experimental evaluation
on actual hospital patient record data from Multicare Health Systems shows that
our model is significantly better at predicting risk-of-readmission of CHF
patients within 30 days after discharge compared to prior attempts. | computer science |
40,540 | A Novel Approach for Single Gene Selection Using Clustering and
Dimensionality Reduction | cs.CE | We extend the standard rough set-based approach to deal with huge numbers of
numeric attributes versus a small number of available objects. Here, a novel
approach combining clustering with dimensionality reduction, the hybrid Fuzzy
C-Means-Quick Reduct (FCMQR) algorithm, is proposed for single gene selection.
Gene selection is a process to select genes which are more informative. It is
one of the important steps in knowledge discovery. The problem is that all
genes are not important in gene expression data. Some of the genes may be
redundant, and others may be irrelevant and noisy. In this study, the entire
dataset is divided in proper grouping of similar genes by applying Fuzzy C
Means (FCM) algorithm. Highly class-discriminative genes are then selected
based on their degree of dependence by applying the rough-set-based Quick
Reduct algorithm to all the resultant clusters. The Average Correlation Value
(ACV) is calculated for these genes, and clusters with an ACV of 1 are deemed
significant, their classification accuracy being equal to or higher than that
of the entire dataset. The proposed algorithm is evaluated using WEKA classifiers and
compared. Finally, experimental results related to the leukemia cancer data
confirm that our approach is quite promising, though it surely requires further
research. | computer science |
40,541 | Large Margin Low Rank Tensor Analysis | cs.LG | Other than vector representations, the direct objects of human cognition are
generally high-order tensors, such as 2D images and 3D textures. From this
fact, two interesting questions naturally arise: How does the human brain
represent these tensor perceptions in a "manifold" way, and how can they be
recognized on the "manifold"? In this paper, we present a supervised model to
learn the intrinsic structure of the tensors embedded in a high dimensional
Euclidean space. With the fixed point continuation procedures, our model
automatically and jointly discovers the optimal dimensionality and the
representations of the low dimensional embeddings. This makes it an effective
simulation of the cognitive process of the human brain. Furthermore, the
generalization of our model based on similarity between the learned low
dimensional embeddings can be viewed as a counterpart of recognition in the
human brain. Experiments on applications for object recognition and face recognition
demonstrate the superiority of our proposed model over state-of-the-art
approaches. | computer science |
40,542 | R3MC: A Riemannian three-factor algorithm for low-rank matrix completion | math.OC | We exploit the versatile framework of Riemannian optimization on quotient
manifolds to develop R3MC, a nonlinear conjugate-gradient method for low-rank
matrix completion. The underlying search space of fixed-rank matrices is
endowed with a novel Riemannian metric that is tailored to the least-squares
cost. Numerical comparisons suggest that R3MC robustly outperforms
state-of-the-art algorithms across different problem instances, especially
those that combine scarcely sampled and ill-conditioned data. | computer science |
40,543 | Approximation Algorithms for Bayesian Multi-Armed Bandit Problems | cs.DS | In this paper, we consider several finite-horizon Bayesian multi-armed bandit
problems with side constraints which are computationally intractable (NP-Hard)
and for which no optimal (or near optimal) algorithms are known to exist with
sub-exponential running time. All of these problems violate the standard
exchange property, which assumes that the reward from the play of an arm is not
contingent upon when the arm is played. Not only are index policies suboptimal
in these contexts, there has been little analysis of such policies in these
problem settings. We show that if we consider near-optimal policies, in the
sense of approximation algorithms, then (near) index policies exist.
Conceptually, if we can find policies that satisfy an approximate version of
the exchange property, namely, that the reward from the play of an arm depends
on when the arm is played to within a constant factor, then we have an avenue
towards solving these problems. However, such an approximate version of the
idling bandit property does not hold on a per-play basis, but is shown to hold
in a global sense. Clearly, such a property is not necessarily true of
arbitrary single arm policies and finding such single arm policies is
nontrivial. We show that by restricting the state spaces of arms we can find
single arm policies and that these single arm policies can be combined into
global (near) index policies where the approximate version of the exchange
property is true in expectation. The number of different bandit problems that
can be addressed by this technique already demonstrates its wide applicability. | computer science |
40,544 | Online Alternating Direction Method (longer version) | cs.LG | Online optimization has emerged as a powerful tool in large-scale optimization.
In this paper, we introduce efficient online optimization algorithms based on
the alternating direction method (ADM), which can solve online convex
optimization under linear constraints where the objective could be non-smooth.
We introduce new proof techniques for ADM in the batch setting, which yield an
O(1/T) convergence rate for ADM and form the basis for regret analysis in
the online setting. We consider two scenarios in the online setting, based on
whether an additional Bregman divergence is needed or not. In both settings, we
establish regret bounds for both the objective function as well as constraints
violation for general and strongly convex functions. We also consider inexact
ADM updates where certain terms are linearized to yield efficient updates and
show the stochastic convergence rates. In addition, we briefly discuss that
online ADM can be used as a projection-free online learning algorithm in some
scenarios. Preliminary results are presented to illustrate the performance of
the proposed algorithms. | computer science |
40,545 | Cluster coloring of the Self-Organizing Map: An information
visualization perspective | cs.LG | This paper takes an information visualization perspective to visual
representations in the general SOM paradigm. This involves viewing SOM-based
visualizations through the eyes of Bertin's and Tufte's theories on data
graphics. The regular grid shape of the Self-Organizing Map (SOM), while being
a virtue for linking visualizations to it, restricts representation of cluster
structures. From the viewpoint of information visualization, this paper
provides a general, yet simple, solution to projection-based coloring of the
SOM that reveals structures. First, the proposed color space is easy to
construct and customize to the purpose of use, while aiming at being
perceptually correct and informative through two separable dimensions. Second,
the coloring method is not dependent on any specific method of projection, but
is rather modular to fit any objective function suitable for the task at hand.
The cluster coloring is illustrated on two datasets: the iris data, and welfare
and poverty indicators. | computer science |
40,546 | Parallel Coordinate Descent Newton Method for Efficient
$\ell_1$-Regularized Minimization | cs.LG | The recent years have witnessed advances in parallel algorithms for large
scale optimization problems. Notwithstanding demonstrated success, existing
algorithms that parallelize over features are usually limited by divergence
issues under high parallelism or require data preprocessing to alleviate these
problems. In this work, we propose a Parallel Coordinate Descent Newton
algorithm using multidimensional approximate Newton steps (PCDN), where the
off-diagonal elements of the Hessian are set to zero to enable parallelization.
It randomly partitions the feature set into $b$ bundles/subsets of size
$P$, and sequentially processes each bundle by first computing the descent
directions for each feature in parallel and then conducting $P$-dimensional
line search to obtain the step size. We show that: (1) PCDN is guaranteed to
converge globally despite increasing parallelism; (2) PCDN converges to the
specified accuracy $\epsilon$ within the limited iteration number of
$T_\epsilon$, and $T_\epsilon$ decreases with increasing parallelism (bundle
size $P$). Using the implementation technique of maintaining intermediate
quantities, we minimize the data transfer and synchronization cost of the
$P$-dimensional line search. For concreteness, the proposed PCDN algorithm is
applied to $\ell_1$-regularized logistic regression and $\ell_2$-loss SVM.
Experimental evaluations on six benchmark datasets show that the proposed PCDN
algorithm exploits parallelism well and outperforms the state-of-the-art
methods in speed without losing accuracy. | computer science |
40,547 | A Fuzzy Based Approach to Text Mining and Document Clustering | cs.LG | Fuzzy logic deals with degrees of truth. In this paper, we have shown how to
apply fuzzy logic in text mining in order to perform document clustering. We
took an example of document clustering where the documents had to be clustered
into two categories. The method involved cleaning up the text and stemming of
words. Then, we chose m features which differ significantly in their
word frequencies (WF), normalized by document length, between documents
belonging to these two clusters. The documents to be clustered were represented
as a collection of m normalized WF values. Fuzzy c-means (FCM) algorithm was
used to cluster these documents into two clusters. After the FCM execution
finished, the documents in the two clusters were analysed for the values of
their respective m features. It was known that documents belonging to a
document type, say X, tend to have higher WF values for some particular
features. If the documents belonging to a cluster had higher WF values for
those same features, then that cluster was said to represent X. By fuzzy logic,
we not only get the cluster name, but also the degree to which a document
belongs to a cluster. | computer science |
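The FCM clustering step at the core of this approach can be sketched with numpy as below; this is a minimal sketch that assumes the feature construction (m normalized word-frequency values per document) has already been done upstream, and the parameter names are illustrative.

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: rows of X are documents represented by
    normalized word-frequency features; returns the fuzzy membership
    matrix U (degree of belonging per cluster) and the cluster centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)  # memberships sum to 1 per document
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.clip(d, 1e-12, None)
        # Standard FCM membership update from distances to centers.
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers
```

As the abstract emphasizes, the output U gives not only a cluster assignment (its row-wise argmax) but also the degree to which each document belongs to each cluster.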
40,548 | From-Below Approximations in Boolean Matrix Factorization: Geometry and
New Algorithm | cs.NA | We present new results on Boolean matrix factorization and a new algorithm
based on these results. The results emphasize the significance of
factorizations that provide from-below approximations of the input matrix.
While the previously proposed algorithms do not consider the possibly different
significance of different matrix entries, our results help measure such
significance and suggest where to focus when computing factors. An experimental
evaluation of the new algorithm on both synthetic and real data demonstrates
its good performance in terms of good coverage by the first k factors as well
as a small number of factors needed for exact decomposition and indicates that
the algorithm outperforms the available ones in these terms. We also propose
future research topics. | computer science |
40,549 | An efficient reduction of ranking to classification | cs.LG | This paper describes an efficient reduction of the learning problem of
ranking to binary classification. The reduction guarantees an average pairwise
misranking regret of at most that of the binary classifier, improving on a
recent result of Balcan et al., which only guarantees a factor of 2. Moreover,
our reduction applies to a broader class of ranking loss functions, admits a
simpler proof, and the expected running time complexity of our algorithm in
terms of number of calls to a classifier or preference function is improved
from $\Omega(n^2)$ to $O(n \log n)$. In addition, when the top $k$ ranked
elements only are required ($k \ll n$), as in many applications in information
extraction or search engines, the time complexity of our algorithm can be
further reduced to $O(k \log k + n)$. Our reduction and algorithm are thus
practical for realistic applications where the number of points to rank exceeds
several thousands. Much of our results also extend beyond the bipartite case
previously studied.
Our reduction is randomized. To complement our result, we also derive
lower bounds on any deterministic reduction from binary (preference)
classification to ranking, implying that our use of a randomized reduction is
essentially necessary for the guarantees we provide. | computer science |
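The O(n log n) comparison pattern behind reductions of this kind is randomized QuickSort driven by the learned pairwise preference function. The sketch below only illustrates that pattern, with `prefer` standing in for a hypothetical binary preference classifier; it omits the regret analysis that is the paper's actual contribution.

```python
import random

def quicksort_rank(items, prefer, rng=None):
    """Rank items with randomized QuickSort, using a learned pairwise
    preference function as the comparator; prefer(u, v) is True when u
    should be ranked ahead of v. Expected O(n log n) calls to prefer."""
    rng = rng or random.Random(0)
    if len(items) <= 1:
        return list(items)
    i = rng.randrange(len(items))
    pivot, rest = items[i], items[:i] + items[i + 1:]
    ahead = [x for x in rest if prefer(x, pivot)]
    behind = [x for x in rest if not prefer(x, pivot)]
    return quicksort_rank(ahead, prefer, rng) + [pivot] + quicksort_rank(behind, prefer, rng)
```

With a consistent preference function this recovers an exact sort; with a noisy learned classifier it is the random pivot choice that keeps the expected misranking regret controlled.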
40,550 | Bias-Variance Techniques for Monte Carlo Optimization: Cross-validation
for the CE Method | cs.NA | In this paper, we examine the CE method in the broad context of Monte Carlo
Optimization (MCO) and Parametric Learning (PL), a type of machine learning. A
well-known overarching principle used to improve the performance of many PL
algorithms is the bias-variance tradeoff. This tradeoff has been used to
improve PL algorithms ranging from Monte Carlo estimation of integrals, to
linear estimation, to general statistical estimation. Moreover, as has been
described previously, MCO is very closely related to PL. Owing to this similarity, the
bias-variance tradeoff affects MCO performance, just as it does PL performance.
In this article, we exploit the bias-variance tradeoff to enhance the
performance of MCO algorithms. We use the technique of cross-validation, a
technique based on the bias-variance tradeoff, to significantly improve the
performance of the Cross Entropy (CE) method, which is an MCO algorithm. In
previous work we have confirmed that other PL techniques improve the performance
of other MCO algorithms. We conclude that the many techniques pioneered in PL
could be investigated as ways to improve MCO algorithms in general, and the CE
method in particular. | computer science |
40,551 | Blind Cognitive MAC Protocols | cs.NI | We consider the design of cognitive Medium Access Control (MAC) protocols
enabling an unlicensed (secondary) transmitter-receiver pair to communicate
over the idle periods of a set of licensed channels, i.e., the primary network.
The objective is to maximize data throughput while maintaining the
synchronization between secondary users and avoiding interference with licensed
(primary) users. No statistical information about the primary traffic is
assumed to be available a-priori to the secondary user. We investigate two
distinct sensing scenarios. In the first, the secondary transmitter is capable
of sensing all the primary channels, whereas it senses one channel only in the
second scenario. In both cases, we propose MAC protocols that efficiently learn
the statistics of the primary traffic online. Our simulation results
demonstrate that the proposed blind protocols asymptotically achieve the
throughput obtained when prior knowledge of primary traffic statistics is
available. | computer science |
40,552 | A Simple Linear Ranking Algorithm Using Query Dependent Intercept
Variables | cs.IR | The LETOR website contains three information retrieval datasets used as a
benchmark for testing machine learning ideas for ranking. Algorithms
participating in the challenge are required to assign score values to search
results for a collection of queries, and are measured using standard IR ranking
measures (NDCG, precision, MAP) that depend only on the relative score-induced
order of the results. Similarly to many of the ideas proposed in the
participating algorithms, we train a linear classifier. In contrast with other
participating algorithms, we define an additional free variable (intercept, or
benchmark) for each query. This allows expressing the fact that results for
different queries are incomparable for the purpose of determining relevance.
The cost of this idea is the addition of relatively few nuisance parameters.
Our approach is simple, and we used a standard logistic regression library to
test it. The results beat the reported participating algorithms. Hence, it
seems promising to combine our approach with other more complex ideas. | computer science |
40,553 | Median topographic maps for biomedical data sets | cs.LG | Median clustering extends popular neural data analysis methods such as the
self-organizing map or neural gas to general data structures given by a
dissimilarity matrix only. This offers flexible and robust global data
inspection methods which are particularly suited for a variety of data as
occurs in biomedical domains. In this chapter, we give an overview about median
clustering and its properties and extensions, with a particular focus on
efficient implementations adapted to large scale data analysis. | computer science |
40,554 | Sailing the Information Ocean with Awareness of Currents: Discovery and
Application of Source Dependence | cs.DB | The Web has enabled the availability of a huge amount of useful information,
but has also eased the ability to spread false information and rumors across
multiple sources, making it hard to distinguish between what is true and what
is not. Recent examples include the premature Steve Jobs obituary, the second
bankruptcy of United airlines, the creation of Black Holes by the operation of
the Large Hadron Collider, etc. Since it is important to permit the expression
of dissenting and conflicting opinions, it would be a fallacy to try to ensure
that the Web provides only consistent information. However, to help in
separating the wheat from the chaff, it is essential to be able to determine
dependence between sources. Given the huge number of data sources and the vast
volume of conflicting data available on the Web, doing so in a scalable manner
is extremely challenging and has not been addressed by existing work yet.
In this paper, we present a set of research problems and propose some
preliminary solutions on the issues involved in discovering dependence between
sources. We also discuss how this knowledge can benefit a variety of
technologies, such as data integration and Web 2.0, that help users manage and
access the totality of the available information from various sources. | computer science |
40,555 | Distribution-Specific Agnostic Boosting | cs.LG | We consider the problem of boosting the accuracy of weak learning algorithms
in the agnostic learning framework of Haussler (1992) and Kearns et al. (1992).
Known algorithms for this problem (Ben-David et al., 2001; Gavinsky, 2002;
Kalai et al., 2008) follow the same strategy as boosting algorithms in the PAC
model: the weak learner is executed on the same target function but over
different distributions on the domain. We demonstrate boosting algorithms for
the agnostic learning framework that only modify the distribution on the labels
of the points (or, equivalently, modify the target function). This allows
boosting a distribution-specific weak agnostic learner to a strong agnostic
learner with respect to the same distribution.
When applied to the weak agnostic parity learning algorithm of Goldreich and
Levin (1989) our algorithm yields a simple PAC learning algorithm for DNF and
an agnostic learning algorithm for decision trees over the uniform distribution
using membership queries. These results substantially simplify Jackson's famous
DNF learning algorithm (1994) and the recent result of Gopalan et al. (2008).
We also strengthen the connection to hard-core set constructions discovered
by Klivans and Servedio (1999) by demonstrating that hard-core set
constructions that achieve the optimal hard-core set size (given by Holenstein
(2005) and Barak et al. (2009)) imply distribution-specific agnostic boosting
algorithms. Conversely, our boosting algorithm gives a simple hard-core set
construction with an (almost) optimal hard-core set size. | computer science |
40,556 | Bounding the Sensitivity of Polynomial Threshold Functions | cs.CC | We give the first non-trivial upper bounds on the average sensitivity and
noise sensitivity of polynomial threshold functions. More specifically, for a
Boolean function f on n variables equal to the sign of a real, multivariate
polynomial of total degree d we prove
1) The average sensitivity of f is at most O(n^{1-1/(4d+6)}) (we also give a
combinatorial proof of the bound O(n^{1-1/2^d}).
2) The noise sensitivity of f with noise rate \delta is at most
O(\delta^{1/(4d+6)}).
Previously, only bounds for the linear case were known. Along the way we show
new structural theorems about random restrictions of polynomial threshold
functions obtained via hypercontractivity. These structural results may be of
independent interest as they provide a generic template for transforming
problems related to polynomial threshold functions defined on the Boolean
hypercube to polynomial threshold functions defined in Gaussian space. | computer science |
40,557 | "Memory foam" approach to unsupervised learning | nlin.AO | We propose an alternative approach to construct an artificial learning
system, which naturally learns in an unsupervised manner. Its mathematical
prototype is a dynamical system, which automatically shapes its vector field in
response to the input signal. The vector field converges to a gradient of a
multi-dimensional probability density distribution of the input process, taken
with negative sign. The most probable patterns are represented by the stable
fixed points, whose basins of attraction are formed automatically. The
performance of this system is illustrated with musical signals. | computer science |
40,558 | Data Stability in Clustering: A Closer Look | cs.LG | We consider the model introduced by Bilu and Linial (2010), who study
problems for which the optimal clustering does not change when distances are
perturbed. They show that even when a problem is NP-hard, it is sometimes
possible to obtain efficient algorithms for instances resilient to certain
multiplicative perturbations, e.g. on the order of $O(\sqrt{n})$ for max-cut
clustering. Awasthi et al. (2010) consider center-based objectives, and Balcan
and Liang (2011) analyze the $k$-median and min-sum objectives, giving
efficient algorithms for instances resilient to certain constant multiplicative
perturbations.
Here, we are motivated by the question of to what extent these assumptions
can be relaxed while allowing for efficient algorithms. We show there is little
room to improve these results by giving NP-hardness lower bounds for both the
$k$-median and min-sum objectives. On the other hand, we show that constant
multiplicative resilience parameters can be so strong as to make the clustering
problem trivial, leaving only a narrow range of resilience parameters for which
clustering is interesting. We also consider a model of additive perturbations
and give a correspondence between additive and multiplicative notions of
stability. Our results provide a close examination of the consequences of
assuming stability in data. | computer science |
40,559 | Private Data Release via Learning Thresholds | cs.CC | This work considers computationally efficient privacy-preserving data
release. We study the task of analyzing a database containing sensitive
information about individual participants. Given a set of statistical queries
on the data, we want to release approximate answers to the queries while also
guaranteeing differential privacy---protecting each participant's sensitive
data.
Our focus is on computationally efficient data release algorithms; we seek
algorithms whose running time is polynomial, or at least sub-exponential, in
the data dimensionality. Our primary contribution is a computationally
efficient reduction from differentially private data release for a class of
counting queries, to learning thresholded sums of predicates from a related
class.
We instantiate this general reduction with a variety of algorithms for
learning thresholds. These instantiations yield several new results for
differentially private data release. As two examples, taking {0,1}^d to be the
data domain (of dimension d), we obtain differentially private algorithms for:
(*) Releasing all k-way conjunctions. For any given k, the resulting data
release algorithm has bounded error as long as the database is of size at least
d^{O(\sqrt{k\log(k\log d)})}. The running time is polynomial in the database
size.
(*) Releasing a (1-\gamma)-fraction of all parity queries. For any \gamma
\geq \poly(1/d), the algorithm has bounded error as long as the database is of
size at least \poly(d). The running time is polynomial in the database size.
Several other instantiations yield further results for privacy-preserving
data release. Of the two results highlighted above, the first learning
algorithm uses techniques for representing thresholded sums of predicates as
low-degree polynomial threshold functions. The second learning algorithm is
based on Jackson's Harmonic Sieve algorithm [Jackson 1997]. | computer science |
40,560 | Optimal Adaptive Learning in Uncontrolled Restless Bandit Problems | math.OC | In this paper we consider the problem of learning the optimal policy for
uncontrolled restless bandit problems. In an uncontrolled restless bandit
problem, there is a finite set of arms, each of which when pulled yields a
positive reward. There is a player who sequentially selects one of the arms at
each time step. The goal of the player is to maximize its undiscounted reward
over a time horizon T. The reward process of each arm is a finite state Markov
chain, whose transition probabilities are unknown by the player. State
transitions of each arm is independent of the selection of the player. We
propose a learning algorithm with logarithmic regret uniformly over time with
respect to the optimal finite horizon policy. Our results extend the optimal
adaptive learning of MDPs to POMDPs. | computer science |
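The paper's setting has Markovian (restless) rewards, and its algorithm is more involved than what fits here. As a background sketch only, the confidence-bound mechanism behind logarithmic regret can be illustrated with the classic UCB1 index on i.i.d. Bernoulli arms (the arm probabilities and horizon below are made up for illustration):

```python
import math
import random

def ucb1(success_probs, horizon, seed=0):
    """Classic UCB1 index policy on Bernoulli arms (i.i.d. rewards).

    Background sketch only: the paper handles Markovian reward processes,
    but the same optimism-under-uncertainty idea drives its logarithmic
    regret guarantee.
    """
    rng = random.Random(seed)
    k = len(success_probs)
    pulls, mean = [0] * k, [0.0] * k
    for t in range(1, horizon + 1):
        if t <= k:
            a = t - 1                                  # play every arm once first
        else:
            a = max(range(k),
                    key=lambda i: mean[i] + math.sqrt(2 * math.log(t) / pulls[i]))
        r = 1.0 if rng.random() < success_probs[a] else 0.0
        pulls[a] += 1
        mean[a] += (r - mean[a]) / pulls[a]            # running average reward
    return pulls

pulls = ucb1([0.2, 0.5, 0.8], horizon=5000)
print(pulls.index(max(pulls)))  # the best arm (index 2) receives most pulls
```

Suboptimal arms are pulled only O(log T / gap^2) times, which is the source of the logarithmic regret the abstract refers to.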
40,561 | Performance and Convergence of Multi-user Online Learning | cs.MA | We study the problem of allocating multiple users to a set of wireless
channels in a decentralized manner when the channel qualities are
time-varying and unknown to the users, and accessing the same channel by
multiple users leads to reduced quality due to interference. In such a setting
the users need to learn not only the inherent channel qualities but also the
best allocation of users to channels so as to maximize the social welfare.
Assuming that the users adopt a certain online learning algorithm, we
investigate under what conditions the socially optimal allocation is
achievable. In particular we examine the effect of different levels of
knowledge the users may have and the amount of communication and cooperation.
The general conclusion is that as the cooperation of users decreases and the
uncertainty about channel payoffs increases, it becomes harder to achieve the
socially optimal allocation. | computer science |
40,562 | Using Incomplete Information for Complete Weight Annotation of Road
Networks -- Extended Version | cs.LG | We are witnessing increasing interests in the effective use of road networks.
For example, to enable effective vehicle routing, weighted-graph models of
transportation networks are used, where the weight of an edge captures some
cost associated with traversing the edge, e.g., greenhouse gas (GHG) emissions
or travel time. It is a precondition to using a graph model for routing that
all edges have weights. Weights that capture travel times and GHG emissions can
be extracted from GPS trajectory data collected from the network. However, GPS
trajectory data typically lack the coverage needed to assign weights to all
edges. This paper formulates and addresses the problem of annotating all edges
in a road network with travel cost based weights from a set of trips in the
network that cover only a small fraction of the edges, each with an associated
ground-truth travel cost. A general framework is proposed to solve the problem.
Specifically, the problem is modeled as a regression problem and solved by
minimizing a judiciously designed objective function that takes into account
the topology of the road network. In particular, the use of weighted PageRank
values of edges is explored for assigning appropriate weights to all edges, and
the property of directional adjacency of edges is also taken into account to
assign weights. Empirical studies with weights capturing travel time and GHG
emissions on two road networks (Skagen, Denmark, and North Jutland, Denmark)
offer insight into the design properties of the proposed techniques and offer
evidence that the techniques are effective. | computer science |
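The abstract mentions weighted PageRank values of edges as one ingredient for propagating weights to uncovered edges. As a hedged sketch of that building block, here is plain power-iteration PageRank on a small weighted adjacency matrix; to score edges as the abstract suggests, one would run it on a graph whose nodes are the road edges (the toy graph and the default damping factor are illustrative, not taken from the paper):

```python
import numpy as np

def pagerank(adj, damping=0.85, iters=100):
    """Power-iteration PageRank on a (possibly weighted) adjacency matrix."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    P = np.zeros_like(adj, dtype=float)
    np.divide(adj, out, out=P, where=out > 0)   # row-stochastic transition matrix
    P[out.ravel() == 0] = 1.0 / n               # dangling nodes jump uniformly
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * (r @ P)
    return r

# toy graph: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0
adj = np.array([[0, 1, 1],
                [0, 0, 1],
                [1, 0, 0]], dtype=float)
r = pagerank(adj)
print(int(r.argmax()))  # node 2, which both 0 and 1 point to, ranks highest
```

The resulting scores sum to one and reward well-connected edges, which is what makes them usable as a topology-aware feature in the regression the abstract describes.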
40,563 | MonoStream: A Minimal-Hardware High Accuracy Device-free WLAN
Localization System | cs.NI | Device-free (DF) localization is an emerging technology that allows the
detection and tracking of entities that do not carry any devices nor
participate actively in the localization process. Typically, DF systems require
a large number of transmitters and receivers to achieve acceptable accuracy,
which is not available in many scenarios such as homes and small businesses. In
this paper, we introduce MonoStream as an accurate single-stream DF
localization system that leverages the rich Channel State Information (CSI) as
well as MIMO information from the physical layer to provide accurate DF
localization with only one stream. To boost its accuracy and attain low
computational requirements, MonoStream models the DF localization problem as an
object recognition problem and uses a novel set of CSI-context features and
techniques with proven accuracy and efficiency. Experimental evaluation in two
typical testbeds, with a side-by-side comparison with the state-of-the-art,
shows that MonoStream can achieve an accuracy of 0.95m with at least 26%
enhancement in median distance error using a single stream only. This
enhancement in accuracy comes with an efficient execution of less than 23ms per
location update on a typical laptop. This highlights the potential of
MonoStream usage for real-time DF tracking applications. | computer science |
40,564 | Theoretical Issues for Global Cumulative Treatment Analysis (GCTA) | stat.AP | Adaptive trials are now mainstream science. Recently, researchers have taken
the adaptive trial concept to its natural conclusion, proposing what we call
"Global Cumulative Treatment Analysis" (GCTA). Similar to the adaptive trial,
decision making and data collection and analysis in the GCTA are continuous and
integrated, and treatments are ranked in accord with the statistics of this
information, combined with what offers the most information gain. Where GCTA
differs from an adaptive trial, or, for that matter, from any trial design, is
that all patients are implicitly participants in the GCTA process, regardless
of whether they are formally enrolled in a trial. This paper discusses some of
the theoretical and practical issues that arise in the design of a GCTA, along
with some preliminary thoughts on how they might be approached. | computer science |
40,565 | OFF-Set: One-pass Factorization of Feature Sets for Online
Recommendation in Persistent Cold Start Settings | cs.LG | One of the most challenging recommendation tasks is recommending to a new,
previously unseen user. This is known as the 'user cold start' problem.
Assuming certain features or attributes of users are known, one approach for
handling new users is to initially model them based on their features.
Motivated by an ad targeting application, this paper describes an extreme
online recommendation setting where the cold start problem is perpetual. Every
user is encountered by the system just once, receives a recommendation, and
either consumes or ignores it, registering a binary reward.
We introduce One-pass Factorization of Feature Sets, OFF-Set, a novel
recommendation algorithm based on Latent Factor analysis, which models users by
mapping their features to a latent space. Furthermore, OFF-Set is able to model
non-linear interactions between pairs of features. OFF-Set is designed for
purely online recommendation, performing lightweight updates of its model per
each recommendation-reward observation. We evaluate OFF-Set against several
state of the art baselines, and demonstrate its superiority on real
ad-targeting data. | computer science |
40,566 | Normalized Google Distance of Multisets with Applications | cs.IR | Normalized Google distance (NGD) is a relative semantic distance based on the
World Wide Web (or any other large electronic database, for instance Wikipedia)
and a search engine that returns aggregate page counts. The earlier NGD between
pairs of search terms (including phrases) is not sufficient for all
applications. We propose an NGD of finite multisets of search terms that is
better for many applications. This gives a relative semantics shared by a
multiset of search terms. We give applications and compare the results with
those obtained using the pairwise NGD. The derivation of the NGD method is
based on Kolmogorov complexity. | computer science |
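As background, the earlier pairwise NGD that this work extends to multisets is computed from aggregate page counts with the standard formula: NGD(x, y) = (max{log f(x), log f(y)} - log f(x, y)) / (log N - min{log f(x), log f(y)}). A minimal sketch (the counts below are made up for illustration):

```python
import math

def ngd(fx, fy, fxy, n):
    """Pairwise normalized Google distance from aggregate page counts.

    fx, fy -- pages containing term x (resp. y); fxy -- pages containing
    both terms; n -- total number of pages indexed by the search engine.
    """
    lx, ly, lxy = math.log(fx), math.log(fy), math.log(fxy)
    return (max(lx, ly) - lxy) / (math.log(n) - min(lx, ly))

# made-up counts: frequent co-occurrence -> small distance
print(round(ngd(1e6, 1e6, 9e5, 1e10), 3))   # → 0.011
# rare co-occurrence -> large distance
print(round(ngd(1e6, 1e6, 1e2, 1e10), 3))   # → 1.0
```

The multiset NGD proposed in the paper generalizes this by aggregating counts over a whole multiset of terms rather than a single pair.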
40,567 | Fast Stochastic Alternating Direction Method of Multipliers | cs.LG | In this paper, we propose a new stochastic alternating direction method of
multipliers (ADMM) algorithm, which incrementally approximates the full
gradient in the linearized ADMM formulation. Besides having a low per-iteration
complexity as existing stochastic ADMM algorithms, the proposed algorithm
improves the convergence rate on convex problems from $O(\frac 1 {\sqrt{T}})$
to $O(\frac 1 T)$, where $T$ is the number of iterations. This matches the
convergence rate of the batch ADMM algorithm, but without the need to visit all
the samples in each iteration. Experiments on the graph-guided fused lasso
demonstrate that the new algorithm is significantly faster than
state-of-the-art stochastic and batch ADMM algorithms. | computer science |
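The incremental full-gradient idea can be sketched outside the ADMM setting: keep the most recent gradient seen for each sample and step along the running average, so each iteration has one-sample cost but a full-gradient-like direction. This is a SAG-style illustration on plain least squares, not the paper's linearized-ADMM algorithm; the step size and iteration count are illustrative choices, not values from the paper.

```python
import numpy as np

def incremental_avg_gradient(A, b, steps=20000, lr=0.01, seed=0):
    """SAG-style sketch: approximate the full gradient incrementally.

    Stores the last gradient computed for each sample; each iteration
    refreshes one of them and steps along the average of all stored
    gradients.
    """
    n, d = A.shape
    g = np.zeros((n, d))        # memorised per-sample gradients
    g_sum = np.zeros(d)
    x = np.zeros(d)
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        i = rng.integers(n)
        gi = A[i] * (A[i] @ x - b[i])   # gradient of 0.5*(a_i.x - b_i)^2
        g_sum += gi - g[i]
        g[i] = gi
        x -= lr * g_sum / n             # step along the averaged gradient
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(100, 5))
x_true = np.arange(1.0, 6.0)
x_hat = incremental_avg_gradient(A, A @ x_true)
print(round(float(np.max(np.abs(x_hat - x_true))), 4))
```

The same averaging device, plugged into the linearized ADMM update, is what lets the paper match the batch O(1/T) rate without touching all samples per iteration.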
40,568 | Nested Nonnegative Cone Analysis | stat.ME | Motivated by the analysis of nonnegative data objects, a novel Nested
Nonnegative Cone Analysis (NNCA) approach is proposed to overcome some
drawbacks of existing methods. Applying the traditional PCA/SVD method to
nonnegative data often causes the approximation matrix to leave the nonnegative
cone, which leads to non-interpretable and sometimes nonsensical results. The
nonnegative matrix factorization (NMF) approach overcomes this issue, however
the NMF approximation matrices suffer several drawbacks: 1) the factorization
may not be unique, 2) the resulting approximation matrix at a specific rank may
not be unique, and 3) the subspaces spanned by the approximation matrices at
different ranks may not be nested. These drawbacks cause trouble in
determining the number of components and in multi-scale (in ranks)
interpretability. The NNCA approach proposed in this paper naturally generates
a nested structure, and is shown to be unique at each rank. Simulations are
used in this paper to illustrate the drawbacks of the traditional methods, and
the usefulness of the NNCA method. | computer science |
40,569 | Decentralized Online Big Data Classification - a Bandit Framework | cs.LG | Distributed, online data mining systems have emerged as a result of
applications requiring analysis of large amounts of correlated and
high-dimensional data produced by multiple distributed data sources. We propose
a distributed online data classification framework where data is gathered by
distributed data sources and processed by a heterogeneous set of distributed
learners which learn online, at run-time, how to classify the different data
streams either by using their locally available classification functions or by
helping each other by classifying each other's data. Importantly, since the
data is gathered at different locations, sending the data to another learner
for processing incurs additional costs such as delays, and hence this will only
be beneficial if the gain from better classification exceeds the costs. We
assume that the classification functions available to each processing element
are fixed, but their prediction accuracy for various types of incoming data is
unknown and can change dynamically over time, and thus needs to be learned
online. We model the problem of joint classification by
the distributed and heterogeneous learners from multiple data sources as a
distributed contextual bandit problem where each data instance is characterized by a
specific context. We develop distributed online learning algorithms for which
we can prove that they have sublinear regret. Compared to prior work in
distributed online data mining, our work is the first to provide analytic
regret results characterizing the performance of the proposed algorithms. | computer science |
40,570 | Considering users' behaviours in improving the responses of an
information base | cs.LG | In this paper, our aim is to propose a model that helps in the efficient use
of an information system by users, within the organization represented by the
IS, in order to resolve their decisional problems. In other words we want to
aid the user within an organization in obtaining the information that
corresponds to his needs (informational needs that result from his decisional
problems). We refer to this type of information system as an economic
intelligence (EI) system because it supports the economic intelligence
processes of the organisation. Our assumption is that every EI process begins
with the identification of a decisional problem, which is translated into an
informational need. This need is then translated into one or more information
search problems (ISP). We also assume that an ISP is expressed in terms of the
user's expectations, and that these expectations determine the activities or
behaviours of the user when he or she uses an IS. The proposed model is used in
the design of the IS so that the process of retrieving solutions, i.e. the
responses the system gives to an ISP, is based on these behaviours and
corresponds to the needs of the user. | computer science |
40,571 | Using state space differential geometry for nonlinear blind source
separation | cs.LG | Given a time series of multicomponent measurements of an evolving stimulus,
nonlinear blind source separation (BSS) seeks to find a "source" time series,
comprised of statistically independent combinations of the measured components.
In this paper, we seek a source time series with local velocity cross
correlations that vanish everywhere in stimulus state space. However, in an
earlier paper the local velocity correlation matrix was shown to constitute a
metric on state space. Therefore, nonlinear BSS maps onto a problem of
differential geometry: given the metric observed in the measurement coordinate
system, find another coordinate system in which the metric is diagonal
everywhere. We show how to determine if the observed data are separable in this
way, and, if they are, we show how to construct the required transformation to
the source coordinate system, which is essentially unique except for an unknown
rotation that can be found by applying the methods of linear BSS. Thus, the
proposed technique solves nonlinear BSS in many situations or, at least,
reduces it to linear BSS, without the use of probabilistic, parametric, or
iterative procedures. This paper also describes a generalization of this
methodology that performs nonlinear independent subspace separation. In every
case, the resulting decomposition of the observed data is an intrinsic property
of the stimulus' evolution in the sense that it does not depend on the way the
observer chooses to view it (e.g., the choice of the observing machine's
sensors). In other words, the decomposition is a property of the evolution of
the "real" stimulus that is "out there" broadcasting energy to the observer.
The technique is illustrated with analytic and numerical examples. | computer science |
40,572 | Statistical Mechanics of On-line Learning when a Moving Teacher Goes
around an Unlearnable True Teacher | cs.LG | In the framework of on-line learning, a learning machine might move around a
teacher due to the differences in structures or output functions between the
teacher and the learning machine. In this paper we analyze the generalization
performance of a new student supervised by a moving machine. A model composed
of a fixed true teacher, a moving teacher, and a student is treated
theoretically using statistical mechanics, where the true teacher is a
nonmonotonic perceptron and the others are simple perceptrons. Calculating the
generalization errors numerically, we show that the generalization error of a
student can temporarily become smaller than that of a moving teacher, even if
the student only uses examples from the moving teacher. However, the
generalization error of the student eventually converges to that of the moving
teacher. This behavior is qualitatively different from that of a
linear model. | computer science |
40,573 | Privacy Preserving ID3 over Horizontally, Vertically and Grid
Partitioned Data | cs.DB | We consider privacy preserving decision tree induction via ID3 in the case
where the training data is horizontally or vertically distributed. Furthermore,
we consider the same problem in the case where the data is both horizontally
and vertically distributed, a situation we refer to as grid partitioned data.
We give an algorithm for privacy preserving ID3 over horizontally partitioned
data involving more than two parties. For grid partitioned data, we discuss
two different evaluation methods for privacy preserving ID3, namely, first
merging horizontally and then developing vertically, or first merging
vertically and then developing horizontally. Besides introducing privacy
preserving data mining over grid-partitioned data, the main contribution of
this paper is that we show, by means of a complexity analysis, that the former
evaluation method is the more efficient. | computer science |
40,574 | Approximation Algorithms for Bregman Co-clustering and Tensor Clustering | cs.DS | In the past few years powerful generalizations to the Euclidean k-means
problem have been made, such as Bregman clustering [7], co-clustering (i.e.,
simultaneous clustering of rows and columns of an input matrix) [9,18], and
tensor clustering [8,34]. Like k-means, these more general problems also suffer
from the NP-hardness of the associated optimization. Researchers have developed
approximation algorithms of varying degrees of sophistication for k-means,
k-medians, and more recently also for Bregman clustering [2]. However, there
seem to be no approximation algorithms for Bregman co- and tensor clustering.
In this paper we derive the first (to our knowledge) guaranteed methods for
these increasingly important clustering settings. Going beyond Bregman
divergences, we also prove an approximation factor for tensor clustering with
arbitrary separable metrics. Through extensive experiments we evaluate the
characteristics of our method, and show that it also has practical impact. | computer science |
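A useful fact behind Bregman clustering (due to Banerjee et al.) is that for any Bregman divergence the optimal cluster representative is the plain arithmetic mean, so only the assignment step differs from Euclidean k-means. A hedged sketch with the generalized KL divergence; the farthest-first initialization and toy data are ours, not from the paper, and this is the basic Bregman hard-clustering loop rather than the co-/tensor-clustering approximation algorithms the abstract develops:

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """Generalized KL divergence, a canonical Bregman divergence on R_+^d."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q) - p + q))

def bregman_kmeans(X, k, div, iters=20):
    # farthest-first initialization keeps this toy example deterministic
    centers = [X[0]]
    for _ in range(k - 1):
        dists = [min(div(x, c) for c in centers) for x in X]
        centers.append(X[int(np.argmax(dists))])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.array([int(np.argmin([div(x, c) for c in centers])) for x in X])
        for j in range(k):
            if np.any(labels == j):
                # the mean is the optimal representative for ANY Bregman divergence
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

rng = np.random.default_rng(0)
X = np.vstack([rng.gamma(2.0, 1.0, (20, 3)) + [0, 8, 0],
               rng.gamma(2.0, 1.0, (20, 3)) + [8, 0, 0]])
labels, _ = bregman_kmeans(X, 2, kl)
# each block forms a single cluster, and the two blocks differ
print(len(set(labels[:20])), len(set(labels[20:])), bool(labels[0] != labels[20]))
```

Swapping `kl` for squared Euclidean distance recovers ordinary k-means, which is exactly the generalization the abstract builds on.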
40,575 | Decision trees are PAC-learnable from most product distributions: a
smoothed analysis | cs.LG | We consider the problem of PAC-learning decision trees, i.e., learning a
decision tree over the n-dimensional hypercube from independent random labeled
examples. Despite significant effort, no polynomial-time algorithm is known for
learning polynomial-sized decision trees (even trees of any super-constant
size), even when examples are assumed to be drawn from the uniform distribution
on {0,1}^n. We give an algorithm that learns arbitrary polynomial-sized
decision trees for {\em most product distributions}. In particular, consider a
random product distribution where the bias of each bit is chosen independently
and uniformly from, say, [.49,.51]. Then with high probability over the
parameters of the product distribution and the random examples drawn from it,
the algorithm will learn any tree. More generally, in the spirit of smoothed
analysis, we consider an arbitrary product distribution whose parameters are
specified only up to a [-c,c] accuracy (perturbation), for an arbitrarily small
positive constant c. | computer science |
40,576 | Uncovering protein interaction in abstracts and text using a novel
linear model and word proximity networks | cs.IR | We participated in three of the protein-protein interaction subtasks of the
Second BioCreative Challenge: classification of abstracts relevant for
protein-protein interaction (IAS), discovery of protein pairs (IPS) and text
passages characterizing protein interaction (ISS) in full text documents. We
approached the abstract classification task with a novel, lightweight linear
model inspired by spam-detection techniques, as well as an uncertainty-based
integration scheme. We also used a Support Vector Machine and the Singular
Value Decomposition on the same features for comparison purposes. Our approach
to the full text subtasks (protein pair and passage identification) includes a
feature expansion method based on word-proximity networks. Our approach to the
abstract classification task (IAS) was among the top submissions for this task
in terms of the measures of performance used in the challenge evaluation
(accuracy, F-score and AUC). We also report on a web-tool we produced using our
approach: the Protein Interaction Abstract Relevance Evaluator (PIARE). Our
approach to the full text tasks resulted in one of the highest recall rates as
well as mean reciprocal rank of correct passages. Our approach to abstract
classification shows that a simple linear model, using relatively few features,
is capable of generalizing and uncovering the conceptual nature of
protein-protein interaction from the bibliome. Since the novel approach is
based on a very lightweight linear model, it can be easily ported and applied
to similar problems. In full text problems, the expansion of word features with
word-proximity networks is shown to be useful, though the need for some
improvements is discussed. | computer science |
40,577 | Decomposition Principles and Online Learning in Cross-Layer Optimization
for Delay-Sensitive Applications | cs.MM | In this paper, we propose a general cross-layer optimization framework in
which we explicitly consider both the heterogeneous and dynamically changing
characteristics of delay-sensitive applications and the underlying time-varying
network conditions. We consider both the independently decodable data units
(DUs, e.g. packets) and the interdependent DUs whose dependencies are captured
by a directed acyclic graph (DAG). We first formulate the cross-layer design as
a non-linear constrained optimization problem by assuming complete knowledge of
the application characteristics and the underlying network conditions. The
constrained cross-layer optimization is decomposed into several cross-layer
optimization subproblems for each DU and two master problems. The proposed
decomposition method determines the necessary message exchanges between layers
for achieving the optimal cross-layer solution. However, the attributes (e.g.
distortion impact, delay deadline etc) of future DUs as well as the network
conditions are often unknown in the considered real-time applications. The
impact of current cross-layer actions on the future DUs can be characterized by
a state-value function in the Markov decision process (MDP) framework. Based on
the dynamic programming solution to the MDP, we develop a low-complexity
cross-layer optimization algorithm using online learning for each DU
transmission. This online algorithm can be implemented in real-time in order to
cope with unknown source characteristics, network dynamics and resource
constraints. Our numerical results demonstrate the efficiency of the proposed
online algorithm. | computer science |
40,578 | Comparison of Binary Classification Based on Signed Distance Functions
with Support Vector Machines | cs.LG | We investigate the performance of a simple signed distance function (SDF)
based method by direct comparison with standard SVM packages, as well as
K-nearest neighbor and RBFN methods. We present experimental results comparing
the SDF approach with other classifiers on both synthetic geometric problems
and five benchmark clinical microarray data sets. On both geometric problems
and microarray data sets, the non-optimized SDF based classifiers perform just
as well or slightly better than well-developed, standard SVM methods. These
results demonstrate the potential accuracy of SDF-based methods on some types
of problems. | computer science |
40,579 | Quantum Predictive Learning and Communication Complexity with Single
Input | cs.LG | We define a new model of quantum learning that we call Predictive Quantum
(PQ). This is a quantum analogue of PAC, where during the testing phase the
student is only required to answer a polynomial number of testing queries.
We demonstrate a relational concept class that is efficiently learnable in
PQ, while in any "reasonable" classical model an exponential amount of
training data would be required. This is the first unconditional separation between
quantum and classical learning.
We show that our separation is the best possible in several ways; in
particular, there is no analogous result for a functional class, as well as for
several weaker versions of quantum learning. In order to demonstrate tightness
of our separation we consider a special case of one-way communication that we
call single-input mode, where Bob receives no input. Somewhat surprisingly,
this setting becomes nontrivial when relational communication tasks are
considered. In particular, any problem with two-sided input can be transformed
into a single-input relational problem of equal classical one-way cost. We show
that the situation is different in the quantum case, where the same
transformation can make the communication complexity exponentially larger. This
happens if and only if the original problem has exponential gap between quantum
and classical one-way communication costs. We believe that these auxiliary
results might be of independent interest. | computer science |
40,580 | A New Local Distance-Based Outlier Detection Approach for Scattered
Real-World Data | cs.LG | Detecting outliers which are grossly different from or inconsistent with the
remaining dataset is a major challenge in real-world KDD applications. Existing
outlier detection methods are ineffective on scattered real-world datasets due
to implicit data patterns and parameter setting issues. We define a novel
"Local Distance-based Outlier Factor" (LDOF) to measure the outlier-ness of
objects in scattered datasets which addresses these issues. LDOF uses the
relative location of an object to its neighbours to determine the degree to
which the object deviates from its neighbourhood. Properties of LDOF are
theoretically analysed including LDOF's lower bound and its false-detection
probability, as well as parameter settings. In order to facilitate parameter
settings in real-world applications, we employ a top-n technique in our outlier
detection approach, where only the objects with the highest LDOF values are
regarded as outliers. Compared to conventional approaches (such as top-n KNN
and top-n LOF), our method top-n LDOF is more effective at detecting outliers
in scattered data. It is also easier to set parameters, since its performance
is relatively stable over a large range of parameter values, as illustrated by
experimental results on both real-world and synthetic datasets. | computer science |
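LDOF, as defined in the paper, is the ratio of an object's average distance to its k nearest neighbours over the average pairwise distance among those neighbours; values well above 1 flag objects that sit outside their own neighbourhood. A small sketch (the toy data and choice of k are ours):

```python
import numpy as np

def ldof(X, k):
    """Local Distance-based Outlier Factor for each row of X.

    LDOF(x) = (mean distance from x to its k nearest neighbours)
            / (mean pairwise distance among those k neighbours)
    """
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    scores = np.empty(n)
    for i in range(n):
        nn = np.argsort(D[i])[1:k + 1]                    # k-NN, self excluded
        d_bar = D[i, nn].mean()                           # kNN distance of x
        D_bar = D[np.ix_(nn, nn)].sum() / (k * (k - 1))   # inner distance of the neighbourhood
        scores[i] = d_bar / D_bar
    return scores

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(50, 2)), [[8.0, 8.0]]])  # one planted outlier
scores = ldof(X, k=10)
print(int(scores.argmax()))  # the planted outlier (row 50) gets the top score
```

The top-n variant in the paper simply reports the n objects with the largest LDOF values instead of thresholding the score.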
40,581 | Optimal Policies Search for Sensor Management | cs.LG | This paper introduces a new approach to solve sensor management problems.
Classically, sensor management problems are well formalized as
Partially Observable Markov Decision Processes (POMDPs). The original approach
developed here consists in deriving the optimal parameterized policy based on
stochastic gradient estimation. We assume in this work that it is possible to
learn the optimal policy off-line (in simulation) using models of the
environment and of the sensor(s). The learned policy can then be used to
manage the sensor(s). To approximate the gradient in a stochastic context, we
introduce a new method based on Infinitesimal Perturbation Analysis (IPA). The
effectiveness of this general framework is illustrated by the management of an
electronically scanned array radar. First simulation results are presented. | computer science |
40,582 | Graph polynomials and approximation of partition functions with Loopy
Belief Propagation | cs.DM | The Bethe approximation, or loopy belief propagation algorithm is a
successful method for approximating partition functions of probabilistic models
associated with a graph. Chertkov and Chernyak derived an interesting formula
called Loop Series Expansion, which is an expansion of the partition function.
The main term of the series is the Bethe approximation while other terms are
labeled by subgraphs called generalized loops. In our recent paper, we derived
the loop series expansion in the form of a polynomial with positive integer
coefficients, and extended the result to the expansion of marginals. In this
paper, we give a clearer derivation of these results and discuss the
properties of the polynomial introduced there. | computer science |
40,583 | Bayesian Forecasting of WWW Traffic on the Time Varying Poisson Model | cs.NI | Traffic forecasting from past observed traffic data with small computational
complexity is one of the important problems in the planning of servers and
networks. Focusing on World Wide Web (WWW) traffic as a fundamental
investigation, this paper deals with Bayesian forecasting of network traffic
under the time-varying Poisson model from the viewpoint of statistical
decision theory. Under this model, we show that the forecast value is obtained
by a simple arithmetic calculation and describes real WWW traffic well, from
both theoretical and empirical points of view. | computer science |
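One estimator consistent with the abstract's "simple arithmetic calculation" claim: under a time-varying Poisson model with a forgetting factor, a Bayesian predictive mean of the rate reduces to an exponentially discounted average of past counts. This is a hedged sketch of that kind of forecast, not the paper's exact estimator; the discount value is illustrative.

```python
def forecast_rate(counts, discount=0.98):
    """Exponentially discounted average of past request counts.

    Returns an estimate of the current Poisson rate, weighting recent
    observations more heavily so the forecast tracks a drifting rate.
    """
    num = den = 0.0
    for c in counts:
        num = discount * num + c
        den = discount * den + 1.0
    return num / den

steady = forecast_rate([10] * 200)          # stationary traffic
rising = forecast_rate(list(range(200)))    # rising traffic
print(round(steady, 3), rising > 99.5)      # → 10.0 True
```

On stationary traffic the discounted average equals the true rate, while on rising traffic it exceeds the plain mean (99.5 here) because recent, larger counts dominate.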
40,584 | Rough Set Model for Discovering Hybrid Association Rules | cs.DB | In this paper, the mining of hybrid association rules with a rough set
approach is investigated via the algorithm RSHAR. The RSHAR algorithm consists
of two main steps. First, the participating tables are joined into a general
table in order to generate rules expressing the relationship between two or
more domains that belong to several different tables in a database. Then a
mapping code is applied to the selected dimension, which can be added directly
into the information system as an attribute. To find the association rules,
frequent itemsets are generated in the second step, where candidate itemsets
are generated through equivalence classes and the mapping code is transformed
back into real dimensions. The search method for candidate itemsets is similar
to the Apriori algorithm. An analysis of the performance of the algorithm has
been carried out. | computer science |
40,585 | Learning with Spectral Kernels and Heavy-Tailed Data | cs.LG | Two ubiquitous aspects of large-scale data analysis are that the data often
have heavy-tailed properties and that diffusion-based or spectral-based methods
are often used to identify and extract structure of interest. Perhaps
surprisingly, popular distribution-independent methods such as those based on
the VC dimension fail to provide nontrivial results for even simple learning
problems such as binary classification in these two settings. In this paper, we
develop distribution-dependent learning methods that can be used to provide
dimension-independent sample complexity bounds for the binary classification
problem in these two popular settings. In particular, we provide bounds on the
sample complexity of maximum margin classifiers when the magnitude of the
entries in the feature vector decays according to a power law and also when
learning is performed with the so-called Diffusion Maps kernel. Both of these
results rely on bounding the annealed entropy of gap-tolerant classifiers in a
Hilbert space. We provide such a bound, and we demonstrate that our proof
technique generalizes to the case when the margin is measured with respect to
more general Banach space norms. The latter result is of potential interest in
cases where modeling the relationship between data elements as a dot product in
a Hilbert space is too restrictive. | computer science |
40,586 | Statistical Analysis of Privacy and Anonymity Guarantees in Randomized
Security Protocol Implementations | cs.CR | Security protocols often use randomization to achieve probabilistic
non-determinism. This non-determinism, in turn, is used in obfuscating the
dependence of observable values on secret data. Since the correctness of
security protocols is very important, formal analysis of security protocols has
been widely studied in literature. Randomized security protocols have also been
analyzed using formal techniques such as process-calculi and probabilistic
model checking. In this paper, we consider the problem of validating
implementations of randomized protocols. Unlike previous approaches which treat
the protocol as a white-box, our approach tries to verify an implementation
provided as a black box. Our goal is to infer the secrecy guarantees provided
by a security protocol through statistical techniques. We learn the
probabilistic dependency of the observable outputs on secret inputs using
Bayesian network. This is then used to approximate the leakage of secret. In
order to evaluate the accuracy of our statistical approach, we compare our
technique with the probabilistic model checking technique on two examples:
the crowds protocol and the dining cryptographers protocol. | computer science |
40,587 | Online Reinforcement Learning for Dynamic Multimedia Systems | cs.LG | In our previous work, we proposed a systematic cross-layer framework for
dynamic multimedia systems, which allows each layer to make autonomous and
foresighted decisions that maximize the system's long-term performance, while
meeting the application's real-time delay constraints. The proposed solution
solved the cross-layer optimization offline, under the assumption that the
multimedia system's probabilistic dynamics were known a priori. In practice,
however, these dynamics are unknown a priori and therefore must be learned
online. In this paper, we address this problem by allowing the multimedia
system layers to learn, through repeated interactions with each other, to
autonomously optimize the system's long-term performance at run-time. We
propose two reinforcement learning algorithms for optimizing the system under
different design constraints: the first algorithm solves the cross-layer
optimization in a centralized manner, and the second solves it in a
decentralized manner. We analyze both algorithms in terms of their required
computation, memory, and inter-layer communication overheads. After noting that
the proposed reinforcement learning algorithms learn too slowly, we introduce a
complementary accelerated learning algorithm that exploits partial knowledge
about the system's dynamics in order to dramatically improve the system's
performance. In our experiments, we demonstrate that decentralized learning can
perform as well as centralized learning, while enabling the layers to act
autonomously. Additionally, we show that existing application-independent
reinforcement learning algorithms, and existing myopic learning algorithms
deployed in multimedia systems, perform significantly worse than our proposed
application-aware and foresighted learning methods. | computer science |
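The cross-layer optimization described in this abstract is, at its core, reinforcement learning over a Markov decision process. As a hedged illustration only (not the paper's centralized or decentralized algorithm, and with a hypothetical two-state toy system), a generic tabular Q-learning loop looks like:

```python
import random

def q_learning(n_states, n_actions, step, n_steps=2000,
               alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Generic tabular Q-learning; `step(s, a)` returns (next_state, reward)."""
    rng = random.Random(seed)
    q = [[0.0] * n_actions for _ in range(n_states)]
    s = 0
    for _ in range(n_steps):
        if rng.random() < eps:  # epsilon-greedy exploration
            a = rng.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda a_: q[s][a_])
        s2, r = step(s, a)
        # standard Q-learning update toward the bootstrapped target
        q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
        s = s2
    return q

# Hypothetical two-state toy system: action 1 moves to state 1, which pays 1.
def toy_step(s, a):
    s2 = 1 if a == 1 else 0
    return s2, float(s2)

Q = q_learning(2, 2, toy_step)
policy = [max(range(2), key=lambda a: Q[s][a]) for s in range(2)]
```

The learned greedy policy selects action 1 in both states; the paper's accelerated variant would additionally exploit partial knowledge of the transition dynamics.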
40,588 | Learning Gaussian Mixtures with Arbitrary Separation | cs.LG | In this paper we present a method for learning the parameters of a mixture of
$k$ identical spherical Gaussians in $n$-dimensional space with an arbitrarily
small separation between the components. Our algorithm is polynomial in all
parameters other than $k$. The algorithm is based on an appropriate grid search
over the space of parameters. The theoretical analysis of the algorithm hinges
on a reduction of the problem to 1 dimension and showing that two 1-dimensional
mixtures whose densities are close in the $L^2$ norm must have similar means
and mixing coefficients. To produce such a lower bound for the $L^2$ norm in
terms of the distances between the corresponding means, we analyze the behavior
of the Fourier transform of a mixture of Gaussians in 1 dimension around the
origin, which turns out to be closely related to the properties of the
Vandermonde matrix obtained from the component means. Analysis of this matrix
together with basic function approximation results allows us to provide a lower
bound for the norm of the mixture in the Fourier domain.
In recent years much research has been aimed at understanding the
computational aspects of learning parameters of Gaussian mixture distributions
in high dimension. To the best of our knowledge all existing work on learning
parameters of Gaussian mixtures assumes minimum separation between components
of the mixture which is an increasing function of either the dimension of the
space $n$ or the number of components $k$. In our paper we prove the first
result showing that the parameters of an $n$-dimensional Gaussian mixture model with
arbitrarily small component separation can be learned in time polynomial in
$n$. | computer science |
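The "appropriate grid search over the space of parameters" can be sketched in the 1-D case the analysis reduces to. This is a hedged toy illustration (equal weights, known unit variance, a coarse hypothetical grid), not the paper's polynomial-time procedure:

```python
import math
import random
from itertools import combinations_with_replacement

def mixture_pdf(x, mus, sigma=1.0):
    """Equal-weight mixture of 1-D Gaussians with common scale sigma."""
    return sum(math.exp(-(x - m) ** 2 / (2 * sigma ** 2)) for m in mus) \
        / (len(mus) * sigma * math.sqrt(2 * math.pi))

def grid_search_means(samples, k=2, sigma=1.0, lo=-4.0, hi=4.0, step=0.5):
    """Pick the k-tuple of grid means maximizing the sample log-likelihood."""
    grid = [lo + i * step for i in range(int((hi - lo) / step) + 1)]
    best, best_ll = None, -float("inf")
    for mus in combinations_with_replacement(grid, k):
        ll = sum(math.log(mixture_pdf(x, mus, sigma)) for x in samples)
        if ll > best_ll:
            best, best_ll = mus, ll
    return best

rng = random.Random(0)
data = ([rng.gauss(-2.0, 1.0) for _ in range(300)]
        + [rng.gauss(2.0, 1.0) for _ in range(300)])
mus = grid_search_means(data)
```

On well-separated data this recovers the means up to the grid resolution; the point of the paper is precisely that arbitrarily small separation can also be handled, which this sketch does not attempt.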
40,589 | Learning Equilibria in Games by Stochastic Distributed Algorithms | cs.GT | We consider a class of fully stochastic and fully distributed algorithms,
that we prove to learn equilibria in games.
Indeed, we consider a family of stochastic distributed dynamics that we prove
to converge weakly (in the sense of weak convergence for probabilistic
processes) towards their mean-field limit, i.e. an ordinary differential
equation (ODE) in the general case. We focus then on a class of stochastic
dynamics where this ODE turns out to be related to multipopulation replicator
dynamics.
Using facts known about convergence of this ODE, we discuss the convergence
of the initial stochastic dynamics: For general games, there might be
non-convergence, but when convergence of the ODE holds, considered stochastic
algorithms converge towards Nash equilibria. For games admitting Lyapunov
functions, that we call Lyapunov games, the stochastic dynamics converge. We
prove that any ordinal potential game, and hence any potential game is a
Lyapunov game, with a multiaffine Lyapunov function. For Lyapunov games with a
multiaffine Lyapunov function, we prove that this Lyapunov function is a
super-martingale over the stochastic dynamics. This yields a way to provide
bounds on their time of convergence by martingale arguments. This applies in
particular for many classes of games that have been considered in literature,
including several load balancing game scenarios and congestion games. | computer science |
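The mean-field ODE the stochastic dynamics converge to is a replicator dynamic. As a hedged sketch (single population, Euler discretization, a hypothetical 2x2 coordination game, which is a potential game), one can integrate it as follows:

```python
def replicator_step(x, payoff, dt=0.01):
    """One Euler step of replicator dynamics: x_i' = x_i((Ax)_i - x.Ax)."""
    n = len(x)
    ax = [sum(payoff[i][j] * x[j] for j in range(n)) for i in range(n)]
    avg = sum(x[i] * ax[i] for i in range(n))
    return [x[i] + dt * x[i] * (ax[i] - avg) for i in range(n)]

# Coordination game (a potential game): both pure profiles are Nash equilibria,
# but strategy 0 pays more; starting from x0 = 0.6 the flow selects it.
A = [[2.0, 0.0],
     [0.0, 1.0]]
x = [0.6, 0.4]
for _ in range(5000):
    x = replicator_step(x, A)
```

The trajectory converges to the pure Nash equilibrium (1, 0), and the Euler step preserves the simplex exactly since the increments sum to zero.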
40,590 | Network-aware Adaptation with Real-Time Channel Statistics for Wireless
LAN Multimedia Transmissions in the Digital Home | cs.NI | This paper suggests the use of intelligent network-aware processing agents in
wireless local area network drivers to generate metrics for bandwidth
estimation based on real-time channel statistics to enable wireless multimedia
application adaptation. Various configurations in the wireless digital home are
studied and the experimental results with performance variations are presented. | computer science |
40,591 | Contextual Bandits with Similarity Information | cs.DS | In a multi-armed bandit (MAB) problem, an online algorithm makes a sequence
of choices. In each round it chooses from a time-invariant set of alternatives
and receives the payoff associated with this alternative. While the case of
small strategy sets is by now well-understood, a lot of recent work has focused
on MAB problems with exponentially or infinitely large strategy sets, where one
needs to assume extra structure in order to make the problem tractable. In
particular, recent literature considered information on similarity between
arms.
We consider similarity information in the setting of "contextual bandits", a
natural extension of the basic MAB problem where before each round an algorithm
is given the "context" -- a hint about the payoffs in this round. Contextual
bandits are directly motivated by placing advertisements on webpages, one of
the crucial problems in sponsored search. A particularly simple way to
represent similarity information in the contextual bandit setting is via a
"similarity distance" between the context-arm pairs which gives an upper bound
on the difference between the respective expected payoffs.
Prior work on contextual bandits with similarity uses "uniform" partitions of
the similarity space, which is potentially wasteful. We design more efficient
algorithms that are based on adaptive partitions adjusted to "popular" context
and "high-payoff" arms. | computer science |
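The "uniform partition" baseline that this paper improves upon can be sketched concretely: discretize the context space into equal bins and run an independent UCB1 instance per bin. Everything below (bin count, toy payoff function) is a hypothetical illustration, not the paper's adaptive-partition algorithm:

```python
import math
import random

def uniform_partition_ucb(payoff, n_arms, n_rounds=5000, bins=10, seed=1):
    """UCB1 run independently in each bin of a uniform partition of [0,1]."""
    rng = random.Random(seed)
    counts = [[0] * n_arms for _ in range(bins)]
    sums = [[0.0] * n_arms for _ in range(bins)]
    total = 0.0
    for t in range(1, n_rounds + 1):
        ctx = rng.random()
        b = min(int(ctx * bins), bins - 1)
        if 0 in counts[b]:  # play each arm once per bin before using the index
            a = counts[b].index(0)
        else:
            a = max(range(n_arms), key=lambda i: sums[b][i] / counts[b][i]
                    + math.sqrt(2 * math.log(t) / counts[b][i]))
        r = payoff(ctx, a, rng)
        counts[b][a] += 1
        sums[b][a] += r
        total += r
    return total / n_rounds

# Toy problem: arm 0 is best for contexts below 0.5, arm 1 otherwise.
def toy_payoff(ctx, a, rng):
    best = 0 if ctx < 0.5 else 1
    p = 0.9 if a == best else 0.1
    return 1.0 if rng.random() < p else 0.0

avg = uniform_partition_ucb(toy_payoff, 2)
```

The average reward approaches the optimum of 0.9 minus the exploration cost paid separately in every bin; the adaptive partitions of the paper avoid wasting that cost on unpopular or low-payoff regions.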
40,592 | Prediction of Zoonosis Incidence in Human using Seasonal Auto Regressive
Integrated Moving Average (SARIMA) | cs.LG | Zoonosis refers to the transmission of infectious diseases from animal to
human. The increasing number of zoonosis incidents causes great losses of life, both human and animal, as well as socio-economic damage. This motivates the development of a system that can predict the future number of
zoonosis occurrences in humans. This paper analyses and presents the use of the Seasonal Autoregressive Integrated Moving Average (SARIMA) method for developing a forecasting model able to predict the number of human zoonosis incidences. The dataset for model development was
a time series of human tuberculosis occurrences in the United States, comprising fourteen years of monthly data obtained from a study published by the Centers for Disease Control and Prevention (CDC). Several trial
models of SARIMA were compared to obtain the most appropriate model. Then,
diagnostic tests were used to determine model validity. The result showed that
the SARIMA(9,0,14)(12,1,24)12 is the best-fitting model. In terms of accuracy, the selected model achieved a Theil's U value of 0.062, implying that the model is highly accurate and a close fit, and indicating the capability of the final model to closely represent, and make predictions from, the historical tuberculosis dataset. | computer science |
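The seasonal part of a SARIMA model can be illustrated in miniature. This hedged sketch shows only period-12 seasonal differencing followed by an AR(1) fit on a hypothetical synthetic series, nothing like the paper's full SARIMA(9,0,14)(12,1,24)12 model; in practice one would fit such a model with a statistics package rather than by hand:

```python
import math

def seasonal_difference(y, s=12):
    """Remove period-s seasonality: d_t = y_t - y_{t-s}."""
    return [y[t] - y[t - s] for t in range(s, len(y))]

def fit_ar1(d):
    """Least-squares AR(1) coefficient for the differenced series."""
    num = sum(d[t] * d[t - 1] for t in range(1, len(d)))
    den = sum(d[t - 1] ** 2 for t in range(1, len(d)))
    return num / den

def forecast_next(y, phi, s=12):
    """One-step forecast: seasonal-naive term plus AR(1) on the differences."""
    d_last = y[-1] - y[-1 - s]
    return y[-s] + phi * d_last

# Hypothetical monthly series: a seasonal cycle plus a slow linear trend.
y = [10 + 3 * math.sin(2 * math.pi * t / 12) + 0.05 * t for t in range(120)]
phi = fit_ar1(seasonal_difference(y))
pred = forecast_next(y, phi)
truth = 10 + 3 * math.sin(2 * math.pi * 120 / 12) + 0.05 * 120
```

Because the toy series is exactly seasonal-plus-trend, the differenced series is constant, the AR(1) coefficient is 1, and the one-step forecast matches the true next value.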
40,593 | Low-rank Matrix Completion with Noisy Observations: a Quantitative
Comparison | cs.LG | We consider a problem of significant practical importance, namely, the
reconstruction of a low-rank data matrix from a small subset of its entries.
This problem appears in many areas such as collaborative filtering, computer
vision and wireless sensor networks. In this paper, we focus on the matrix
completion problem in the case when the observed samples are corrupted by
noise. We compare the performance of three state-of-the-art matrix completion
algorithms (OptSpace, ADMiRA and FPCA) on a single simulation platform and
present numerical results. We show that in practice these efficient algorithms
can be used to reconstruct real data matrices, as well as randomly generated
matrices, accurately. | computer science |
40,594 | Strategies for online inference of model-based clustering in large and
growing networks | stat.AP | In this paper we adapt online estimation strategies to perform model-based
clustering on large networks. Our work focuses on two algorithms, the first
based on the SAEM algorithm, and the second on variational methods. These two
strategies are compared with existing approaches on simulated and real data. We
use the method to decipher the connection structure of the political websphere
during the US political campaign in 2008. We show that our online EM-based
algorithms offer a good trade-off between precision and speed, when estimating
parameters for mixture distributions in the context of random graphs. | computer science |
40,595 | On Learning Finite-State Quantum Sources | cs.LG | We examine the complexity of learning the distributions produced by
finite-state quantum sources. We show how prior techniques for learning hidden
Markov models can be adapted to the quantum generator model to find that the
analogous state of affairs holds: information-theoretically, a polynomial
number of samples suffice to approximately identify the distribution, but
computationally, the problem is as hard as learning parities with noise, a
notorious open question in computational learning theory. | computer science |
40,596 | A Gradient Descent Algorithm on the Grassman Manifold for Matrix
Completion | cs.NA | We consider the problem of reconstructing a low-rank matrix from a small
subset of its entries. In this paper, we describe the implementation of an
efficient algorithm called OptSpace, based on singular value decomposition
followed by local manifold optimization, for solving the low-rank matrix
completion problem. It has been shown that if the number of revealed entries is
large enough, the output of singular value decomposition gives a good estimate
for the original matrix, so that local optimization reconstructs the correct
matrix with high probability. We present numerical results which show that this
algorithm can reconstruct the low rank matrix exactly from a very small subset
of its entries. We further study the robustness of the algorithm with respect
to noise, and its performance on actual collaborative filtering datasets. | computer science |
40,597 | Multi-path Probabilistic Available Bandwidth Estimation through Bayesian
Active Learning | cs.NI | Knowing the largest rate at which data can be sent on an end-to-end path such
that the egress rate is equal to the ingress rate with high probability can be
very practical when choosing transmission rates in video streaming or selecting
peers in peer-to-peer applications. We introduce probabilistic available
bandwidth, which is defined in terms of ingress rates and egress rates of
traffic on a path, rather than in terms of capacity and utilization of the
constituent links of the path like the standard available bandwidth metric. In
this paper, we describe a distributed algorithm, based on a probabilistic
graphical model and Bayesian active learning, for simultaneously estimating the
probabilistic available bandwidth of multiple paths through a network. Our
procedure exploits the fact that each packet train provides information not
only about the path it traverses, but also about any path that shares a link
with the monitored path. Simulations and PlanetLab experiments indicate that
this process can dramatically reduce the number of probes required to generate
accurate estimates. | computer science |
40,598 | Online Learning in Opportunistic Spectrum Access: A Restless Bandit
Approach | math.OC | We consider an opportunistic spectrum access (OSA) problem where the
time-varying condition of each channel (e.g., as a result of random fading or
certain primary users' activities) is modeled as an arbitrary finite-state
Markov chain. At each instance of time, a (secondary) user probes a channel and
collects a certain reward as a function of the state of the channel (e.g., good
channel condition results in higher data rate for the user). Each channel has
potentially different state space and statistics, both unknown to the user, who
tries to learn which one is the best as it goes and maximizes its usage of the
best channel. The objective is to construct a good online learning algorithm so
as to minimize the difference between the user's performance in total rewards
and that of using the best channel (on average) had it known which one is the
best from a priori knowledge of the channel statistics (also known as the
regret). This is a classic exploration and exploitation problem and results
abound when the reward processes are assumed to be iid. Compared to prior work,
the biggest difference is that in our case the reward process is assumed to be
Markovian, of which iid is a special case. In addition, the reward processes
are restless in that the channel conditions will continue to evolve independent
of the user's actions. This leads to a restless bandit problem, for which there
exist few results on either algorithms or performance bounds in this
learning context to the best of our knowledge. In this paper we introduce an
algorithm that utilizes regenerative cycles of a Markov chain and computes a
sample-mean based index policy, and show that under mild conditions on the
state transition probabilities of the Markov chains this algorithm achieves
logarithmic regret uniformly over time, and that this regret bound is also
optimal. | computer science |
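The "sample-mean based index policy" idea can be sketched in its simplest form. This hedged toy uses plain UCB1 over channels with iid Bernoulli rewards and hypothetical success probabilities; the paper's actual algorithm builds the indices over regenerative cycles of the Markovian (restless) channels, which this sketch does not do:

```python
import math
import random

def ucb_channel_selection(means, n_rounds=10000, seed=2):
    """Sample-mean UCB index policy over channels with Bernoulli rewards."""
    rng = random.Random(seed)
    k = len(means)
    counts, sums, picks = [0] * k, [0.0] * k, [0] * k
    for t in range(1, n_rounds + 1):
        if t <= k:
            c = t - 1  # probe each channel once
        else:
            c = max(range(k), key=lambda i: sums[i] / counts[i]
                    + math.sqrt(2 * math.log(t) / counts[i]))
        r = 1.0 if rng.random() < means[c] else 0.0
        counts[c] += 1
        sums[c] += r
        picks[c] += 1
    return picks

picks = ucb_channel_selection([0.2, 0.5, 0.8])
```

The best channel is probed for the overwhelming majority of rounds, with only a logarithmic number of plays wasted on the others, which is the regret behavior the abstract refers to.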
40,599 | Converged Algorithms for Orthogonal Nonnegative Matrix Factorizations | cs.LG | This paper proposes uni-orthogonal and bi-orthogonal nonnegative matrix
factorization algorithms with robust convergence proofs. We design the
algorithms based on the work of Lee and Seung [1], and derive the converged
versions by utilizing ideas from the work of Lin [2]. The experimental results
confirm the theoretical guarantees of convergence. | computer science |
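The Lee and Seung multiplicative updates that this paper builds on can be sketched directly. This is the plain (non-orthogonal) baseline on a tiny hypothetical matrix, not the paper's uni- or bi-orthogonal variants or their converged versions:

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, k, iters=2000, eps=1e-9, seed=3):
    """Lee-Seung multiplicative updates for V ~ W H under the Frobenius loss."""
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(k)]
    for _ in range(iters):
        Wt = transpose(W)
        num, den = matmul(Wt, V), matmul(matmul(Wt, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)]
             for i in range(k)]
        Ht = transpose(H)
        num, den = matmul(V, Ht), matmul(W, matmul(H, Ht))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)]
             for i in range(m)]
    return W, H

# A rank-2 nonnegative matrix is recovered almost exactly.
V = [[1.0, 2.0, 0.0],
     [2.0, 4.0, 0.0],
     [0.0, 0.0, 3.0]]
W, H = nmf(V, 2)
R = matmul(W, H)
err = sum((V[i][j] - R[i][j]) ** 2 for i in range(3) for j in range(3))
```

The multiplicative form keeps all entries nonnegative by construction; the convergence subtleties addressed in the paper (following Lin) arise precisely because these raw updates only guarantee non-increasing loss, not convergence to a stationary point.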
40,600 | Resource-bounded Dimension in Computational Learning Theory | cs.CC | This paper focuses on the relation between computational learning theory and
resource-bounded dimension. We intend to establish close connections between
the learnability/nonlearnability of a concept class and its corresponding size
in terms of effective dimension, which will allow the use of powerful dimension
techniques in computational learning and vice versa, the import of learning
results into complexity via dimension. Firstly, we obtain a tight result on the
dimension of online mistake-bound learnable classes. Secondly, in relation with
PAC learning, we show that the polynomial-space dimension of PAC learnable
classes of concepts is zero. This provides a hypothesis on effective dimension
that implies the inherent unpredictability of concept classes (classes satisfying this property are not efficiently PAC learnable using any hypothesis class). Thirdly, in relation to the space dimension of classes that are
learnable by membership query algorithms, the main result proves that
polynomial-space dimension of concept classes learnable by a membership-query
algorithm is zero. | computer science |
40,601 | Efficient Minimization of Decomposable Submodular Functions | cs.LG | Many combinatorial problems arising in machine learning can be reduced to the
problem of minimizing a submodular function. Submodular functions are a natural
discrete analog of convex functions, and can be minimized in strongly
polynomial time. Unfortunately, state-of-the-art algorithms for general
submodular minimization are intractable for larger problems. In this paper, we
introduce a novel subclass of submodular minimization problems that we call
decomposable. Decomposable submodular functions are those that can be
represented as sums of concave functions applied to modular functions. We
develop an algorithm, SLG, that can efficiently minimize decomposable
submodular functions with tens of thousands of variables. Our algorithm
exploits recent results in smoothed convex minimization. We apply SLG to
synthetic benchmarks and a joint classification-and-segmentation task, and show
that it outperforms the state-of-the-art general purpose submodular
minimization algorithms by several orders of magnitude. | computer science |
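The structure of a decomposable submodular function (sums of concave functions applied to modular functions) is easy to show concretely. This hedged sketch uses hypothetical weight vectors, sqrt as the concave function, a modular cost term, and brute-force minimization over a 4-element ground set; the paper's SLG algorithm is what makes tens of thousands of variables feasible:

```python
import math
from itertools import combinations

def make_decomposable(weight_vectors, costs):
    """f(S) = sum_r sqrt(w_r(S)) - c(S): concave-of-modular terms minus a
    modular cost. Each sqrt(w_r(.)) term is submodular, so f is submodular."""
    def f(S):
        conc = sum(math.sqrt(sum(w[i] for i in S)) for w in weight_vectors)
        return conc - sum(costs[i] for i in S)
    return f

def minimize_brute_force(f, n):
    """Exact minimizer by enumerating all 2^n subsets (only viable for tiny n)."""
    best, best_val = set(), f(set())
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            v = f(set(S))
            if v < best_val:
                best, best_val = set(S), v
    return best, best_val

# Two hypothetical modular weight vectors over a 4-element ground set.
w1 = [1.0, 2.0, 0.5, 3.0]
w2 = [2.0, 0.5, 1.0, 1.0]
f = make_decomposable([w1, w2], costs=[2.0] * 4)
S, val = minimize_brute_force(f, 4)
```

Here the concave terms give diminishing returns while the modular cost rewards large sets, so the minimizer is the full ground set with value sqrt(6.5) + sqrt(4.5) - 8.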
40,602 | A Primal-Dual Convergence Analysis of Boosting | cs.LG | Boosting combines weak learners into a predictor with low empirical risk. Its
dual constructs a high entropy distribution upon which weak learners and
training labels are uncorrelated. This manuscript studies this primal-dual
relationship under a broad family of losses, including the exponential loss of
AdaBoost and the logistic loss, revealing:
- Weak learnability aids the whole loss family: for any {\epsilon}>0,
O(ln(1/{\epsilon})) iterations suffice to produce a predictor with empirical
risk {\epsilon}-close to the infimum;
- The circumstances granting the existence of an empirical risk minimizer may
be characterized in terms of the primal and dual problems, yielding a new proof
of the known rate O(ln(1/{\epsilon}));
- Arbitrary instances may be decomposed into the above two, granting rate
O(1/{\epsilon}), with a matching lower bound provided for the logistic loss. | computer science |
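The primal-dual picture described above (weak learners driving down exponential loss while the dual reweights toward hard examples) is visible in a minimal AdaBoost run. This is a hedged toy with exhaustive 1-D threshold stumps on hypothetical data, not the paper's analysis:

```python
import math

def adaboost_stumps(X, y, rounds=3):
    """AdaBoost with exhaustive threshold stumps on 1-D data; labels in {-1,+1}."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []  # (alpha, threshold, sign)
    thresholds = sorted(set(X))
    for _ in range(rounds):
        best = None
        for th in thresholds:
            for sgn in (1, -1):
                err = sum(w[i] for i in range(n)
                          if sgn * (1 if X[i] > th else -1) != y[i])
                if best is None or err < best[0]:
                    best = (err, th, sgn)
        err, th, sgn = best
        err = min(max(err, 1e-12), 1 - 1e-12)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, th, sgn))
        # dual view: reweight so mistakes gain mass and correct points lose it
        z = 0.0
        for i in range(n):
            h = sgn * (1 if X[i] > th else -1)
            w[i] *= math.exp(-alpha * y[i] * h)
            z += w[i]
        w = [wi / z for wi in w]
    def predict(x):
        s = sum(a * (sg * (1 if x > th else -1)) for a, th, sg in ensemble)
        return 1 if s >= 0 else -1
    return predict

# Interval concept no single stump can represent; three rounds suffice here.
X = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [-1, -1, 1, 1, -1, -1]
clf = adaboost_stumps(X, y)
acc = sum(clf(xi) == yi for xi, yi in zip(X, y)) / len(X)
```

Each round's weak learner has error strictly below 1/2 under the current weights, which is exactly the weak-learnability condition driving the O(ln(1/epsilon)) rate in the abstract.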
40,603 | File Transfer Application For Sharing Femto Access | cs.NI | In wireless access network optimization, today's main challenges reside in
traffic offload and in the improvement of both capacity and coverage networks.
The operators are interested in solving their localized coverage and capacity
problems in areas where the macro network signal is not able to serve the
demand for mobile data. Thus, the major issue for operators is to find the best
solution at reasonable expense. The femto cell seems to be the answer to this problem. In this work (This work is supported by the COMET project AWARE.
http://www.ftw.at/news/project-start-for-aware-ftw), we focus on the problem of
sharing femto access between a same mobile operator's customers. This problem
can be modeled as a game where service requesters customers (SRCs) and service
providers customers (SPCs) are the players.
This work addresses the sharing femto access problem considering only one SPC
using game theory tools. We consider that SRCs are static and have some similar
and regular connection behavior. We also note that the SPC and each SRC have software embedded respectively in the femto access point and the user equipment (UE).
After each connection requested by an SRC, its software learns the strategy that increases its gain, given that no information about the other SRCs' strategies is available. The following article presents a distributed learning
algorithm with incomplete information running in SRCs software. We will then
answer the following questions for a game with $N$ SRCs and one SPC: how many
connections are necessary for each SRC in order to learn the strategy
maximizing its gain? Does this algorithm converge to a stable state? If so, is this state a Nash equilibrium, and is there any way to shorten the duration of the learning process triggered by the SRCs' software? | computer science |
40,604 | Inference algorithms for pattern-based CRFs on sequence data | cs.LG | We consider Conditional Random Fields (CRFs) with pattern-based potentials
defined on a chain. In this model the energy of a string (labeling) $x_1...x_n$
is the sum of terms over intervals $[i,j]$ where each term is non-zero only if
the substring $x_i...x_j$ equals a prespecified pattern $\alpha$. Such CRFs can
be naturally applied to many sequence tagging problems.
We present efficient algorithms for the three standard inference tasks in a
CRF, namely computing (i) the partition function, (ii) marginals, and (iii)
computing the MAP. Their complexities are respectively $O(n L)$, $O(n L
\ell_{max})$ and $O(n L \min\{|D|,\log (\ell_{max}+1)\})$ where $L$ is the
combined length of input patterns, $\ell_{max}$ is the maximum length of a
pattern, and $D$ is the input alphabet. This improves on the previous
algorithms of (Ye et al., 2009) whose complexities are respectively $O(n L
|D|)$, $O(n |\Gamma| L^2 \ell_{max}^2)$ and $O(n L |D|)$, where $|\Gamma|$ is
the number of input patterns.
In addition, we give an efficient algorithm for sampling. Finally, we
consider the case of non-positive weights. (Komodakis & Paragios, 2009) gave an
$O(n L)$ algorithm for computing the MAP. We present a modification that has
the same worst-case complexity but can beat it in the best case. | computer science |
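For pattern potentials of length at most two, the MAP task described in this abstract collapses to standard Viterbi with the previous label as the DP state; the general case needs the more elaborate automaton-style DP the paper develops. A hedged sketch with hypothetical pattern weights:

```python
def map_labeling(n, alphabet, patterns):
    """MAP for a chain CRF whose pattern potentials have length <= 2,
    where the DP reduces to Viterbi with state = previous label."""
    unary = lambda c: patterns.get(c, 0.0)
    pairwise = lambda a, b: patterns.get(a + b, 0.0)
    dp = {c: unary(c) for c in alphabet}  # best score of a prefix ending in c
    back = []
    for _ in range(1, n):
        new_dp, bk = {}, {}
        for c in alphabet:
            prev = max(alphabet, key=lambda p: dp[p] + pairwise(p, c))
            new_dp[c] = dp[prev] + pairwise(prev, c) + unary(c)
            bk[c] = prev
        dp = new_dp
        back.append(bk)
    last = max(alphabet, key=lambda c: dp[c])
    out = [last]
    for bk in reversed(back):
        out.append(bk[out[-1]])
    return "".join(reversed(out)), dp[last]

# Hypothetical pattern weights: reward repeated 'a' pairs and single 'b' labels.
labeling, score = map_labeling(4, "ab", {"aa": 2.0, "b": 1.0, "ab": 0.5})
```

With these weights the three "aa" pairs in "aaaa" (score 6) beat "aaab" (score 5.5), so the MAP labeling is "aaaa".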
40,605 | Learning from Collective Intelligence in Groups | cs.SI | Collective intelligence, which aggregates the shared information from large
crowds, is often negatively impacted by unreliable information sources with low-quality data. This becomes a barrier to the effective use of collective
intelligence in a variety of applications. In order to address this issue, we
propose a probabilistic model to jointly assess the reliability of sources and
find the true data. We observe that different sources are often not independent
of each other. Instead, sources are prone to be mutually influenced, which
makes them dependent when sharing information with each other. High dependency
between sources makes collective intelligence vulnerable to the overuse of
redundant (and possibly incorrect) information from the dependent sources.
Thus, we reveal the latent group structure among dependent sources, and
aggregate the information at the group level rather than from individual
sources directly. This can prevent the collective intelligence from being
inappropriately dominated by dependent sources. We will also explicitly reveal
the reliability of groups, and minimize the negative impacts of unreliable
groups. Experimental results on real-world data sets show the effectiveness of
the proposed approach with respect to existing algorithms. | computer science |
40,606 | Sensory Anticipation of Optical Flow in Mobile Robotics | cs.RO | In order to anticipate dangerous events, like a collision, an agent needs to
make long-term predictions. However, those are challenging due to uncertainties
in internal and external variables and environment dynamics. A sensorimotor
model is acquired online by the mobile robot using a state-of-the-art method
that learns the optical flow distribution in images, both in space and time.
The learnt model is used to anticipate the optical flow up to a given time
horizon and to predict an imminent collision by using reinforcement learning.
We demonstrate that multi-modal predictions reduce to simpler distributions
once actions are taken into account. | computer science |
40,607 | A Benchmark to Select Data Mining Based Classification Algorithms For
Business Intelligence And Decision Support Systems | cs.DB | DSS serve the management, operations, and planning levels of an organization
and help to make decisions, which may be rapidly changing and not easily
specified in advance. Data mining has a vital role to extract important
information to help in decision making of a decision support system.
Integration of data mining and decision support systems (DSS) can lead to the
improved performance and can enable the tackling of new types of problems.
Artificial Intelligence methods are improving the quality of decision support,
and have become embedded in many applications, ranging from anti-lock automobile brakes to today's interactive search engines. AI provides various
machine learning techniques to support data mining. The classification is one
of the main and valuable tasks of data mining. Several types of classification
algorithms have been suggested, tested and compared to determine the future
trends based on unseen data. No single algorithm has been found to be superior to all others on all data sets. The objective of this paper is to
compare various classification algorithms that have been frequently used in
data mining for decision support systems. Three decision trees based
algorithms, one artificial neural network, one statistical algorithm, one support vector machine with and without AdaBoost, and one clustering algorithm are tested and
compared on four data sets from different domains in terms of predictive
accuracy, error rate, classification index, comprehensibility and training
time. Experimental results demonstrate that Genetic Algorithm (GA) and support
vector machines based algorithms are better in terms of predictive accuracy.
SVM without AdaBoost should be the first choice when both speed and predictive accuracy matter. AdaBoost improves the accuracy of SVM, but at the cost of a large training time. | computer science |
40,608 | Deterministic MDPs with Adversarial Rewards and Bandit Feedback | cs.GT | We consider a Markov decision process with deterministic state transition
dynamics, adversarially generated rewards that change arbitrarily from round to
round, and a bandit feedback model in which the decision maker only observes
the rewards it receives. In this setting, we present a novel and efficient
online decision making algorithm named MarcoPolo. Under mild assumptions on the
structure of the transition dynamics, we prove that MarcoPolo enjoys a regret
of O(T^(3/4)sqrt(log(T))) against the best deterministic policy in hindsight.
Specifically, our analysis does not rely on the stringent unichain assumption,
which dominates much of the previous work on this topic. | computer science |
40,609 | A Novel Learning Algorithm for Bayesian Network and Its Efficient
Implementation on GPU | cs.DC | Computational inference of causal relationships underlying complex networks,
such as gene-regulatory pathways, is NP-complete due to its combinatorial
nature when permuting all possible interactions. Markov chain Monte Carlo
(MCMC) has been introduced to sample only part of the combinations while still
guaranteeing convergence and traversability, which therefore becomes widely
used. However, MCMC is not able to perform efficiently enough for networks that
have more than 15-20 nodes because of the computational complexity. In this
paper, we use general purpose processor (GPP) and general purpose graphics
processing unit (GPGPU) to implement and accelerate a novel Bayesian network
learning algorithm. With a hash-table-based memory-saving strategy and a novel
task assigning strategy, we achieve a 10-fold acceleration per iteration than
using a serial GPP. Specifically, we use a greedy method to search for the best
graph from a given order. We incorporate a prior component in the current
scoring function, which further facilitates the searching. Overall, we are able
to apply this system to networks with more than 60 nodes, allowing inferences
and modeling of bigger and more complex networks than current methods. | computer science |
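The "greedy method to search for the best graph from a given order" can be sketched on the CPU side. This hedged toy (binary variables, a maximum-likelihood score with an arbitrary BIC-like complexity penalty, hypothetical chain data) only illustrates the search step, not the paper's MCMC over orders, scoring prior, or GPU acceleration:

```python
import math
import random
from itertools import combinations

def log_score(data, child, parents, penalty=5.0):
    """Maximum-likelihood log score of a binary `child` given binary `parents`,
    minus an arbitrary complexity penalty per parent."""
    counts = {}
    for row in data:
        key = tuple(row[p] for p in parents)
        c = counts.setdefault(key, [0, 0])
        c[row[child]] += 1
    ll = 0.0
    for c0, c1 in counts.values():
        n = c0 + c1
        for c in (c0, c1):
            if c:
                ll += c * math.log(c / n)
    return ll - penalty * len(parents)

def greedy_parents(data, order, max_parents=2):
    """Given a node order, greedily pick the best-scoring parent set per node
    from among that node's predecessors in the order."""
    net = {}
    for i, node in enumerate(order):
        preds = order[:i]
        candidates = [()]
        for k in range(1, min(max_parents, len(preds)) + 1):
            candidates += list(combinations(preds, k))
        net[node] = max(candidates, key=lambda ps: log_score(data, node, ps))
    return net

# Toy chain X0 -> X1 (X1 copies X0 with 10% noise), X2 an independent coin.
rng = random.Random(4)
data = []
for _ in range(400):
    x0 = rng.randrange(2)
    x1 = x0 if rng.random() < 0.9 else 1 - x0
    x2 = rng.randrange(2)
    data.append((x0, x1, x2))
net = greedy_parents(data, [0, 1, 2])
```

The recovered structure has X0 as the sole parent of X1 and leaves X2 parentless; the per-(node, parent-set) scores are the independent units the paper parallelizes on the GPU.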