id: string (length 7–12)
sentence1: string (length 5–1.44k)
sentence2: string (length 6–2.06k)
label: string (4 classes)
domain: string (5 classes)
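The records below follow this five-field layout in order (id, sentence1, sentence2, label, domain). As a minimal sketch of that record structure, based only on the schema above (the type name and helper function are illustrative assumptions, not part of the dataset):

```python
# Minimal sketch of one record of this dataset, derived from the schema above.
# The TypedDict name and the helper function are illustrative, not official.
from typing import TypedDict

class SentencePairRecord(TypedDict):
    id: str         # e.g. "train_500"
    sentence1: str  # first sentence of the pair
    sentence2: str  # second sentence, here standing in contrast to sentence1
    label: str      # one of 4 label classes; every row shown below is "contrasting"
    domain: str     # one of 5 source venues; every row shown below is "NeurIPS"

def is_contrasting(row: SentencePairRecord) -> bool:
    """Return True when the pair carries the 'contrasting' label."""
    return row["label"] == "contrasting"

# Example usage with the first record shown below:
example: SentencePairRecord = {
    "id": "train_500",
    "sentence1": "If no match is found, we assign the fixation as null.",
    "sentence2": "due to noise, we allow the spatial support to be increased by a factor.",
    "label": "contrasting",
    "domain": "NeurIPS",
}
assert is_contrasting(example)
```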
train_500
If no match is found, we assign the fixation as null.
due to noise, we allow the spatial support to be increased by a factor.
contrasting
NeurIPS
train_501
The second intuition is to use the online least squares formulation of the linear equation Ax = b.
since A is not symmetric and positive semi-definite (PSD), A^{1/2} does not exist and thus Ax = b cannot be reformulated as Another possible idea is to attempt to find an objective function whose gradient is exactly A_t x_t − b_t and thus the regularized gradient is prox_{α_t h(x_t)}(A_t x_t − b_t).
contrasting
NeurIPS
train_502
First, the Tensor Toolbox [7] uses the method of reducing indices of the tensor for sparse datasets and entrywise multiplication of vectors and matrices for dense datasets.
it is not clear how to store data or how to distribute the tensor factorization computation to multiple machines (see Appendix D).
contrasting
NeurIPS
train_503
Our original goal was to develop BO algorithms for aiding this process.
many aspects of this domain complicate the application of BO.
contrasting
NeurIPS
train_504
The attention mechanism that we have employed is just one instantiation of a very general idea which can be further exploited.
the incorporation of world knowledge and multi-document queries will also require the development of attention and embedding mechanisms whose complexity to query does not scale linearly with the data set size.
contrasting
NeurIPS
train_505
5, but we allow the distribution of the standardized variable to depend (at least weakly) on v. We now generalize the reparameterization idea to distributions that, like the gamma or the beta, do not admit the standard reparameterization trick.
we assume that we can efficiently sample from the variational distribution q(z; v), and that q(z; v) is differentiable with respect to z and v. We introduce a random variable ε defined by an invertible transformation where we can think of ε = T^{-1}(z; v) as a standardization procedure that attempts to make the distribution of ε weakly dependent on the variational parameters v. "Weakly" means that at least its first moment does not depend on v. For instance, if ε is defined to have zero mean, then its first moment has become independent of v. we do not assume that the resulting distribution of ε is completely independent of the variational parameters v, and therefore we write it as q_ε(ε; v).
contrasting
NeurIPS
train_506
This approach is, as we have discussed, suitable for a standard Bloom filter, where the false positive rate is guaranteed to be close to its expected value for any test set, with high probability.
this methodology requires additional assumptions in the learned Bloom filter setting.
contrasting
NeurIPS
train_507
Our results indicate that in finite dimensions, an efficient, compression-based, Bayes-consistent multiclass 1-NN algorithm exists, and hence can be offered as an alternative to k-NN, which is well known to be Bayes-consistent in finite dimensions [12,41].
in infinite dimensions, our results show that the condition characterizing the Bayes-consistency of k-NN does not extend to all NN algorithms.
contrasting
NeurIPS
train_508
The variable X_0 may be distributed as X_0 ∼ p(·|θ_1) for some true parameter θ_1.
in order to incorporate under-modeling, the existence of such a true parameter is not required.
contrasting
NeurIPS
train_509
The nonparametric online algorithm of [6] is known to have a suboptimal regret bound for Lipschitz classes of functions.
it is a simple and efficient algorithm, well suited to the design of extensions that exhibit different forms of adaptivity to the data sequence.
contrasting
NeurIPS
train_510
Due to the exchangeability of the individuals, the case where Friend(A, B) and Friend(A, C) are assigned to True and False respectively has the same WMC as the case where they are assigned to False and True.
the current engines fail to exploit this symmetry as they consider grounded individuals non-exchangeable.
contrasting
NeurIPS
train_511
Thus, computing ∂/∂φ_j of (19) exactly is mechanical.
using this approximation gives up the usual guarantee that the ELBO lower bounds the marginal likelihood.
contrasting
NeurIPS
train_512
On the surface, our network appears similar to AC-GAN [24], where the only difference is the separation of the classifier network from the authenticator network.
this crucial modularisation enables our DA algorithm to replace GANs by other generative models that may become available in the future; likewise, we can use the most sophisticated classification models for C. Furthermore, unlike our model, the classification subnetwork introduced in AC-GAN mainly aims for improving the quality of synthesized samples, rather than for classification tasks.
contrasting
NeurIPS
train_513
To solve this problem, Ng & Russell (2000) proposed to first learn a reward function, assuming that the expert is optimal, and then use it to recover the expert's complete policy.
the problem of learning a reward function given an optimal policy is ill-posed (Abbeel & Ng, 2004).
contrasting
NeurIPS
train_514
We cannot directly take the large £ limit, since for large word lengths we eventually reach a data sampling limit beyond which we are unable to reliably compute the word distributions.
if there is a range of £ for which the distributions are sufficiently well sampled, the behavior in Eq.
contrasting
NeurIPS
train_515
For the evaluation purpose alone, it is enough to use only a query q itself as an example.
we include both one target node (among potentially many other target nodes) and one path from the starting node to this target node (again, among many possible connecting paths) so that they can be exploited when training an agent.
contrasting
NeurIPS
train_516
In most previous studies of MAB and linear stochastic bandits, a common assumption is that noises in observed payoffs are sub-Gaussian conditional on historical information (Abbasi-Yadkori et al., 2011;Bubeck et al., 2012), which encompasses cases of all bounded payoffs and many unbounded payoffs, e.g., payoffs of an arm following a Gaussian distribution.
there do exist practical scenarios of non-sub-Gaussian noises in observed payoffs for sequential decisions, such as high-probability extreme returns in investments for financial markets (Cont and Bouchaud, 2000) and fluctuations of neural oscillations (Roberts et al., 2015), which are called heavy-tailed noises.
contrasting
NeurIPS
train_517
[17,18] prove that the REC is satisfied when the design matrix is sub-Gaussian.
rEC might not be guaranteed when the row of X follows heavy-tailed distribution.
contrasting
NeurIPS
train_518
Binarized Networks, Quantized Networks and the DoReFa-Net.
we believe Flexpoint strikes a desirable balance between aggressive extraction of performance and support for a wide collection of existing models.
contrasting
NeurIPS
train_519
When shown a set of M = 36 images consisting mostly of different types of outdoor scenes and a few indoor scenes, it is reasonable for a worker to consider the indoor scenes as a unified category.
if a HIT is composed purely of indoor scenes, a worker might draw finer distinctions between images of offices, kitchens, and living rooms.
contrasting
NeurIPS
train_520
In the beginning, Greedy chooses to execute distractors, since they give positive reward while subtasks A, B, and C do not.
gRProp observes non-zero gradient for subtasks A, B, and C that are propagated from the parent nodes.
contrasting
NeurIPS
train_521
SRNN (Fraccaro et al., 2016) addresses the issue by running a posterior backward in the sequence and thus providing future context for the current prediction.
the autoregressive decoder is not informed about the future of the sequence through the latent variables.
contrasting
NeurIPS
train_522
Floating point encodes numbers with one exponent per value (Figure 1a), requiring complex hardware structures to manage mantissa alignment and exponent values.
bFP (Figure 1b) shares exponents across blocks of numbers (or tensors), which enables dense fixed point logic for multiply-and-accumulate operations.
contrasting
NeurIPS
train_523
This strategy applied to active learning is well known because of its simplicity and its ability to adapt to unknown noise conditions [24].
we mention that when used in this way, this sampling procedure is known to be sub-optimal so in practice, one may want to implement a more efficient approach like that of [21].
contrasting
NeurIPS
train_524
In spite of the empirical success in many applications, most of the existing models and the corresponding estimators are based on specific distributions of pairwise measurements, whose correctness hinges on that the models are correctly specified.
in practice, without the knowledge of the true model, applying these methods might incur huge estimation errors.
contrasting
NeurIPS
train_525
First, the convolutional layers of DeepSpeech2 contain more phonetic information than those of DeepSpeech2-light (+1% and +4% for cnn1 and cnn2, respectively).
the recurrent layers in DeepSpeech2-light are better, with the best result of 37.77% in DeepSpeech2-light (by lstm3) compared to 33.67% in DeepSpeech2 (by rnn5).
contrasting
NeurIPS
train_526
We also anticipate that such analysis is more easily extendable.
the new form of the gradient step due to nonsmoothness of absolute function requires new developments of bounding techniques.
contrasting
NeurIPS
train_527
Note that the full Hessian, H, will in general, not be positive definite (in fact rank(H) = rank(H)).
based on its special structure, we can still give convergence guarantees (along with rate of convergence) for our algorithm.
contrasting
NeurIPS
train_528
Note that SVP-Newton incurs a RMSE of 0.89 for k = 3.
sVT achieves a RMSE of 0.98 for the same rank.
contrasting
NeurIPS
train_529
Using recurrent neural networks to predict sequences of tokens has many useful applications like machine translation and image description.
the current approach to training them, predicting one token at a time, conditioned on the state and the previous correct token, is different from how we actually use them and thus is prone to the accumulation of errors along the decision paths.
contrasting
NeurIPS
train_530
When using convolutions as part of a larger network, with multiple parallel filters, max pooling, and non-linear activations, the situation is of course more complex, and we do not expect to get the exact same bias.
we do expect the bias to be at the very least related to the sparsity-in-frequency-domain bias that we uncover here, and we hope our work can serve as a basis for further such study.
contrasting
NeurIPS
train_531
For example, [2] builds a model that couples this point of view with a representation in terms of deep multi-task Gaussian Processes with vector-valued kernels.
the objective in [2] is to predict fixed time risk (e.g.
contrasting
NeurIPS
train_532
Figure 2(b) shows that higher ACh and NE levels both correspond to fast learning, i.e. fast shifting.
whereas NE is a constant monitor of prediction errors and fluctuates accordingly with every data point, ACh falls smoothly and predictably, and only depends on the observations when global changes in the environment have been detected.
contrasting
NeurIPS
train_533
Block Partial Leverage Scores Sampling: Recall standard leverage scores of a matrix A are defined as diagonal elements of the "hat" matrix A(A^T A)^{−1}A^T [15] which prove to be very useful in matrix approximation algorithms.
in contrast to the standard case, there are two major differences in our task.
contrasting
NeurIPS
train_534
As a result, the vertices of the motifs are partitioned with a uniform cost.
this assumption is hardly realistic as in many real networks, only some vertices of higher-order structures may need to be clustered together.
contrasting
NeurIPS
train_535
The work [9] proves that the proximal Coordinate Descent method can solve each QPs at a linear rate even when matrix A is not full column rank.
there exist several drawbacks in this approach: (i) the practical solving time of each subproblem is quite long when A is rank-deficient; (ii) the theoretical performance and complexity of using recent accelerated techniques in proximal optimization [14] with the ALM is unknown; (iii) it cannot exploit the specific structure of matrix A when solving each constrained QP.
contrasting
NeurIPS
train_536
Previous generative approaches include modeling worlds of painted polyhedra [11] or constructing surfaces from patches taken out of a training set [3].
discriminative approaches attempt to differentiate between changes in the image caused by shading and those caused by a reflectance change.
contrasting
NeurIPS
train_537
If at least one point is violated, then the new set (Q, b, s,~) is not feasible for the KKT system (1) with the extended data set.
it is easy to find p such that (Q, b, s, ~) is optimal for (3).
contrasting
NeurIPS
train_538
Due to space constraints, we relegate the details of our experiments to the appendix in the supplemental documents.
the results of the experiments are clear-Figures 1(g), 2(g), 3(g), and 4(g) show the mean squared error (MSE) of value estimates when using various methods.
contrasting
NeurIPS
train_539
Recently, the convolutional neural networks (CNNs) have been greatly successful for large-scale image classification tasks [17,30,27] and have also demonstrated promising results for structured prediction tasks (e.g., [4,23,22]).
the CNNs are not suitable in modeling a distribution with multiple modes [32].
contrasting
NeurIPS
train_540
We show in the supplementary material that if z = (z, 0) and z(i) denotes the ith largest element of z, In particular, if Y ∈ Y = {−1, 1} is binary the dual problem (11) for learning the optimal linear predictor α^* given n samples (x_i, y_i)_{i=1}^n will be The first term is the empirical risk of a linear classifier over the minimax-hinge loss max{0, (1 − z)/2, −z} as shown in Figure 2.
the standard SVM is formulated using the hinge loss max{0, 1 − z}: We therefore call this classification approach the minimax SVM.
contrasting
NeurIPS
train_541
We do not have an analytic solution for these equations.
the decomposition they offer allows us to solve them by searching first over b to solve (7), then plugging the result into (8) to get an estimate of a.
contrasting
NeurIPS
train_542
Given space considerations, and the fact that the resulting algorithm turns out to reduce to Algorithm 2 from [3] with the squared Euclidean distance replaced by an appropriate Bregman divergence, we will omit the full specification of the algorithm here.
despite the similarity to the existing Gaussian case, we do view the extension to hierarchies as a promising application of our analysis.
contrasting
NeurIPS
train_543
when the algorithm can be confident that µ_{j_1} > µ_{j_k}. This chaining approach initially seems overly conservative when ruling out arms, as reflected in its regret bound, which is only non-trivial after T ≳ k^3.
the UCB algorithm [5] achieves non-trivial regret after T = O(k) rounds.
contrasting
NeurIPS
train_544
…, u(x_n))^T ∈ R^n and the covariance or kernel matrix by The Bayesian posterior process for u(·) can be computed in principle using Bayes' formula.
if the noise model P (y|u) is non-Gaussian (as is the case for binary classification), it cannot be handled tractably and is usually approximated by another Gaussian process, which should ideally preserve mean and covariance function of the former.
contrasting
NeurIPS
train_545
The coding of information by neural populations depends critically on the statistical dependencies between neuronal responses.
there is no simple model that can simultaneously account for (1) marginal distributions over single-neuron spike counts that are discrete and non-negative; and (2) joint distributions over the responses of multiple neurons that are often strongly dependent.
contrasting
NeurIPS
train_546
If the normal to the water surface directly underneath x is pointing straight up, there is no refraction and V (x) = G(x).
if the normal is tilted by angle θ_1, light will bend by the amount θ_2 = θ_1 − sin^{−1}((1/1.33) sin θ_1), so the camera point V(x) will see the light projected from G(x + dx) on the ground plane.
contrasting
NeurIPS
train_547
It is important to note here that the GE operators themselves add layers to the architecture (thus this experiment does not control precisely for network depth).
they do so in an extremely lightweight manner in comparison to the standard computational blocks that form the network and we observe that the improvements achieved by GE transfer to the deeper ResNet-101 baseline, suggesting that to a reasonable degree, these gains are complementary to increasing the depth of the underlying backbone network.
contrasting
NeurIPS
train_548
In the special case when G_h ≡ G_∪, this gives the graph estimate of the components.
the union graph G_∪ appears to have no direct relationship with the marginalized model P(y).
contrasting
NeurIPS
train_549
We also adopt the same prior on E given by (7) above and used in [1] and [31], but we need not assume any additional hyperprior on Γ.
for the prior on Z our method diverges, and we define the Gaussian where z ≜ vec[Z] is the column-wise vectorization of Z, ⊗ denotes the Kronecker product, and Ψ_c ∈ R^{n×n} and Ψ_r ∈ R^{m×m} are positive semi-definite, symmetric matrices.
contrasting
NeurIPS
train_550
The resurrection of neural networks in recent years, together with the recent emergence of large scale datasets, has enabled super-human performance on many classification tasks [21,28,30].
supervised DNNs often require a large number of training samples to achieve a high level of performance.
contrasting
NeurIPS
train_551
A short derivation (see supplementary material) then shows that the normalized importance weights are defined by a recursion: SIS is elegant as the samples and weights can be computed in sequential fashion using a single forward pass.
naïve implementation suffers from a severe pathology: the distribution of importance weights often becomes highly skewed as t increases, with many samples attaining very low weight.
contrasting
NeurIPS
train_552
Then the objective O and gradient ∇O can both be computed in O(nkb) time.
our experience with minimizing O with such an approach using a quasi-Newton L-BFGS algorithm typically resulted in poor local optima; we need an alternative method.
contrasting
NeurIPS
train_553
For all the experiments, we use the optimization program (5), where we typically set λ = 10.
the clustering and embedding results obtained by SMCE are stable for λ ∈ [1,200].
contrasting
NeurIPS
train_554
Our model differs from these in that it uses a global memory, with shared read and write functions.
with layer-wise weight tying our model can be viewed as a form of RNN which only produces an output after a fixed number of time steps (corresponding to the number of hops), with the intermediary steps involving memory input/output operations that update the internal state.
contrasting
NeurIPS
train_555
This implies linear convergence.
this only holds if the loss reaches zero, i.e.
contrasting
NeurIPS
train_556
The same steps, but with φ(u) = −log u, lead to the bound ) is the so-called lautum information between M and (X^n, Y^n) [26], and the second inequality holds whenever N ≥ 2.
it is often more convenient to choose Q as follows.
contrasting
NeurIPS
train_557
DRE has also been widely discussed in statistical literatures for adjusting non-parametric density estimation [5], stabilizing the estimation of heavy tailed distribution [7] and fitting multiple distributions at once [8].
as a density ratio function can grow unbounded, DRE can suffer from robustness and stability issues: a few corrupted points may completely mislead the estimator (see Figure 2 in Section 6 for example).
contrasting
NeurIPS
train_558
This method reduces the labeling cost to a great degree compared with expert labeling.
the cost still grows quadratically as the dataset size grows, so it is still only suitable for small datasets.
contrasting
NeurIPS
train_559
The assumption that the mode of the posterior distribution of the classifier remains unchanged after seeing an additional label is clearly not true at the beginning of the active learning procedure.
we have empirically found it a very good approximation after the active learning procedure has yielded as few as 15 labels.
contrasting
NeurIPS
train_560
weights, spikes) given data, x (i.e.
the fluorescence signal), and, for model comparison, we would like to compute the model evidence, the computation of these quantities is intractable, and this intractability has hindered the application of Bayesian techniques to large-scale data analysis, such as calcium imaging.
contrasting
NeurIPS
train_561
Our field of view is reduced by the output lenses.
in principle, it is possible to remove the lenses and expect the neural network to include the free-air propagation operator, from the fibre output to the camera, in its learning process as well.
contrasting
NeurIPS
train_562
On one hand, in neuroscience, recent studies point out that there are significant redundant neurons in human brain, and memory may have relation with vanishment of specific synapses [4].
in machine learning, both theoretical analysis and empirical experiments have shown the evidence of redundancy in several deep models [5,6].
contrasting
NeurIPS
train_563
Finally, the bandit arms may be indexed by numbers from the real line, implying uncountably infinite bandit arms, but where "nearby" arms (in terms of distance along the real line) have similar payoffs [12,14].
none of these approaches allows for arms to appear then disappear, which as we show later critically affects any regret bounds.
contrasting
NeurIPS
train_564
Smoothing: We observed that the proposed sharpening methods indeed helped with long utterances.
all of them, and especially selecting the frame with the highest score, negatively affected the model's performance on the standard development set which mostly consists of short utterances.
contrasting
NeurIPS
train_565
As indicated by the confusion matrix in Figure 2 (right), our method results in clusters that correspond to reasonable categories.
it is clear that the data often has finer categorical distinctions that go undiscovered.
contrasting
NeurIPS
train_566
By choosing relevant tags, users aid in creating a more organized information system.
content owners may have their own individual objective, such as maximizing the exposure of their items to other browsing users.
contrasting
NeurIPS
train_567
In other words, if in the m-th iteration we move from label f_m to f_{m+1} then it is possible that there exists another labelling.
our analysis in the next section shows that we are still able to reduce the Gibbs energy sufficiently at each iteration so as to obtain the guarantees of the LP relaxation.
contrasting
NeurIPS
train_568
The classic GLM is a valuable tool for describing the relationship between stimuli and spike responses.
the GLM describes this map as a mathematically convenient linear-nonlinear cascade, which does not take account of the biophysical properties of neural processing.
contrasting
NeurIPS
train_569
A finite convex combination of copulas is a copula, so r (a) is a copula density.
given a set of estimated quantile values A, suitable parameter values β (edge weight matrix) and θ (parameters for bivariate edge copulas) can be found by maximizing the log-likelihood of A: the parameter optimization of l(β, θ) cannot be done analytically.
contrasting
NeurIPS
train_570
The time complexity to answer a transitive query for a discrete CBN is exponential in the maximum number of parents in the worst case.
the sample complexity for queries in discrete and continuous CBNs remains polynomial in n as prescribed in the following theorems.
contrasting
NeurIPS
train_571
They are able to predict semantic features even for words for which they have not seen scans and experiment with differentiating between several zero-shot classes.
they do not classify new test instances into both seen and unseen classes.
contrasting
NeurIPS
train_572
A second related model is Hofmann's probabilistic latent semantic indexing (pLSI) [3], which posits that a document label d and a word w are conditionally independent given the hidden topic z: This model does capture the possibility that a document may contain multiple topics since p(z|d) serves as the mixture weights of the topics.
a subtlety of pLSI, and the crucial difference between it and LDA, is that d is a dummy index into the list of documents in the training set.
contrasting
NeurIPS
train_573
In order to guarantee that the estimation error β^{(t)} − β^* in step t of the EM algorithm is well controlled, we would like Q_n(·|β^{(t−1)}) to be strongly concave at β^*.
in the setting where n ≪ p, there might exist directions along which Q_n(·|β^{(t−1)}) is flat, e.g., as in mixed linear regression and missing covariate regression.
contrasting
NeurIPS
train_574
This is due to the fact that the appearance (or descriptors) of keypoints differ considerably for large offset pairs (which is likely when the image set is large), leading to many false matches.
our method improves as the size of the image set increases.
contrasting
NeurIPS
train_575
Zero-Shot Learning (ZSL) is generally achieved via aligning the semantic relationships between the visual features and the corresponding class semantic descriptions.
using the global features to represent fine-grained images may lead to sub-optimal results since they neglect the discriminative differences of local regions.
contrasting
NeurIPS
train_576
The main challenge in solving such problems is that communication between the different machines is usually slow and constrained, at least compared to the speed of local processing.
the datasets involved in distributed learning are usually large and high-dimensional.
contrasting
NeurIPS
train_577
For example, in document data, we can reasonably assume that documents can be clustered based on their relations with different word clusters, while word clusters are formed according to their associations with distinct document clusters.
in the one-sided clustering mechanism, the duality between samples and features is not taken into consideration.
contrasting
NeurIPS
train_578
The original formulation of CCCP by Yuille and Rangarajan [30] deals with unconstrained and linearly constrained problems.
the same formulation can be extended to handle any constraints (both convex and non-convex).
contrasting
NeurIPS
train_579
Stochastic gradient methods for machine learning and optimization problems are usually analyzed assuming data points are sampled with replacement.
sampling without replacement is far less understood, yet in practice it is very common, often easier to implement, and usually performs better.
contrasting
NeurIPS
train_580
The simplest Poisson-based generalised-linear RLM might take as its output distribution where y_{ti} is the spike count of the ith cell in bin t and the function f is non-negative.
comparison with the output distribution derived for the Gaussian RLM suggests that this choice would fail to capture the instantaneous covariance that the LDS formulation transfers to the output distribution (and which appears in the low-rank structure of S above).
contrasting
NeurIPS
train_581
where Σ_{k=1}^{c} k π_k = c. The distribution over the size of the subset a new item joins is then The additional complication however is that, unlike the typical CRP situation, the partitioning itself is unknown, and so we must marginalize over it under the prior when computing the distribution on η_x.
we can recognize this as simply using the expected number of subsets of a particular size: where The probability of the partition π is the EPPF [21] multiplied by the unordered multinomial coefficient: where Γ(•) is the gamma function.
contrasting
NeurIPS
train_582
Interestingly, it is this same Jeffreys prior that forms the implicit weight prior of SBL (see [6], Section 5.1).
it is worth mentioning that other Jeffreys prior-based techniques, e.g., direct minimization of p(w) = Π_i 1/|w_i| subject to t = Φw, do not provide any SBL-like guarantees.
contrasting
NeurIPS
train_583
The key motivation behind RCRN is to provide expressive and powerful sequence encoding.
unlike stacked architectures, all RNN layers operate jointly on the same hierarchical level, effectively avoiding the need to go deeper.
contrasting
NeurIPS
train_584
Most of these results make specific assumptions on the cascades/graph structure, and assume a full observation setting.
in our problem, the structure of the social graph is assumed to be known, and the goal is to provably learn the underlying influence function.
contrasting
NeurIPS
train_585
The probabilistic interpretation opens doors to several extensions of the basic setup proposed in [3] which suggested a maximum likelihood approach for parameter estimation.
it still assumes an a priori fixed number of canonical correlation components.
contrasting
NeurIPS
train_586
It is important to note that the linear time algorithm in [11,Section 5.5.1] is the key to obtaining a O(nd/ √ ) computational complexity for binary SVMs with bias mentioned in Section 5.1.
this method has been rediscovered independently by many authors (including us), with the earliest known reference to the best of our knowledge being [14] in 1990.
contrasting
NeurIPS
train_587
This is because CorrLDA can capture direct dependencies between languages, due to the constraints that topics have to be selected from the topics selected in the pivot language parts.
cI-LDA and SwitchLDA are too poorly constrained to effectively capture the dependencies between languages, as mentioned in Sections 2.1 and 2.2.
contrasting
NeurIPS
train_588
Then, K^*_hom(n) ≤ cK^*, where The intuition is that as K increases, we experience diminishing returns with respect to µ because µ is bounded away from 1.
there is a loss due to decreasing L = n/K, the number of representatives.
contrasting
NeurIPS
train_589
The variance in the precision decreases as b increases.
for KSHcut the variance is larger and the precision barely increases after b = 80.
contrasting
NeurIPS
train_590
But, the problem definition in these works does not specify a target prediction vector or variable; the goal instead is to select diverse features regardless of whether the features are relevant for predicting a particular target variable.
our work requires us to simultaneously optimize for both feature selection and diversity objectives.
contrasting
NeurIPS
train_591
Equation (1) is the standard MLE estimator.
sometimes the unbiased MLE estimation is preferred, where m − 1 replaces m in the denominator.
contrasting
NeurIPS
train_592
This allows us to apply our results also to more general forms of regularizers, including squared ℓ_p norm regularizers, r(w) = (λ/2)‖w‖_p^2, for 1 < p ≤ 2 (see Corollary 2).
the reader may choose to read the paper always thinking of the norm ‖w‖, and so also its dual norm ‖w‖_*, as the standard ℓ_2-norm.
contrasting
NeurIPS
train_593
As such, there has recently been increased interest towards developing techniques for parallelizing these methods.
these algorithms are inherently sequential and are difficult to parallelize.
contrasting
NeurIPS
train_594
This would be quite expensive for a large number of items N, since the number of labels scales asymptotically as T ∈ Ω(N^2).
we expect a noisy transitive property to hold: if items a and b are likely to be in the same cluster, and items b and c are (not) likely in the same cluster, then items a and c are (not) likely to be in the same cluster as well.
contrasting
NeurIPS
train_595
[25] use a similar LP formulation.
since they include all the constraints from the beginning and the null model is fully connected, their method is only applied to small toy problems.
contrasting
NeurIPS
train_596
As the teacher is unaware of the internal workings of the learner, she has no control over which of these two policies the learner will eventually learn by matching feature expectations.
the teacher can ensure that the learner achieves better performance in a worst case sense by providing demonstrations tailored to the problem at hand.
contrasting
NeurIPS
train_597
In the convergence proof [7,28], it is assumed that θ_i converges to θ^* as the number of iterations i increases, then the proof consists of showing that θ^* is a critical point of p(θ|y).
in practice, either the E-step or M-step or both can be difficult to compute exactly, especially when working with deep learning models.
contrasting
NeurIPS
train_598
Simulation-based search with value function approximation has been investigated in large and also continuous MDPs, in combination with TD-learning [19] or Monte-Carlo control [3].
this has not been in a Bayes-adaptive setting.
contrasting
NeurIPS
train_599
Using a 5 component spectral mixture kernel we were able to fully reconstruct the hexagonal lattice structure of the true field despite the size of the observed region covering only about 2 times the length scale of the periodic pattern.
traditional methods (including GP-based inference with standard SE kernels) would fail completely at such extrapolation.
contrasting
NeurIPS