Columns: id (string, 7-12 characters), sentence1 (string, 5-1.44k characters), sentence2 (string, 6-2.06k characters), label (string, 4 classes), domain (string, 5 classes). Each record below lists these five fields on consecutive lines.
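Since every record is a flat set of five fields, it is straightforward to load and summarize the data programmatically. The sketch below is a minimal example, assuming the records have been exported to a JSON Lines file named train.jsonl (a hypothetical filename; the actual distribution format is not shown here) with the field names listed above; it counts how many contrasting pairs come from each domain.

```python
import json
from collections import Counter

def load_records(path="train.jsonl"):
    """Load records with fields id, sentence1, sentence2, label, domain
    from a hypothetical JSON Lines export (one JSON object per line)."""
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                records.append(json.loads(line))
    return records

if __name__ == "__main__":
    rows = load_records()
    # Count how many "contrasting" sentence pairs come from each domain.
    by_domain = Counter(r["domain"] for r in rows if r["label"] == "contrasting")
    for domain, count in by_domain.most_common():
        print(f"{domain}: {count}")
```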
train_600
This algorithm recursively computes the moments in a top-down fashion along the tree.
this algorithm breaks down when the graph is a DAG.
contrasting
NeurIPS
train_601
The algorithm prefers rare and unusual anchor words that form a poor basis, so topic clusters consist of the same high-frequency terms repeatedly, as shown in the upper third of Table 3.
our algorithm with AP rectification successfully learns themes similar to the probabilistic algorithm.
contrasting
NeurIPS
train_602
The currently fastest algorithm for SO, in [MBK+15], has runtime O(n log(1/δ)) and an expected guarantee of (1 − 1/e) − δ.
the slightly slower, but still nearly linear time O((n/δ) log(n/δ)) thresholding algorithm in [BV14] has (the usual) deterministic guarantee of (1 − 1/e) − δ.
contrasting
NeurIPS
train_603
If the cascade is already rejecting the negative example, then this term becomes 0 and the classifier ignores its performance on the specific example.
if the cascade is performing poorly, then the term becomes increasingly large and the classifiers put large weights on that example.
contrasting
NeurIPS
train_604
With carefully engineered observation features, there does not appear to be very much to be gained from using higher order features.
in some situations, the training data does not come from the same distribution as the test data.
contrasting
NeurIPS
train_605
Especially when w_i = 0, F_ii is related to the subgradient of ||w||_1 w.r.t. w_i.
we cannot set F_ii = 0, otherwise the derived algorithm cannot be guaranteed to converge.
contrasting
NeurIPS
train_606
The state-of-the-art approach [12] which utilizes clutter reasoning in the image plane has an error of 21.2%.
our approach which uses a parametric model of clutter and simple 3D volumetric reasoning outperforms both the approaches and has an error of 16.2%.
contrasting
NeurIPS
train_607
If they are defined to match spatially the differences to detect, they can also improve the signal-to-noise ratio by averaging related signals.
the crux of the problem is how to define these ROIs in a principled way.
contrasting
NeurIPS
train_608
The ability to tractably simulate MCs along with the generic applicability has made Markov Chain Monte Carlo (MCMC) a method of choice and arguably the top algorithm of the twentieth century [1].
mCMC and its variations suffer from limitations in large state spaces, motivating the development of super-computation capabilities -be it nuclear physics [2,Chapter 8], Google's computation of PageRank [3], or stochastic simulation at-large [4].
contrasting
NeurIPS
train_609
Because of the known convergence of spectral projections, we can expect the same behavior asymptotically from the finite sample case.
the convergence speed is the crucial question.
contrasting
NeurIPS
train_610
These models appear promising for applications such as language modeling and machine translation.
they scale poorly in both space and time as the amount of memory grows -limiting their applicability to real-world domains.
contrasting
NeurIPS
train_611
This ill-conditioning issue of real-number erasure codes has also been recognized in a recent communication problem [20].
our novel way of combining all the partial results including those from the stragglers helps bypass the difficulty of inverting an ill-conditioned matrix.
contrasting
NeurIPS
train_612
We find that for x_0 = 1 + ε, SGD always escapes and converges to x = 0 instead of staying in the initial basin, as long as ε ≠ 0 and η is relatively large (one trajectory is shown in Figure 2 in red).
sGD starting from x_0 = ε with the same learning rate always converges to x = 0, and we never observe the escape phenomenon.
contrasting
NeurIPS
train_613
Solutions to this problem have found applications in many areas of science and engineering.
many real-world applications do not map neatly into this framework.
contrasting
NeurIPS
train_614
We see that the improper learning method for the constraint harms the model performance, partially because of the relatively low-quality model samples the constraint is trained to fit.
the proposed algorithm effectively improves the model results.
contrasting
NeurIPS
train_615
These bounds on τ lead to a deeper understanding of CoreCut.
they are difficult to implement in practice.
contrasting
NeurIPS
train_616
Figure 4 illustrates this transformation.
and where, because we have a matrix-variate t distribution, (H_1, H_2) ∼ t_{m,d+p}(δ − p, 0, αI^(m), I^(d+p)).
contrasting
NeurIPS
train_617
We define the biased conductance φ p (S) of a set S ⊂ {1, . . . , n} to be , where Z 0 is chosen according to a fixed distribution p. Just as with the standard definition of conductance, we can interpret biased conductance as an escape probability.
the initial state Z 0 is not chosen following the stationary distribution (as in the standard definition with a reversible chain) but following p instead.
contrasting
NeurIPS
train_618
[15] developed a model selection strategy for DPS by reusing the past observed sample sequences through the importance weighted cross-validation (IWCV).
iWCV requires heavy computational costs and includes computational instability when estimating the importance sampling weights.
contrasting
NeurIPS
train_619
Since the FCP clusters are unlabelled they do not suffer from label switching problems.
by having labelled clusters HMMs can share statistical strength among clusters across time steps (e.g.
contrasting
NeurIPS
train_620
Nevertheless, recently it has been shown to be possible to give a nontrivial certified lower bound of minimum adversarial distortion, and some recent progress has been made towards this direction by exploiting the piece-wise linear nature of ReLU activations.
a generic robustness certification for general activation functions still remains largely unexplored.
contrasting
NeurIPS
train_621
They have recently received increasing attention for developing optimization algorithms with fast convergence.
the studies of EBC in statistical learning are hitherto still limited.
contrasting
NeurIPS
train_622
Intuitively, this is what one would expect: Q-learning is based on the single estimator and Double Q-learning is based on the double estimator and in Section 2 we argued that the estimates by the single and double estimator both converge to the same answer in the limit.
this argument does not transfer immediately to bootstrapping action values, so we prove this result making use of the following lemma which was also used to prove convergence of Sarsa [20].
contrasting
NeurIPS
train_623
As a consequence, different nodes will be frequently exposed to missing data.
most current distributed data analysis methods are algebraic in nature and cannot seamlessly handle such missing data.
contrasting
NeurIPS
train_624
Here, I focus entirely on regression.
the basic conclusions regarding learning with kernel methods and NNs turn out to be valid more generally, e.g.
contrasting
NeurIPS
train_625
In the iris problem we obtained excellent clustering results using the first two principal components, whereas in the crabs problem, clustering that depicts correctly the classification necessitates components 2 and 3.
once this is realized, it does not harm to add the 1st component.
contrasting
NeurIPS
train_626
When d = 1, the quantile transforms F^{-1}(α_j) are uniquely defined as the points x_j ∈ R satisfying F(X ≤ x_j) ≤ α_j, where X is a random variable drawn from F. Equivalently, F^{-1}(α_j) can be identified with the unique hierarchical intervals [−∞, x_j].
when d > 1, intervals are replaced by sets C_1 ⊂ C_2 ⊂ ... ⊂ C_q that satisfy F(C_j) = α_j but are not uniquely defined.
contrasting
NeurIPS
train_627
This shows that even for a single bit, the problem of finding optimal codes is NP hard.
the analogy to graph partitioning suggests a relaxed version of the problem that leads to very efficient eigenvector solutions.
contrasting
NeurIPS
train_628
Thus they are difficult to generalize to new circumstances, where the counterparts are unknown or intractable to construct.
unsupervised manifold alignment learns from manifold structures and naturally avoids the above problem.
contrasting
NeurIPS
train_629
The MP problem is often mapped to a graph, and reduced to a graph search problem.
different from our problem, the MP approaches aim to find an optimal path to the goal in the graph while avoiding obstacles similar to HTN approaches.
contrasting
NeurIPS
train_630
On the other hand, if the gradient calculations are stochastic, then a similar claim cannot be made.
for this case we have the upper bound .
contrasting
NeurIPS
train_631
All of these patches will be at similar depths, even if there are small discontinuities (such as a window on the wall of a building).
when viewed at the smallest scale, some adjacent patches are difficult to recognize as parts of the same object.
contrasting
NeurIPS
train_632
Thus, minimizing the general dynamic regret can automatically adapt to the nature of environments, stationary or dynamic.
the restricted dynamic regret is too pessimistic, and unsuitable for stationary problems.
contrasting
NeurIPS
train_633
At first, the lower bound for linear classifiers might suggest that ∞ -robustness requires an inherently larger sample complexity here as well.
in contrast to the Gaussian model, non-linear classifiers can achieve a significantly improved robustness.
contrasting
NeurIPS
train_634
If the user can pick any element and suggest that it should precede an entire block of elements it currently follows, then we can instead use an "INSERTION SORT" graph; interestingly, to ensure the property (*), this graph must be weighted.
as we show in Section 3, if the user can propose two arbitrary elements that should be swapped, there is no graph G with the property (*).
contrasting
NeurIPS
train_635
For keeping the stimulation waveforms as simple as possible, only postsynaptic adaptation has been included.
it has been shown that presynaptic short-term plasticity also has a strong influence on long-term learning [19,6].
contrasting
NeurIPS
train_636
Thus, if the blinding index falls short of the predetermined threshold, the data is effectively thrown away and the trial needs to be repeated.
if the blinding index exceeds the threshold, the analysis of data is performed in the same manner regardless of the actual value of the index, that is, regardless of whether it is just above the threshold or if it indicates perfect blinding.
contrasting
NeurIPS
train_637
If q is initially very far from p, Monte Carlo estimates will tend to under-estimate the partition function.
in the context of learning p, we may expect a random initialization of p to be approximately uniform; we may thus fit an initial q to this well-behaved distribution, and as we gradually learn or anneal p, q should be able to track p and produce useful estimates of the gradients of p and of Z(θ).
contrasting
NeurIPS
train_638
While neural machine translation (NMT) has made good progress in the past two years, tens of millions of bilingual sentence pairs are needed for its training.
human labeling is very costly.
contrasting
NeurIPS
train_639
This is perhaps most pronounced in a student/teacher scenario where the teacher provides positive feedback for successful communication and corrections for unsuccessful ones [8,22].
in general any reply from a dialog partner, teacher or not, is likely to contain an informative training signal for learning how to use language in subsequent conversations.
contrasting
NeurIPS
train_640
In practice, each party would own a validation set from which they can estimate the utility of a model.
due to the small size of the three datasets, we report the contamination and validation accuracy of a model on the single validation set created during pre-processing of the data.
contrasting
NeurIPS
train_641
In this case, the KL divergence is tractably computable (up to a neglectable prefactor).
this form of the KL divergence chooses q(s i |r) to be any one of a very large number of local modes of the posterior distribution p(s i |r).
contrasting
NeurIPS
train_642
For a certain layer, a single threshold can be set based on the average absolute value and variance of its connection weights.
to improve the robustness of our method, we use two thresholds a_k and b_k by introducing a small margin t and setting b_k to a_k + t in Equation (3).
contrasting
NeurIPS
train_643
This appears to be a key bottleneck and reinforces the view that the use of random features cannot provide computational benefits: it suggests m = Ω(n) random features are required to get an O(1/√n) rate.
these rates are conservative when viewed in the sense of excess risk.
contrasting
NeurIPS
train_644
It clusters trajectories and assumes that all the trajectories in a cluster are generated by a single reward function.
as a consequence of using EM clustering, we need to specify the number of clusters (i.e.
contrasting
NeurIPS
train_645
Their method produces comparable cuts to the ones in [5], while being computationally much more efficient.
they could not provide any convergence guarantee about their method.
contrasting
NeurIPS
train_646
The capacity of an LSTM network can be increased by widening and adding layers.
usually the former introduces additional parameters, while the latter increases the runtime.
contrasting
NeurIPS
train_647
A generative method builds a flexible probabilistic model for generating the noisy observations conditioned on the unknown true labels and some behavior assumptions, with examples of the Dawid-Skene (DS) estimator [5], the minimax entropy (Entropy) estimator 1 [24,25], and their variants.
a discriminative approach does not model the observations; it directly identifies the true labels via some aggregation rules.
contrasting
NeurIPS
train_648
. . . , s_m] are scaling factors for the parameter w. To combat overfitting, we assign a Gamma prior Gam(λ_v | c_0, d_0) over λ_v. Note that this generative model encourages w to align with the major eigenvectors with bigger eigenvalues.
eigenvectors are noisy and not all of them are relevant to the classification task; we need to select relevant eigenvectors (i.e.
contrasting
NeurIPS
train_649
[25] developed theoretical tools to recognize and generate G-convex functions as well as cone theoretic fixed point optimization algorithms.
none of these three works provided a global convergence rate analysis for their algorithms.
contrasting
NeurIPS
train_650
For general graphs, Problem 1 can only be solved exactly when S contains all n 2 true effective resistances.
given additional constraints on G, recovery is possible with much less information.
contrasting
NeurIPS
train_651
The difference between the two components of the GMM-FV is not as startling for lower dimensional SIFT features [13].
for CNN features, the discriminative power of variance statistics is exceptionally low.
contrasting
NeurIPS
train_652
One potential issue with having a loss at every step is that it might encourage the network to learn a greedy algorithm that gets stuck in a local minimum.
the output function r separates the node hidden states and messages from the output probability distributions.
contrasting
NeurIPS
train_653
In other words, both methods are quite good at ranking for classification.
the classification rates of our method are better by about 10% for both 3-object and 4-object cases.
contrasting
NeurIPS
train_654
On the one hand, short phases allow a high rate of adaptivity, since X_j is recomputed very often.
if a phase is too short, it is very unlikely that the estimate θ_j may be accurate enough to actually discard any arm.
contrasting
NeurIPS
train_655
The matrix exponentiated gradient updates ensure that the estimates for the rotation matrix stay on the manifold associated with the rotation group at each iteration.
with the matrix exponentiation at each step, the updates are computationally intensive and in fact the computational complexity of the updates is comparable to other approaches that would require repeated approximation and projection on to the manifold.
contrasting
NeurIPS
train_656
If we assume ideal conditions where both computation resources (CPU, GPU, other accelerators) and communication resources (communication links) are unlimited or abundant in counts/bandwidth, then the total runtime of Pipe-SGD is: where K denotes the iteration dependency or the gradient staleness.
we observe that the end-to-end training time in Pipe-SGD can be shortened by a factor of K. The ideal resource assumption doesn't hold in practice, because both computation and communication resources are strictly limited on each worker node in today's distributed systems.
contrasting
NeurIPS
train_657
Graph-like geometric structures arise naturally in many fields, both in modeling natural phenomena, and in understanding abstract procedures and simulations.
there has been only limited work on obtaining a general-purpose algorithm to automatically extract skeleton graph structures [2].
contrasting
NeurIPS
train_658
Using only the energy for training should, in principle, give a good PES model.
the use of forces in the training process significantly reduces the number of snapshots needed to train a good model.
contrasting
NeurIPS
train_659
Adam, the first occasional human "driver", often takes control of his car to brake whereas Bob never interrupts his car.
when Bob's car is too close to Adam's car, Adam does not brake for he is afraid of a collision.
contrasting
NeurIPS
train_660
The estimate of the corresponding cost-to-go value inherits this variability.
it also displays a downward bias caused by the minimization over u_t. This phenomenon is reminiscent of overfitting effects in statistics.
contrasting
NeurIPS
train_661
To some extent, the aforementioned two-layer sampling is related to directly sampling from the product space of query and document, and to the (n, m)-sampling proposed in [4].
as shown below, they also have significant differences.
contrasting
NeurIPS
train_662
Similar Newton-type methods that relied on sub-sampling rather than sketching were also studied by [14].
they are chiefly concerned with the convergence of the iterates to the (stochastic) minimizer of the least squares problem, while we are chiefly concerned with the convergence of the iterates to the unknown regression coefficients β * .
contrasting
NeurIPS
train_663
We acknowledge that RANGE-LSH and SIMPLE-LSH are equivalent when all items have the same 2-norm.
mIPS is equivalent to angular similarity search in this case, and thus can be solved directly with sign random projection rather than SIMPLE-LSH.
contrasting
NeurIPS
train_664
Recognition involves sampling based on the strength of the associative activation of the list given a specific item and so is independent of the encoding strength of other items.
recall involves sampling from p(item|list) across all items, in which case, having a distribution favoring other items will reduce the probability that the unstrengthened items will be sampled.
contrasting
NeurIPS
train_665
A scene graph provides a structured description that captures these properties of an image.
reasoning about the relationships between objects is very challenging and only a few recent works have attempted to solve the problem of generating a scene graph from an image.
contrasting
NeurIPS
train_666
Compared with GPR applied to a lattice, lattice regression with a GPR bias again produces a lower RMSE on all five lattice resolutions.
for four of the five lattice resolutions, there is no performance improvement as judged by the statistical significance of the individual test errors.
contrasting
NeurIPS
train_667
Due to the high variance of the score and its sensitivity to some implementation details [19], it is difficult to have a precise evaluation of Tetris controllers.
our brief tour d'horizon of the literature, and in particular the work by Szita and Lőrincz [18] (optimizing the "Bertsekas features" by CE), indicates that ADP algorithms, even with relatively good features, have performed far worse than the methods that directly search in the space of policies (such as CE and genetic algorithms).
contrasting
NeurIPS
train_668
Unfortunately, we do not have access to the true pair marginals, but if we had estimates .
with the new parameters the estimates will be changed as well, and this procedure should be iterated.
contrasting
NeurIPS
train_669
In general, an RBF kernel will lead to an effective criterion for measuring the dependence between random variables, especially in time-series applications.
we could also choose linear kernels for k and l, for instance, to obtain computational savings.
contrasting
NeurIPS
train_670
Hence, knowledge of the onset times can help separate the evoked sources.
the onset times, which are determined by the experimental design and available during data analysis, are ignored by ICA.
contrasting
NeurIPS
train_671
The SBA algorithm is defined up to permutations of the nodes, so the estimated graphon is not canonical.
this does not affect the consistency properties of the SBA algorithm, as the consistency is measured w.r.t.
contrasting
NeurIPS
train_672
Note that the above expression does not have exactly the same form as the distribution in Equation ( 7) and is not exchangeable since it depends on the order the data arrive.
if we consider only the left-ordered class of matrices generated by the stochastic process then we obtain the exchangeable distribution in Equation (7).
contrasting
NeurIPS
train_673
The parameter, which was initially designed to avoid precision issues in practical implementations, is often overlooked.
it has been observed that very small in some applications has also resulted in performance issues, indicating that it has a role to play in convergence of the algorithm.
contrasting
NeurIPS
train_674
The cross-entropy error function is commonly employed to train W out for a classification task.
this error function unfortunately cannot be directly applied to the present case.
contrasting
NeurIPS
train_675
Consequently, the reported variability in the learned network structures tends to be smaller than the uncertainty determined by local search (without this additional information).
we are mainly interested in the bias induced by the bootstrap here, which can be expected to be largely unaffected by the search strategy.
contrasting
NeurIPS
train_676
The former was first developed in [8], and has enabled the convolution including second-nearest-neighbor luminance data only using nearest neighbor interconnects, thus greatly reducing the interconnect complexity.
the fill factor was sacrificed due to the pixel parallel organization.
contrasting
NeurIPS
train_677
Conventional wisdom dictates that the number of bins is a key parameter for the coarse-graining strategy and should be carefully chosen for the balance of statistical noise and discretization error.
we will show in what follows that it is justifiable to increase the number of bins to infinity.
contrasting
NeurIPS
train_678
In Gaussian process regression the observation model is commonly assumed to be Gaussian, which is convenient from a computational perspective.
the drawback is that the predictive accuracy of the model can be significantly compromised if the observations are contaminated by outliers.
contrasting
NeurIPS
train_679
In general, the actual sequence of play may fail to converge altogether, even in simple, finite games [16,24].
there is a number of recent works establishing the convergence of play in potential games with finite action sets under different assumptions for the number of players involved (continuous or finite) and the quality of the available feedback (perfect, semi-bandit/imperfect, or bandit/payoff-based) [5,11,14,19].
contrasting
NeurIPS
train_680
Figure 2b shows that uniform convergence also does not preserve the property of being a global function: all points on the middle part of the limit function (blue curve) are spurious local minima.
it suggests that uniform convergence preserves a slightly weaker property than being a global function.
contrasting
NeurIPS
train_681
A general disadvantage of KRR is that it can be difficult to know which aspects of X are relied on to perform the regression.
the kernel IB framework provides an intermediate representation, allowing one to visualize the features that jointly account for both X and Y (figs.
contrasting
NeurIPS
train_682
Note that for P ⊂ R^m, P is contained in the linear span of at most m points from P, and similarly the exact Carathéodory theorem states any point q ∈ Convex(P) is expressible as a convex combination of at most m + 1 points from P. As the conic hull lies between the linear case (with all combinations) and the convex case (with non-negative combinations summing to one), it is not surprising an exact conic Carathéodory theorem holds.
the linear analog of the approximate convex Carathéodory theorem does not hold, and so the following conic result is not a priori obvious.
contrasting
NeurIPS
train_683
(The MSE results did not stay the same because the image completions happened over different subsets of the pixels.)
the performance of the Poon architecture dropped considerably due to the fact that it was no longer able to take advantage of strong correlations between neighboring pixels.
contrasting
NeurIPS
train_684
Young children demonstrate the ability to make inferences about the preferences of other agents based on their choices.
there exists no overarching account of what children are doing when they learn about preferences or how they use that knowledge.
contrasting
NeurIPS
train_685
Note that the 1-norm term ||w||_1 can be replaced by one half the 2-norm squared, (1/2)||w||_2^2, which is the usual margin maximization term for ordinary support vector machine classifiers [18,3].
this changes the linear program (19) to a quadratic program which typically takes longer time to solve.
contrasting
NeurIPS
train_686
We argue in Section 5.1 that the unbalanced Haar wavelets are a special instance of graph Laplacian eigenbasis when the underlying graph is hierarchical.
a lattice graph structure yields activations that are globally supported and smooth, and in this case the Laplacian eigenbasis corresponds to the Fourier transform (see Section 5.2).
contrasting
NeurIPS
train_687
The advantage of SDCA over AGD is that each iteration involves only a single dual vector and usually costs O(d).
each iteration of AGD requires Ω(nd) operations.
contrasting
NeurIPS
train_688
Finally, we point out that for many of the datasets we consider, there is no significant difference between the LP based algorithm, and the Local Search (and sometimes even the Greedy) heuristic in terms of the sum-min objective value.
as we noted, the heuristics do not have worst case guarantees.
contrasting
NeurIPS
train_689
Power Multiset Classification: A brute-force approach based on the Combination Method in multilabel classification [22,17] is to transform the class set C into a set M(C) of all possible multisets, then train a multi-class classifier π that maps an input x to one of the elements in M(C).
the number of all possible multisets grows exponentially in the maximum size of a target multiset, rendering this approach infeasible in practice.
contrasting
NeurIPS
train_690
For any data matrix X that satisfies the SSC and SSS properties such that Using Lemma 12 (see the appendix), we can translate the above result to show that ||w_{T_0} − w*||_2 ≤ 0.95σ +, assuming k* ≤ k ≤ n/150.
lemma 5 will be more useful in the following fine convergence analysis.
contrasting
NeurIPS
train_691
This gives us, For state s = s_t, this simplifies to Substituting this back into Equation (6) we obtain, This gives us an explicit expression for our V estimates.
from an algorithmic perspective an incremental update rule is more convenient.
contrasting
NeurIPS
train_692
Methods for learning from demonstration (LfD) have shown success in acquiring behavior policies by imitating a user.
even for a single task, LfD may require numerous demonstrations.
contrasting
NeurIPS
train_693
When using only a few trajectories, the diagonal includes fluctuations that can have significant impacts on the resulting weights.
when using many trajectories (which we treat as giving ground truth), the diagonal tends to be relatively smooth and monotonically increasing until it plateaus (ignoring the final entry).
contrasting
NeurIPS
train_694
For smooth and strongly convex functions, we prove that this method enjoys the same fast convergence rate as those of SDCA and SAG.
our proof is significantly simpler and more intuitive.
contrasting
NeurIPS
train_695
Note the single-variable problem (14) has a closed-form solution which, in a naive implementation, takes O(nnz(A)) time due to the computation of (11) and (12).
in a clever implementation, one can maintain the relations (11), (12) as follows whenever a coordinate x_j is updated, where a_j = [a_{I,j}; a_{E,j}] denotes the jth column of A_I and A_E. Then the gradient and (generalized) second derivative of the jth coordinate can be computed in O(nnz(a_j)) time.
contrasting
NeurIPS
train_696
Thus, the MLE system output tends to obtain a high score despite being bland, because an MLE response by design is most "relevant" to any random response.
adding diversity without improving semantic relevance may occasionally hurt these relevance scores.
contrasting
NeurIPS
train_697
Roughly speaking, the events ω ∈ Ω represent abstract states of nature, i.e.
knowing the value of ω completely describes all probabilistic aspects of the model universe, and all random aspects are described by the probability measure P. Ω, A and P are never known explicitly, but rather constitute the modeling assumption that any explicitly known distribution P X is derived from one and the same probability measure P through some random variable X.
contrasting
NeurIPS
train_698
Instead, [18] showed that in the case of coverage functions, it is possible to efficiently maximize f by lifting the problem to the continuous domain and using stochastic gradient methods on a continuous relaxation to reach a solution that is within a factor (1 − 1/e) of the optimum.
our work provides a general recipe with a 1/2 approximation guarantee for problem (1.2) in which f_θ's can be any monotone submodular function.
contrasting
NeurIPS
train_699
To encourage consistency while reusing general purpose solvers, a PN-consistency potential [9] is often incorporated into the model: Hereby c is a positive constant that is tuned to penalize for the violation of consistency.
as c increases, the following constraint holds: usage of PN-potentials raises concerns: (i) increasing the number of pairwise constraints decreases computational efficiency, (ii) enforcing consistency in a soft manner requires tuning of an additional parameter c, (iii) large values of c reduce convergence, and (iv) large values of c result in corresponding beliefs being assigned zero probability mass which is not desirable.
contrasting
NeurIPS