Columns: text (string, lengths 70 to 7.94k); __index_level_0__ (int64, 105 to 711k)
Title: Mixed-Integer Convex Representability Abstract: Motivated by recent advances in solution methods for mixed-integer convex optimization (MICP), we study the fundamental and open question of which sets can be represented exactly as feasible regions of MICP problems. We establish several results in this direction, including the first complete characterization for the mixed-binary case and a simple necessary condition for the general case. We use the latter to derive the first nonrepresentability results for various nonconvex sets, such as the set of rank-1 matrices and the set of prime numbers. Finally, in correspondence with the seminal work on mixed-integer linear representability by Jeroslow and Lowe, we study the representability question under rationality assumptions. Under these rationality assumptions, we establish that representable sets obey strong regularity properties, such as periodicity, and we provide a complete characterization of representable subsets of the natural numbers and of representable compact sets. Interestingly, in the case of subsets of natural numbers, our results provide a clear separation between the mathematical modeling power of mixed-integer linear and mixed-integer convex optimization. In the case of compact sets, our results imply that using unbounded integer variables is necessary only for modeling unbounded sets.
77,519
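As a quick illustration of the representability notion in the entry above (the example is ours, not the paper's): the nonconvex set [0,1] ∪ [2,3] is the projection of a mixed-binary linear (hence MICP) feasible region, which the short sketch below verifies on a grid.

```python
# Toy illustration (not from the paper): the nonconvex set [0,1] U [2,3]
# as the projection of a mixed-integer linear (hence MICP) feasible region
#   {(x, z) : z in {0,1}, 2z <= x <= 1 + 2z}.
import numpy as np

def in_target(x):
    return (0.0 <= x <= 1.0) or (2.0 <= x <= 3.0)

def in_micp_projection(x):
    # x is in the projection iff some binary z makes the convex constraints feasible
    return any(2 * z <= x <= 1 + 2 * z for z in (0, 1))

xs = np.linspace(-1.0, 4.0, 2001)
assert all(in_target(x) == in_micp_projection(x) for x in xs)
print("projection of the mixed-binary region matches [0,1] U [2,3] on the grid")
```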
Title: Characterizing linear mappings through zero products or zero Jordan products Abstract: Let $${\mathcal {A}}$$ be a $$*$$ -algebra and $${{\mathcal {M}}}$$ be a $$*$$ - $${\mathcal {A}}$$ -bimodule. We study the local properties of $$*$$ -derivations and $$*$$ -Jordan derivations from $${\mathcal {A}}$$ into $${{\mathcal {M}}}$$ under the following orthogonality conditions on elements in $${\mathcal {A}}$$ : $$ab^*=0$$ , $$ab^*+b^*a=0$$ and $$ab^*=b^*a=0$$ . We characterize the mappings on zero product determined algebras and zero Jordan product determined algebras. Moreover, we give some applications on $$C^*$$ -algebras, group algebras, matrix algebras, algebras of locally measurable operators and von Neumann algebras.
77,533
Title: Coloring count cones of planar graphs Abstract: For a plane near-triangulation G with the outer face bounded by a cycle C, let n*_G denote the function that to each 4-coloring psi of C assigns the number of ways psi extends to a 4-coloring of G. The Block-count reducibility argument (which has been developed in connection with attempted proofs of the Four Color Theorem) is equivalent to the statement that the function n*_G belongs to a certain cone in the space of all functions from 4-colorings of C to real numbers. We investigate the properties of this cone for |C| = 5, formulate a conjecture strengthening the Four Color Theorem, and present evidence supporting this conjecture.
77,537
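A minimal brute-force sketch of the counting function described in the entry above, on a small example graph of our own choosing (the 5-wheel); it only illustrates what n*_G(psi) counts, not the cone machinery of the paper.

```python
# Brute-force illustration (example graph chosen here, not from the paper):
# for the 5-wheel G (outer cycle C = 0..4 plus center 5 adjacent to all of C),
# n*_G(psi) counts proper 4-colorings of G extending a 4-coloring psi of C.
cycle = [(i, (i + 1) % 5) for i in range(5)]
edges = cycle + [(i, 5) for i in range(5)]

def extensions(psi):                      # psi: tuple of 4 colors for vertices 0..4
    if any(psi[u] == psi[v] for u, v in cycle):
        return 0                          # psi must itself be proper on C
    count = 0
    for c in range(4):                    # candidate color of the center vertex
        col = list(psi) + [c]
        if all(col[u] != col[v] for u, v in edges):
            count += 1
    return count

for psi in [(0, 1, 0, 1, 2), (0, 1, 2, 0, 1), (0, 1, 0, 2, 3)]:
    print(psi, "->", extensions(psi))
```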
Title: Ordinal imitative dynamics Abstract: This paper introduces an evolutionary dynamics based on the imitate-the-better-realization (IBR) rule. Under this rule, an agent in a population game imitates the strategy of a randomly chosen opponent whenever the opponent’s realized payoff is higher than his or her own. Such behavior generates a mean dynamics which depends on the order of payoffs, but not their magnitudes, and is polynomial in strategy utilization frequencies. We demonstrate that while the dynamics does not possess Nash stationarity or payoff monotonicity, under it pure strategies iteratively strictly dominated by pure strategies are eliminated and strict equilibria are locally stable. We investigate the relationship between the dynamics based on the IBR rule and the replicator dynamics. In trivial cases, the two dynamics are topologically equivalent. In Rock-Paper-Scissors games we conjecture that both dynamics exhibit the same types of behavior, but the partitions of the game set do not coincide. In other cases, the IBR dynamics exhibits behaviors that are impossible under the replicator dynamics.
77,557
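A toy agent-based simulation of the IBR revision rule described in the entry above; the population size, uniform random matching, and Rock-Paper-Scissors payoffs are illustrative assumptions, and this is not the paper's mean-dynamics analysis.

```python
# Toy agent-based simulation of the IBR revision rule in Rock-Paper-Scissors.
# All parameters (population size, number of steps, uniform random matching)
# are illustrative assumptions, not taken from the paper.
import random
from collections import Counter

PAYOFF = [[0, -1, 1],    # row strategy vs column strategy: R, P, S
          [1, 0, -1],
          [-1, 1, 0]]

def realized_payoff(strategy, population):
    opponent = random.choice(population)          # payoff from one random match
    return PAYOFF[strategy][opponent]

def ibr_step(population):
    i, j = random.randrange(len(population)), random.randrange(len(population))
    pi_i = realized_payoff(population[i], population)
    pi_j = realized_payoff(population[j], population)
    if pi_j > pi_i:                               # imitate only if strictly better
        population[i] = population[j]

random.seed(0)
pop = [random.randrange(3) for _ in range(300)]
for _ in range(20000):
    ibr_step(pop)
print(Counter(pop))                               # strategy counts after the run
```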
Title: Multiple knapsack-constrained monotone DR-submodular maximization on distributive lattice Abstract: We consider a problem of maximizing a monotone DR-submodular function under multiple order-consistent knapsack constraints on a distributive lattice. Because a distributive lattice is used to represent a dependency constraint, the problem can represent a dependency constrained version of a submodular maximization problem on a set. We propose a ( $$1 - 1/e$$ )-approximation algorithm for this problem. To achieve this result, we generalize the continuous greedy algorithm to distributive lattices: We choose a median complex as a continuous relaxation of the distributive lattice and define the multilinear extension on it. We show that the median complex admits special curves, named uniform linear motions. The multilinear extension of a DR-submodular function is concave along a positive uniform linear motion, which is a key property used in the continuous greedy algorithm.
77,558
Title: Matroid bases with cardinality constraints on the intersection Abstract: Given two matroids $$\mathcal {M}_{1} = (E, \mathcal {B}_{1})$$ and $$\mathcal {M}_{2} = (E, \mathcal {B}_{2})$$ on a common ground set E with base sets $$\mathcal {B}_1$$ and $$\mathcal {B}_2$$ , some integer $$k \in \mathbb {N}$$ , and two cost functions $$c_{1}, c_{2} :E \rightarrow \mathbb {R}$$ , we consider the optimization problem to find a basis $$X \in \mathcal {B}_{1}$$ and a basis $$Y \in \mathcal {B}_{2}$$ minimizing the cost $$\sum _{e\in X} c_1(e)+\sum _{e\in Y} c_2(e)$$ subject to either a lower bound constraint $$|X \cap Y| \ge k$$ , an upper bound constraint $$|X \cap Y| \le k$$ , or an equality constraint $$|X \cap Y| = k$$ on the size of the intersection of the two bases X and Y. The problem with lower bound constraint turns out to be a generalization of the Recoverable Robust Matroid problem under interval uncertainty representation for which the question for a strongly polynomial-time algorithm was left as an open question in Hradovich et al. (J Comb Optim 34(2):554–573, 2017). We show that the two problems with lower and upper bound constraints on the size of the intersection can be reduced to weighted matroid intersection, and thus be solved with a strongly polynomial-time primal-dual algorithm. We also present a strongly polynomial, primal-dual algorithm that computes a minimum cost solution for every feasible size of the intersection k in one run with asymptotic running time equal to one run of Frank’s matroid intersection algorithm. Additionally, we discuss generalizations of the problems from matroids to polymatroids, and from two to three or more matroids. We obtain a strongly polynomial time algorithm for the recoverable robust polymatroid base problem with interval uncertainties.
77,593
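A brute-force sketch of the optimization problem in the entry above on the graphic matroid of K4 (both matroids taken equal, with made-up costs); it is only meant to make the constraint |X ∩ Y| ≥ k concrete, not to reflect the paper's polynomial-time algorithms.

```python
# Brute-force illustration on the graphic matroid of K4 (M1 = M2 here):
# among all pairs of spanning trees (X, Y), minimize c1(X) + c2(Y) subject to
# the lower bound |X ∩ Y| >= k.  Costs and k are illustrative assumptions.
from itertools import combinations

V = range(4)
E = [(u, v) for u in V for v in V if u < v]            # the 6 edges of K4
c1 = {e: i + 1 for i, e in enumerate(E)}               # arbitrary example costs
c2 = {e: 6 - i for i, e in enumerate(E)}

def is_spanning_tree(edges):
    parent = list(V)
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                               # cycle detected
        parent[ru] = rv
    return True                                        # 3 acyclic edges span K4

trees = [set(T) for T in combinations(E, 3) if is_spanning_tree(T)]
k = 2
best = min(((sum(c1[e] for e in X) + sum(c2[e] for e in Y), X, Y)
            for X in trees for Y in trees if len(X & Y) >= k),
           key=lambda t: t[0])
print("optimal cost:", best[0], "X:", sorted(best[1]), "Y:", sorted(best[2]))
```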
Title: Transient Stability of Droop-Controlled Inverter Networks With Operating Constraints Abstract: Due to the rise of distributed energy resources, the control of networks of grid-forming inverters is now a pressing issue for the power system operation. Droop control is a popular control strategy in the literature for frequency control of these inverters. In this article, we analyze transient stability in droop-controlled inverter networks that are subject to multiple operating constraints. Using a physically meaningful Lyapunov-like function, we provide two sets of criteria (one mathematical and one computational) to certify that a postfault trajectory achieves frequency synchronization while respecting operating constraints. We show how to obtain less-conservative transient stability conditions by incorporating information from loop flows, i.e., net flows of active power around cycles in the network. Finally, we use these conditions to quantify the scale of parameter disturbances to which the network is robust. We illustrate our results with numerical case studies of the IEEE 24-bus system.
77,625
Title: The Projection Games Conjecture and the hardness of approximation of super-SAT and related problems Abstract: The Super-SAT (SSAT) problem was introduced in [1], [2] to prove the NP-hardness of approximation of two popular lattice problems - Shortest Vector Problem and Closest Vector Problem. SSAT is conjectured to be NP-hard to approximate to within a factor of n^c (c is a positive constant, n is the size of the SSAT instance). In this paper we prove this conjecture assuming the Projection Games Conjecture (PGC) [3]. This implies hardness of approximation of these lattice problems within polynomial factors, assuming PGC. We also reduce SSAT to the Nearest Codeword Problem and Learning Halfspace Problem [4]. This proves that both these problems are NP-hard to approximate within a factor of N^(c′/log log n) (c′ is a positive constant, N is the size of the instances of the respective problems). Assuming PGC these problems are proved to be NP-hard to approximate within polynomial factors.
77,627
Title: Revisiting occurrence typing Abstract: We revisit occurrence typing, a technique to refine the type of variables occurring in type-cases and, thus, capture some programming patterns used in untyped languages. Although occurrence typing was tied from its inception to set-theoretic types—union types, in particular—it never fully exploited the capabilities of these types. Here we show how, by using set-theoretic types, it is possible to develop a general typing framework that encompasses and generalizes several aspects of current occurrence typing proposals and that can be applied to tackle other problems such as the reconstruction of intersection types for unannotated or partially annotated functions and the optimization of the compilation of gradually typed languages.
77,629
Title: Number Conservation via Particle Flow in One-dimensional Cellular Automata Abstract: A number-conserving cellular automaton is a simplified model for a system of interacting particles. This paper contains two related constructions by which one can find all one-dimensional number-conserving cellular automata with one kind of particle. The output of both methods is a "flow function", which describes the movement of the particles. In the first method, one puts increasingly stronger restrictions on the particle flow until a single flow function is specified. There are no dead ends: every choice of restriction steps ends with a flow. The second method uses the fact that the flow functions can be ordered and then form a lattice. This method consists of a recipe for the slowest flow that enforces a given minimal particle speed in one given neighbourhood. All other flow functions are then maxima of sets of these flows. Other questions, like that about the nature of non-deterministic number-conserving rules, are treated briefly at the end.
77,653
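A small numerical check of the number-conservation property discussed in the entry above, using the classical elementary CA rule 184 as the example (our choice, not the paper's construction).

```python
# Number conservation check for elementary CA rule 184 (the "traffic" rule):
# each 1 is a particle that moves one cell to the right iff the cell to its
# right is 0.  The random-testing setup is illustrative only.
import random

RULE = 184                                    # Wolfram code, read as an 8-bit table

def step(cells):                              # one synchronous update, cyclic boundary
    n = len(cells)
    return [(RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

random.seed(1)
for _ in range(100):
    config = [random.randint(0, 1) for _ in range(40)]
    assert sum(step(config)) == sum(config)   # the particle count is conserved
print("rule 184 conserved the number of 1s on all sampled cyclic configurations")
```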
Title: Uncertainty principles for the windowed offset linear canonical transform Abstract: The windowed offset linear canonical transform (WOLCT) can be identified as a generalization of the windowed linear canonical transform (WLCT). In this paper, we generalize several different uncertainty principles for the WOLCT, including the Heisenberg uncertainty principle, Hardy's uncertainty principle, Donoho-Stark's uncertainty principle and Nazarov's uncertainty principle. Finally, as applications, analogues of the Poisson summation formula and sampling formulas are given.
77,669
Title: Discretized Fast-Slow Systems with Canards in Two Dimensions Abstract: We study the problem of preservation of maximal canards for time discretized fast-slow systems with canard fold points. In order to ensure such preservation, certain favorable structure-preserving properties of the discretization scheme are required. Conventional schemes do not possess such properties. We perform a detailed analysis for an unconventional discretization scheme due to Kahan. The analysis uses the blow-up method to deal with the loss of normal hyperbolicity at the canard point. We show that the structure-preserving properties of the Kahan discretization for quadratic vector fields imply a similar result as in continuous time, guaranteeing the occurrence of maximal canards between attracting and repelling slow manifolds upon variation of a bifurcation parameter. The proof is based on a Melnikov computation along an invariant separating curve, which organizes the dynamics of the map similarly to the ODE problem.
77,677
Title: Cover and variable degeneracy Abstract: Let f be a nonnegative integer valued function on the vertex set of a graph. A graph is strictly f-degenerate if each nonempty subgraph Γ has a vertex v such that deg_Γ(v) < f(v). In this paper, we define a new concept, strictly f-degenerate transversal, which generalizes list coloring, signed coloring, DP-coloring, L-forested-coloring, and (f_1, f_2, ..., f_s)-partition. A cover of a graph G is a graph H with vertex set V(H) = ⋃_{v ∈ V(G)} X_v, where X_v = {(v, 1), (v, 2), ..., (v, s)}; the edge set is M = ⋃_{uv ∈ E(G)} M_{uv}, where M_{uv} is a matching between X_u and X_v. A vertex set R ⊆ V(H) is a transversal of H if |R ∩ X_v| = 1 for each v ∈ V(G). A transversal R is a strictly f-degenerate transversal if H[R] is strictly f-degenerate. The main result of this paper is a degree type result, which generalizes Brooks' theorem, Gallai's theorem, degree-choosable result, signed degree-colorable result, and DP-degree-colorable result. We also give some structural results on critical graphs with respect to strictly f-degenerate transversal. Using these results, we can uniformly prove many new and known results. In the final section, we pose some open problems.
77,684
Title: A Unified Framework for Problems on Guessing, Source Coding, and Tasks Partitioning. Abstract: We study four problems, namely, Campbell's source coding problem, Arikan's guessing problem, Huleihel et al.'s memoryless guessing problem, and Bunte and Lapidoth's task partitioning problem. We observe a close relationship among these problems. In all these problems, the objective is to minimize moments of some functions of random variables, and Rényi entropy and Sundaresan's divergence arise as optimal solutions. This motivates us to establish a connection among these four problems. In this paper, we study a more general problem and show that Rényi and Shannon entropies arise as its solution. We show that the problems on source coding, guessing and task partitioning are particular instances of this general optimization problem, and derive the lower bounds using this framework. We also refine some known results and present new results for mismatched versions of these problems using a unified approach. We strongly feel that this generalization would, in addition to helping in understanding the similarities and distinctiveness of these problems, also help to solve any new problem that falls in this framework.
77,699
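A numerical illustration of the Rényi-entropy connection mentioned in the entry above, using Arikan-style bounds on the optimal guessing moment; the example distribution and the choice of ρ are arbitrary illustrative assumptions.

```python
# Numerical illustration of the link between optimal guessing moments and
# Rényi entropy (Arikan-style bounds); the example distribution is arbitrary.
import numpy as np

rho = 1.0
p = np.array([0.5, 0.2, 0.15, 0.1, 0.05])

# Optimal guessing: ask values in decreasing order of probability.
order = np.sort(p)[::-1]
guess_moment = np.sum(order * (np.arange(1, len(p) + 1) ** rho))   # E[G(X)^rho]

alpha = 1.0 / (1.0 + rho)
renyi_term = np.sum(p ** alpha) ** (1.0 + rho)    # equals 2^(rho * H_alpha(X)), H in bits

M = len(p)
lower = renyi_term / (1.0 + np.log(M)) ** rho     # Arikan-style lower bound
print(f"E[G^rho] = {guess_moment:.4f},  bounds: [{lower:.4f}, {renyi_term:.4f}]")
assert lower <= guess_moment <= renyi_term
```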
Title: Selection Heuristics on Semantic Genetic Programming for Classification Problems Abstract: Individual semantics have been used for guiding the learning process of Genetic Programming. Novel genetic operators and different ways of performing parent selection have been proposed with the use of semantics. The latter is the focus of this contribution by proposing three heuristics for parent selection that measure the similarity among individuals' semantics for choosing parents that enhance the addition, Naive Bayes, and Nearest Centroid. To the best of our knowledge, this is the first time that functions' properties are used for guiding the learning process. As the heuristics were created based on the properties of these functions, we apply them only when they are used to create offspring. The similarity functions considered are the cosine similarity, Pearson's correlation, and agreement. We analyze these heuristics' performance against random selection, state-of-the-art selection schemes, and 18 classifiers, including auto-machine-learning techniques, on 30 classification problems with a variable number of samples, variables, and classes. The result indicated that the combination of parent selection based on agreement and random selection to replace an individual in the population produces statistically better results than the classical selection and state-of-the-art schemes, and it is competitive with state-of-the-art classifiers. Finally, the code is released as open-source software.
77,707
Title: Spectrum sensing and resource allocation for 5G heterogeneous cloud radio access networks Abstract: In this paper, the problem of opportunistic spectrum sharing for the next generation of wireless systems empowered by the cloud radio access network (C-RAN) is studied. More precisely, low-priority users employ cooperative spectrum sensing to detect a vacant portion of the spectrum that is not currently used by high-priority users. The authors' aim is to maximize the overall throughput of the low-priority users while guaranteeing the quality of service of the high-priority users. This objective is attained by optimally adjusting spectrum sensing time, with respect to target probabilities of detection and false alarm, as well as dynamically allocating C-RAN resources, that is, powers, sub-carriers, remote radio heads, and base-band units. To solve this problem, which is non-convex and NP-hard, a low-complex iterative solution is proposed. Numerical results demonstrate the necessity of sensing time adjustment as well as effectiveness of the proposed solution.
77,708
Title: Dynamic optimization with side information Abstract: • We develop an approach for incorporating side information into dynamic optimization. • Our approach combines robust optimization with non-parametric machine learning methods. • We prove a new Wasserstein measure concentration result with machine learning. • We use our result to prove that the proposed approach is asymptotically optimal. • Our approach can be tractably approximated using overlapping linear decision rules.
77,716
Title: DeepNC: Deep Generative Network Completion Abstract: Most network data are collected from partially observable networks with both missing nodes and missing edges, for example, due to limited resources and privacy settings specified by users on social media. Thus, it stands to reason that inferring the missing parts of the networks by performing network completion should precede downstream applications. However, despite this need, th...
77,721
Title: Underexposed Image Correction via Hybrid Priors Navigated Deep Propagation Abstract: Enhancing visual quality for underexposed images is an extensively concerning task that plays an important role in various areas of multimedia and computer vision. Most existing methods often fail to generate high-quality results with appropriate luminance and abundant details. To address these issues, we develop a novel framework, integrating both knowledge from physical principles and implicit distributions from data to address underexposed image correction. More concretely, we propose a new perspective to formulate this task as an energy-inspired model with advanced hybrid priors. A propagation procedure navigated by the hybrid priors is well designed for simultaneously propagating the reflectance and illumination toward desired results. We conduct extensive experiments to verify the necessity of integrating both underlying principles (i.e., with knowledge) and distributions (i.e., from data) as navigated deep propagation. Plenty of experimental results of underexposed image correction demonstrate that our proposed method performs favorably against the state-of-the-art methods on both subjective and objective assessments. In addition, we execute the task of face detection to further verify the naturalness and practical value of underexposed image correction. What is more, we apply our method to solve single-image haze removal whose experimental results further demonstrate our superiorities.
77,723
Title: CONSTRUCTING WADGE CLASSES Abstract: We show that, assuming the Axiom of Determinacy, every non-selfdual Wadge class can be constructed by starting with those of level ω_1 (that is, the ones that are closed under Borel preimages) and iteratively applying the operations of expansion and separated differences. The proof is essentially due to Louveau, and it yields at the same time a new proof of a theorem of Van Wesep (namely, that every non-selfdual Wadge class can be expressed as the result of a Hausdorff operation applied to the open sets). The exposition is self-contained, except for facts from classical descriptive set theory.
77,731
Title: Normal forms for rank two linear irregular differential equations and moduli spaces Abstract: We provide a unique normal form for rank two irregular connections on the Riemann sphere. In fact, we provide a birational model where we introduce apparent singular points and where the bundle has a fixed Birkhoff–Grothendieck decomposition. The essential poles and the apparent poles provide two parabolic structures. The first one only depends on the formal type of the singular points. The latter one determines the connection (accessory parameters). As a consequence, an open set of the corresponding moduli space of connections is canonically identified with an open set of some Hilbert scheme of points on the explicit blow-up of some Hirzebruch surface. This generalizes previous results obtained by Szabó to the irregular case. Our work is more generally related to ideas and descriptions of Oblezin, Dubrovin–Mazzocco, and Saito–Szabó in the logarithmic case. After the first version of this work appeared, Komyo used our normal form to compute isomonodromic Hamiltonian systems for irregular Garnier systems.
77,737
Title: Genome-Wide Causation Studies of Complex Diseases Abstract: Despite significant progress in dissecting the genetic architecture of complex diseases by genome-wide association studies (GWAS), the signals identified by association analysis may not have specific pathological relevance to diseases so that a large fraction of disease-causing genetic variants is still hidden. Association is used to measure dependence between two variables or two sets of variables. GWAS test association between a disease and single-nucleotide polymorphisms (SNPs) (or other genetic variants) across the genome. Association analysis may detect superficial patterns between disease and genetic variants. Association signals provide limited information on the causal mechanism of diseases. The use of association analysis as a major analytical platform for genetic studies of complex diseases is a key issue that may hamper discovery of disease mechanisms, calling into question the ability of GWAS to identify loci underlying diseases. It is time to move beyond association analysis toward techniques that enable the discovery of the underlying causal genetic structures of complex diseases. To achieve this, we propose the concept of genome-wide causation studies (GWCS) as an alternative to GWAS and develop additive noise models (ANMs) for genetic causation analysis. Type 1 error rates and power of the ANMs in testing causation are presented. We conducted GWCS of schizophrenia. Both simulation and real data analysis show that the proportion of the overlapped association and causation signals is small. Thus, we anticipate that our analysis will stimulate serious discussion of the applicability of GWAS and GWCS.
77,744
Title: DEGREES OF RANDOMIZED COMPUTABILITY Abstract: In this survey we discuss work of Levin and V'yugin on collections of sequences that are non-negligible in the sense that they can be computed by a probabilistic algorithm with positive probability. More precisely, Levin and V'yugin introduced an ordering on collections of sequences that are closed under Turing equivalence. Roughly speaking, given two such collections A and B, A is below B in this ordering if A \ B is negligible. The degree structure associated with this ordering, the Levin-V'yugin degrees (or LV-degrees), can be shown to be a Boolean algebra, and in fact a measure algebra. We demonstrate the interactions of this work with recent results in computability theory and algorithmic randomness: First, we recall the definition of the Levin-V'yugin algebra and identify connections between its properties and classical properties from computability theory. In particular, we apply results on the interactions between notions of randomness and Turing reducibility to establish new facts about specific LV-degrees, such as the LV-degree of the collection of 1-generic sequences, that of the collection of sequences of hyperimmune degree, and those collections corresponding to various notions of effective randomness. Next, we provide a detailed explanation of a complex technique developed by V'yugin that allows the construction of semi-measures into which computability-theoretic properties can be encoded. We provide two examples of the use of this technique by explicating a result of V'yugin's about the LV-degree of the collection of Martin-Löf random sequences and extending the result to the LV-degree of the collection of sequences of DNC degree.
77,746
Title: EEG-Based Emotion Recognition Using Regularized Graph Neural Networks Abstract: Electroencephalography (EEG) measures the neuronal activities in different brain regions via electrodes. Many existing studies on EEG-based emotion recognition do not fully exploit the topology of EEG channels. In this article, we propose a regularized graph neural network (RGNN) for EEG-based emotion recognition. RGNN considers the biological topology among different brain regions to capture both local and global relations among different EEG channels. Specifically, we model the inter-channel relations in EEG signals via an adjacency matrix in a graph neural network where the connection and sparseness of the adjacency matrix are inspired by neuroscience theories of human brain organization. In addition, we propose two regularizers, namely node-wise domain adversarial training (NodeDAT) and emotion-aware distribution learning (EmotionDL), to better handle cross-subject EEG variations and noisy labels, respectively. Extensive experiments on two public datasets, SEED and SEED-IV, demonstrate the superior performance of our model over state-of-the-art models in most experimental settings. Moreover, ablation studies show that the proposed adjacency matrix and two regularizers contribute consistent and significant gain to the performance of our RGNN model. Finally, investigations on the neuronal activities reveal important brain regions and inter-channel relations for EEG-based emotion recognition.
77,748
Title: Cluster deletion revisited Abstract: • We study the Cluster Deletion problem and give an O*(1.404^k)-time algorithm for this problem. • Our result improves the result of Böcker and Damaschke, who gave an O*(1.415^k)-time algorithm for this problem. • The analysis of the algorithm in the paper of Böcker and Damaschke has an error, which we fix in this paper.
77,781
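For context on the problem in the entry above, here is a plain 2^k branching sketch for Cluster Deletion (branch on an induced P3); it is not the improved O*(1.404^k) algorithm of the paper.

```python
# A plain 2^k branching sketch for Cluster Deletion (branch on an induced P3),
# shown only to illustrate the problem; it is NOT the paper's improved algorithm.
from itertools import combinations

def find_p3(edges, vertices):
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    for v in vertices:
        for u, w in combinations(adj[v], 2):
            if w not in adj[u]:
                return (u, v), (v, w)            # induced path u - v - w
    return None

def cluster_deletion(edges, vertices, k):
    """Can we delete at most k edges so that every component is a clique?"""
    p3 = find_p3(edges, vertices)
    if p3 is None:
        return True                              # already a cluster graph
    if k == 0:
        return False
    for e in p3:                                 # one of the two P3-edges must go
        if cluster_deletion(edges - {frozenset(e)}, vertices, k - 1):
            return True
    return False

V = range(5)
E = {frozenset(e) for e in [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4)]}
print(cluster_deletion(E, V, 1))   # True: deleting edge (2,3) leaves a triangle plus an edge
```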
Title: Spanning Structures in Walker-Breaker Games Abstract: We study the biased (2 : b) Walker-Breaker games, played on the edge set of the complete graph on n vertices, K_n. These games are a variant of the Maker-Breaker games with the restriction that Walker (playing the role of Maker) has to choose her edges according to a walk. We look at the two standard graph games - the Connectivity game and the Hamilton Cycle game - and show that Walker can win both games even when playing against Breaker whose bias is of the order of magnitude n/ln n.
77,783
Title: A Practical Fixed-Parameter Algorithm for Constructing Tree-Child Networks from Multiple Binary Trees Abstract: We present the first fixed-parameter algorithm for constructing a tree-child phylogenetic network that displays an arbitrary number of binary input trees and has the minimum number of reticulations among all such networks. The algorithm uses the recently introduced framework of cherry picking sequences and runs in $$O((8k)^k \mathrm {poly}(n, t))$$ time, where n is the number of leaves of every tree, t is the number of trees, and k is the reticulation number of the constructed network. Moreover, we provide an efficient parallel implementation of the algorithm and show that it can deal with up to 100 input trees on a standard desktop computer, thereby providing a major improvement over previous phylogenetic network construction methods.
77,785
Title: Random cographs: Brownian graphon limit and asymptotic degree distribution Abstract: We consider uniform random cographs (either labeled or unlabeled) of large size. Our first main result is the convergence toward a Brownian limiting object in the space of graphons. We then show that the degree of a uniform random vertex in a uniform cograph is of order n, and converges after normalization to the Lebesgue measure on [0,1]. We finally analyze the vertex connectivity (i.e., the minimal number of vertices whose removal disconnects the graph) of random connected cographs, and show that this statistics converges in distribution without renormalization. Unlike for the graphon limit and for the degree of a random vertex, the limiting distribution of the vertex connectivity is different in the labeled and unlabeled settings. Our proofs rely on the classical encoding of cographs via cotrees. We then use mainly combinatorial arguments, including the symbolic method and singularity analysis.
77,788
Title: Measuring the local non-convexity of real algebraic curves Abstract: The goal of this paper is to measure the non-convexity of compact and smooth connected components of real algebraic plane curves. We study these curves first in a general setting and then in an asymptotic one. In particular, we consider sufficiently small levels of a real bivariate polynomial in a small enough neighbourhood of a strict local minimum at the origin of the real affine plane. We introduce and describe a new combinatorial object, called the Poincaré-Reeb graph, whose role is to encode the shape of such curves and allow us to quantify their non-convexity. Moreover, we prove that in this setting the Poincaré-Reeb graph is a plane tree and can be used as a tool to study the asymptotic behaviour of level curves near a strict local minimum. Finally, using the real polar curve, we show that locally the shape of the levels stabilises and that no spiralling phenomena occur near the origin.
77,791
Title: Fast approximation of orthogonal matrices and application to PCA Abstract: Orthogonal projections are a standard technique of dimensionality reduction in machine learning applications. We study the problem of approximating orthogonal matrices so that their application is numerically fast and yet accurate. We find an approximation by solving an optimization problem over a set of structured matrices, that we call extended orthogonal Givens transformations, including Givens rotations as a special case. We propose an efficient greedy algorithm to solve such a problem and show that it strikes a balance between approximation accuracy and speed of computation. The approach is relevant to spectral methods and we illustrate its application to PCA.
77,795
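A small sketch of why Givens-structured factors (as in the entry above) are cheap to apply: a product of m rotations acts on a vector in O(m) time instead of O(n^2) for the dense matrix. The sizes are arbitrary, and this does not implement the paper's greedy approximation algorithm.

```python
# Applying a product of m Givens rotations in factored form versus as a dense
# matrix; only the structure is demonstrated, not the paper's greedy algorithm.
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 50
givens = [(rng.integers(n), rng.integers(n), rng.uniform(0, 2 * np.pi)) for _ in range(m)]
givens = [(i, j, t) for i, j, t in givens if i != j]     # drop degenerate pairs

def apply_factored(x, rotations):
    y = x.copy()
    for i, j, theta in rotations:             # each rotation touches 2 coordinates
        c, s = np.cos(theta), np.sin(theta)
        yi, yj = y[i], y[j]
        y[i], y[j] = c * yi - s * yj, s * yi + c * yj
    return y

def dense_matrix(rotations):
    Q = np.eye(n)
    for i, j, theta in rotations:
        G = np.eye(n)
        c, s = np.cos(theta), np.sin(theta)
        G[i, i], G[j, j], G[i, j], G[j, i] = c, c, -s, s
        Q = G @ Q
    return Q

x = rng.standard_normal(n)
Q = dense_matrix(givens)
assert np.allclose(Q @ x, apply_factored(x, givens))
print("factored and dense applications agree; factored cost is O(m) per vector")
```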
Title: Semidefinite Programming Relaxations of the Traveling Salesman Problem and Their Integrality Gaps Abstract: The traveling salesman problem (TSP) is a fundamental problem in combinatorial optimization. Several semidefinite programming relaxations have been proposed recently that exploit a variety of mathematical structures including, for example, algebraic connectivity, permutation matrices, and association schemes. The main results of this paper are twofold. First, de Klerk and Sotirov [de Klerk E, Sotirov R (2012) Improved semidefinite programming bounds for quadratic assignment problems with suitable symmetry. Math. Programming 133(1):75-91.] present a semidefinite program (SDP) based on permutation matrices and symmetry reduction; they show that it is incomparable to the subtour elimination linear program but generally dominates it on small instances. We provide a family of simplicial TSP instances that shows that the integrality gap of this SDP is unbounded. Second, we show that these simplicial TSP instances imply the unbounded integrality gap of every SDP relaxation of the TSP mentioned in the survey on SDP relaxations of the TSP in section 2 of Sotirov [Sotirov R (2012) SDP relaxations for some combinatorial optimization problems. Anjos MF, Lasserre JB, eds., Handbook on Semi-definite, Conic and Polynomial Optimization (Springer, New York), 795-819.]. In contrast, the subtour linear program performs perfectly on simplicial instances. The simplicial instances thus form a natural litmus test for future SDP relaxations of the TSP.
77,812
Title: Deep learning for time series forecasting: The electric load case Abstract: Management and efficient operations in critical infrastructures such as smart grids take huge advantage of accurate power load forecasting, which, due to its non-linear nature, remains a challenging task. Recently, deep learning has emerged in the machine learning field achieving impressive performance in a vast range of tasks, from image classification to machine translation. Applications of deep learning models to the electric load forecasting problem are gaining interest among researchers as well as the industry, but a comprehensive and sound comparison among different architectures, including traditional ones, is not yet available in the literature. This work aims at filling the gap by reviewing and experimentally evaluating, on four real-world datasets, the most recent trends in electric load forecasting, contrasting deep learning architectures on short-term forecasting (one-day-ahead prediction). Specifically, the focus is on feedforward and recurrent neural networks, sequence-to-sequence models and temporal convolutional neural networks along with architectural variants, which are known in the signal processing community but are novel to the load forecasting one.
77,814
Title: Permutation binomials over finite fields Abstract: Let F_q denote the finite field with q elements. In this paper we use the relation between suitable polynomials and the number of rational points on algebraic curves to give the exact number of elements a ∈ F_q for which the binomial x^n(x^((q-1)/3) + a) is a permutation polynomial. In order to do this, we employ results on the Cartier-Manin operator and the Riemann Hypothesis for elliptic curves to present the exact number of points on a suitable elliptic curve.
77,824
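A brute-force companion to the entry above: for a small prime p ≡ 1 (mod 3) and an exponent n of our choosing, count the a ∈ F_p for which x^n(x^((p-1)/3) + a) permutes F_p. The specific p and n are illustrative only.

```python
# Brute-force count, over a small field, of the a in F_p for which the binomial
# x^n (x^((p-1)/3) + a) permutes F_p; p and n are illustrative choices only.
p = 31                                   # prime with p ≡ 1 (mod 3)
n = 2
d = (p - 1) // 3

def is_permutation(a):
    values = {(pow(x, n, p) * (pow(x, d, p) + a)) % p for x in range(p)}
    return len(values) == p              # bijective iff all p values are distinct

count = sum(is_permutation(a) for a in range(p))
print(f"over F_{p}: {count} values of a make x^{n}(x^{d} + a) a permutation")
```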
Title: Examples of weak amalgamation classes Abstract: We present several examples of hereditary classes of finite structures satisfying the joint embedding property and the weak amalgamation property, but failing the cofinal amalgamation property. These include a continuum-sized family of classes of finite undirected graphs, as well as an example due to Pouzet with a countably categorical generic limit.
77,835
Title: Highlight Every Step: Knowledge Distillation via Collaborative Teaching Abstract: High storage and computational costs obstruct deep neural networks to be deployed on resource-constrained devices. Knowledge distillation (KD) aims to train a compact student network by transferring knowledge from a larger pretrained teacher model. However, most existing methods on KD ignore the valuable information among the training process associated with training results. In this article, we p...
77,843
Title: Dynamic Facial Expression Generation on Hilbert Hypersphere With Conditional Wasserstein Generative Adversarial Nets Abstract: In this work, we propose a novel approach for generating videos of the six basic facial expressions given a neutral face image. We propose to exploit the face geometry by modeling the facial landmarks motion as curves encoded as points on a hypersphere. By proposing a conditional version of manifold-valued Wasserstein generative adversarial network (GAN) for motion generation on the hypersphere, w...
77,854
Title: Sparse optimization on measures with over-parameterized gradient descent Abstract: Minimizing a convex function of a measure with a sparsity-inducing penalty is a typical problem arising, e.g., in sparse spikes deconvolution or two-layer neural networks training. We show that this problem can be solved by discretizing the measure and running non-convex gradient descent on the positions and weights of the particles. For measures on a d-dimensional manifold and under some non-degeneracy assumptions, this leads to a global optimization algorithm with a complexity scaling as $$\log (1/\epsilon )$$ in the desired accuracy $$\epsilon $$ , instead of $$\epsilon ^{-d}$$ for convex methods. The key theoretical tools are a local convergence analysis in Wasserstein space and an analysis of a perturbed mirror descent in the space of measures. Our bounds involve quantities that are exponential in d which is unavoidable under our assumptions.
77,860
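A toy sketch of the over-parameterized particle gradient descent idea from the entry above: fit a two-spike signal with ten weighted Gaussian particles by descending on both positions and weights. The kernel, step size, and the absence of a sparsity penalty are simplifying assumptions, not the paper's setting.

```python
# Toy over-parameterized particle gradient descent on positions and weights.
# Model, step size, and kernel width are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
sigma = 0.05

def features(positions):                       # Gaussian kernel columns phi(t; x_i)
    return np.exp(-(t[:, None] - positions[None, :]) ** 2 / (2 * sigma ** 2))

true_pos, true_w = np.array([0.3, 0.7]), np.array([1.0, -0.5])
y = features(true_pos) @ true_w                # signal generated by two spikes

m = 10
pos = rng.uniform(0.0, 1.0, m)                 # over-parameterized initialization
w = np.zeros(m)

lr = 0.05
for _ in range(10000):
    Phi = features(pos)                        # shape (len(t), m)
    r = Phi @ w - y                            # residual
    grad_w = Phi.T @ r / t.size
    grad_pos = w * ((Phi * (t[:, None] - pos[None, :])).T @ r) / sigma ** 2 / t.size
    w -= lr * grad_w
    pos -= lr * grad_pos

print("final mean-squared error:", float(np.mean((features(pos) @ w - y) ** 2)))
print("positions of particles with |w| > 0.05:", np.round(np.sort(pos[np.abs(w) > 0.05]), 3))
```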
Title: Accurate Angular Inference for 802.11ad Devices Using Beam-Specific Measurements Abstract: Due to their sparsity, 60GHz channels are characterized by a few dominant paths. Knowing the angular information of their dominant paths, we can develop various applications, such as the prediction of link performance and the tracking of 802.11ad devices. Although they are equipped with phased arrays, the angular inference for 802.11ad devices is still challenging due to their limited number of RF chains and limited phase control capabilities. Considering the beam sweeping operation and the high communication bandwidth of 802.11ad devices, we propose variation-based angle estimation (VAE), called VAE-CIR, by utilizing beam-specific channel impulse responses (CIRs) measured under different beams and the directional gains of the corresponding beams to infer the angular information of dominant paths. Unlike state-of-the-art schemes, VAE-CIR exploits the variations between different beam-specific CIRs for angular inference and provides a performance guarantee in the high signal-to-noise-ratio regime. To evaluate VAE-CIR, we generate the beam-specific CIRs by simulating the beam sweeping of 802.11ad devices with the beam patterns measured on off-the-shelf 802.11ad devices. The 60GHz channel is generated via a ray-tracing-based simulator and the CIRs are extracted via channel estimation based on Golay sequences. Through extensive experiments, VAE-CIR is shown to achieve more accurate angle estimation than existing schemes.
77,873
Title: A Nonconvex Optimization Approach to IMRT Planning with Dose-Volume Constraints Abstract: Fluence map optimization for intensity-modulated radiation therapy planning can be formulated as a large-scale inverse problem with multi-objectives on the tumors and organs-at-risk. Unfortunately, clinically relevant dose-volume constraints are nonconvex, so convex formulations and algorithms cannot be directly applied to the problem. We propose a novel approach to handle dose-volume constraints while preserving their nonconvexity, as opposed to previous efforts which focused on iterative convexification. The proposed method is amenable to efficient algorithms based on partial minimization and naturally adapts to handle maximum and mean dose constraints, which are prevalent in current practice, and cases of infeasibility. We demonstrate our approach using the CORT dataset, and show that it is easily adaptable to radiation treatment planning with dose-volume constraints for multiple tumors and organs-at-risk.
77,880
Title: BSL: An R Package for Efficient Parameter Estimation for Simulation-Based Models via Bayesian Synthetic Likelihood Abstract: Bayesian synthetic likelihood (BSL; Price, Drovandi, Lee, and Nott 2018) is a popular method for estimating the parameter posterior distribution for complex statistical models and stochastic processes that possess a computationally intractable likelihood function. Instead of evaluating the likelihood, BSL approximates the likelihood of a judiciously chosen summary statistic of the data via model simulation and density estimation. Compared to alternative methods such as approximate Bayesian computation (ABC), BSL requires little tuning and requires less model simulations than ABC when the chosen summary statistic is high-dimensional. The original synthetic likelihood relies on a multivariate normal approximation of the intractable likelihood, where the mean and covariance are estimated by simulation. An extension of BSL considers replacing the sample covariance with a penalized covariance estimator to reduce the number of required model simulations. Further, a semi-parametric approach has been developed to relax the normality assumption. Finally, another extension of BSL aims to develop a more robust synthetic likelihood estimator while acknowledging there might be model misspecification. In this paper, we present the R package BSL that amalgamates the aforementioned methods and more into a single, easy-to-use and coherent piece of software. The package also includes several examples to illustrate use of the package and the utility of the methods.
77,891
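A minimal Python sketch of the synthetic-likelihood idea behind BSL (this is not the R package's interface; the toy model and summary statistic are our own): fit a Gaussian to simulated summaries and evaluate it at the observed summary.

```python
# Minimal sketch of the synthetic-likelihood idea: approximate the likelihood of
# an observed summary statistic by a Gaussian fitted to simulated summaries.
# The toy model, summary, and grid search are illustrative assumptions.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)

def simulate(theta, n=100):
    return rng.normal(theta, 1.0, size=n)        # toy stand-in for an intractable model

def summary(x):
    return np.array([x.mean(), x.std()])

s_obs = summary(simulate(2.0))                   # pretend these are the observed data

def synthetic_loglik(theta, m=200):
    sims = np.array([summary(simulate(theta)) for _ in range(m)])
    mu, cov = sims.mean(axis=0), np.cov(sims, rowvar=False)
    return multivariate_normal(mu, cov).logpdf(s_obs)

grid = np.linspace(0.5, 3.5, 13)
lls = [synthetic_loglik(th) for th in grid]
print("synthetic log-likelihood is maximized near theta =", grid[int(np.argmax(lls))])
```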
Title: Edge-partitioning 3-edge-connected graphs into paths Abstract: We show that for every ℓ, there exists d_ℓ such that every 3-edge-connected graph with minimum degree d_ℓ can be edge-partitioned into paths of length ℓ (provided that its number of edges is divisible by ℓ). This improves a result asserting that 24-edge-connectivity and high minimum degree provide such a partition. This is best possible as 3-edge-connectivity cannot be replaced by 2-edge-connectivity.
77,908
Title: On the extremal function for graph minors Abstract: For a graph H, let c(H) = inf{c : e(G) ≥ c|G| implies G > H}, where G > H means that H is a minor of G. We show that if H has average degree d, then c(H) ≤ (0.319... + o_d(1))|H|√(log d), where 0.319... is an explicitly defined constant. This bound matches a corresponding lower bound shown to hold for almost all such H by Norin, Reed, Wood and the first author.
77,910
Title: Adaptive Flight Control in the Presence of Limits on Magnitude and Rate Abstract: Input constraints as well as parametric uncertainties must be accounted for in the design of safe control systems. This paper presents an adaptive controller for multiple-input-multiple-output (MIMO) plants with input magnitude and rate saturation in the presence of parametric uncertainties. A filter is introduced in the control path to accommodate the presence of rate limits. An output feedback adaptive controller is designed to stabilize the closed loop system even in the presence of this filter. The overall control architecture includes adaptive laws that are modified to account for the magnitude and rate limits. Analytical guarantees of stable adaptation, bounded trajectories, and satisfactory tracking are provided. Three flight control simulations with nonlinear models of the aircraft dynamics are provided to demonstrate the efficacy of the proposed adaptive controller for open loop stable and unstable systems in the presence of uncertainties in the dynamics as well as input magnitude and rate saturation.
77,922
Title: ChaLearn Looking at People: IsoGD and ConGD Large-Scale RGB-D Gesture Recognition Abstract: The ChaLearn large-scale gesture recognition challenge has run twice in two workshops in conjunction with the International Conference on Pattern Recognition (ICPR) 2016 and International Conference on Computer Vision (ICCV) 2017, attracting more than 200 teams around the world. This challenge has two tracks, focusing on isolated and continuous gesture recognition, respectively. It describes the creation of both benchmark datasets and analyzes the advances in large-scale gesture recognition based on these two datasets. In this article, we discuss the challenges of collecting large-scale ground-truth annotations of gesture recognition and provide a detailed analysis of the current methods for large-scale isolated and continuous gesture recognition. In addition to the recognition rate and mean Jaccard index (MJI) as evaluation metrics used in previous challenges, we introduce the corrected segmentation rate (CSR) metric to evaluate the performance of temporal segmentation for continuous gesture recognition. Furthermore, we propose a bidirectional long short-term memory (Bi-LSTM) method, determining video division points based on skeleton points. Experiments show that the proposed Bi-LSTM outperforms state-of-the-art methods with an absolute improvement of 8.1% (from 0.8917 to 0.9639) of CSR.
77,934
Title: A Mathematical Model for Universal Semantics Abstract: We characterize the meaning of words with language-independent numerical fingerprints, through a mathematical analysis of recurring patterns in texts. Approximating texts by Markov processes on a long-range time scale, we are able to extract topics, discover synonyms, and sketch semantic fields from a particular document of moderate length, without consulting external knowledge-base or thesaurus. Our Markov semantic model allows us to represent each topical concept by a low-dimensional vector, interpretable as algebraic invariants in succinct statistical operations on the document, targeting local environments of individual words. These language-independent semantic representations enable a robot reader to both understand short texts in a given language (automated question-answering) and match medium-length texts across different languages (automated word translation). Our semantic fingerprints quantify local meaning of words in 14 representative languages across five major language families, suggesting a universal and cost-effective mechanism by which human languages are processed at the semantic level. Our protocols and source codes are publicly available on https://github.com/yajun-zhou/linguae-naturalis-principia-mathematica.
77,940
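A toy sketch in the spirit of the Markov approach above: estimate a word-transition matrix from a short text and compare two words by the cosine similarity of their transition rows. The corpus and similarity heuristic are illustrative, not the paper's model.

```python
# Toy word-transition Markov estimate from a short text; the corpus and the
# row-cosine similarity heuristic are illustrative assumptions only.
import numpy as np

text = ("the cat chased the mouse and the dog chased the cat "
        "while the mouse ran from the cat and the dog").split()
vocab = sorted(set(text))
idx = {w: i for i, w in enumerate(vocab)}
T = np.zeros((len(vocab), len(vocab)))
for a, b in zip(text, text[1:]):
    T[idx[a], idx[b]] += 1
T = T / T.sum(axis=1, keepdims=True)          # row-stochastic transition estimates

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

w1, w2 = "cat", "dog"
print(f"cosine({w1}, {w2}) =", round(cosine(T[idx[w1]], T[idx[w2]]), 3))
```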
Title: Consensus Feature Network for Scene Parsing. Abstract: Scene parsing is challenging as it aims to assign one of the semantic categories to each pixel in scene images. Thus, pixel-level features are desired for scene parsing. However, classification networks are dominated by the discriminative portion, so directly applying classification networks to scene parsing will result in inconsistent parsing predictions within one instance and among instances of the same category. To address this problem, we propose two transform units to learn pixel-level consensus features. One is an Instance Consensus Transform (ICT) unit to learn the instance-level consensus features by aggregating features within the same instance. The other is a Category Consensus Transform (CCT) unit to pursue category-level consensus features through keeping the consensus of features among instances of the same category in scene images. The proposed ICT and CCT units are lightweight, data-driven and end-to-end trainable. The features learned by the two units are more coherent at both the instance and category levels. Furthermore, we present the Consensus Feature Network (CFNet) based on the proposed ICT and CCT units. Experiments on four scene parsing benchmarks, including Cityscapes, Pascal Context, CamVid, and COCO Stuff, show that the proposed CFNet learns pixel-level consensus features and obtains consistent parsing results.
77,944
Title: Integer Programming, Constraint Programming, and Hybrid Decomposition Approaches to Discretizable Distance Geometry Problems Abstract: Given an integer dimension K and a simple, undirected graph G with positive edge weights, the Distance Geometry Problem (DGP) aims to find a realization function mapping each vertex to a coordinate in R^K such that the distance between pairs of vertex coordinates is equal to the corresponding edge weights in G. The so-called discretization assumptions reduce the search space of the realization to a finite discrete one, which can be explored via the branch-and-prune (BP) algorithm. Given a discretization vertex order in G, the BP algorithm constructs a binary tree where the nodes at a layer provide all possible coordinates of the vertex corresponding to that layer. The focus of this paper is on finding optimal BP trees for a class of discretizable DGPs. More specifically, we aim to find a discretization vertex order in G that yields a BP tree with the least number of branches. We propose an integer programming formulation and three constraint programming formulations that all significantly outperform the state-of-the-art cutting-plane algorithm for this problem. Moreover, motivated by the difficulty in solving instances with a large and low-density input graph, we develop two hybrid decomposition algorithms, strengthened by a set of valid inequalities, which further improve the solvability of the problem. Summary of Contribution: We present a new model to solve a combinatorial optimization problem on graphs, MIN DOUBLE, which comes from the highly active area of distance geometry and has applications in a wide variety of fields. We use integer programming (IP) and present the first constraint programming (CP) models and hybrid decomposition methods, implemented as a branch-and-cut procedure, for MIN DOUBLE. Through an extensive computational study, we show that our approaches advance the state of the art for MIN DOUBLE. We accomplish this by not only combining generic techniques from IP and CP but also exploring the structure of the problem in developing valid inequalities and variable fixing rules. Our methods significantly improve the solvability of MIN DOUBLE, which we believe can also provide insights for tackling other problem classes and applications.
77,949
Title: Augmented reality instructions for construction toys enabled by accurate model registration and realistic object/hand occlusions Abstract: BRICKxAR is a novel augmented reality (AR) instruction method for construction toys such as LEGO®. With BRICKxAR, physical LEGO construction is guided by virtual bricks. Compared with the state of the art, accuracy of the virtual–physical model alignment is significantly improved through a new design of marker-based registration, which can achieve an average error less than 1 mm throughout the model. Realistic object occlusion is accomplished to reveal the true spatial relationship between physical and virtual bricks. LEGO players’ hand detection and occlusion are realized to visualize the correct spatial relationship between real hands and virtual bricks, and allow virtual bricks to be “grasped” by real hands. The major finding of the research is that the integration of these features makes AR instructions possible for small parts assembly, validated through a working AR prototype for constructing LEGO Arc de Triomphe and quantitative measures of the accuracies of registration and occlusions. In addition, a heuristic evaluation of BRICKxAR’s features has led to findings that the present method could advance AR instructions in terms of enhancing part visibility, match between mental models and visualization, alignment of physical and virtual parts in perspective views and spatial transformations, tangible user interface, consolidated structural diagrams, virtual cutaway views, among other benefits for guiding construction.
77,951
Title: Approximate Capture in Gromov-Hausdorff Close Spaces Abstract: This paper addresses the robustness of the capture radii with respect to perturbation in the phase space on the example of the so-called Lion and Man game. This is a two-person pursuit-evasion game with equal players' top speeds. The existence of α-capture by a time T in one compact geodesic space is proved to yield the existence of (α + (20T + 8)√δ)-capture by the time T in any compact geodesic space that is δ-close to the given one. In this way, a pursuer's strategy in one space is transferred to another space that is close to the given one in the sense of the Gromov-Hausdorff distance. It means that the capture radii (in similar spaces) tend to the given one as the distance between spaces tends to zero. In particular, this result justifies the consideration of Lion and Man game on the finite metric graphs instead of complicated original spaces.
77,954
Title: Data-Driven Identification of Dissipative Linear Models for Nonlinear Systems Abstract: We consider the problem of identifying a dissipative linear model of an unknown nonlinear system that is known to be dissipative, from time-domain input–output data. We first learn an approximate linear model of the nonlinear system using standard system identification techniques and then perturb the system matrices of the linear model to enforce dissipativity, while closely approximating the dynamical behavior of the nonlinear system. Further, we provide an analytical relationship between the size of the perturbation and the radius in which the dissipativity of the linear model guarantees local dissipativity of the unknown nonlinear system. We demonstrate the application of this identification technique through two examples.
77,959
Title: On certain polynomial systems involving Stirling numbers of second kind Abstract: We solve a special type of linear systems with coefficients in multivariate polynomial rings. These systems arise in the computation of b-functions with respect to weights of certain hypergeometric ideals in the Weyl algebra.
77,961
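For background on the coefficients named in the entry above, here is a short computation of Stirling numbers of the second kind via the standard recurrence S(n, k) = k·S(n-1, k) + S(n-1, k-1); the linear systems solved in the paper are not reproduced here.

```python
# Stirling numbers of the second kind via the standard recurrence
# S(n, k) = k*S(n-1, k) + S(n-1, k-1); background only, not the paper's systems.
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    if n == k:
        return 1                      # includes S(0, 0) = 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

for n in range(1, 7):
    print([stirling2(n, k) for k in range(1, n + 1)])
```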
Title: Multikernel Capsule Network for Schizophrenia Identification Abstract: Schizophrenia seriously affects the quality of life. To date, both simple (e.g., linear discriminant analysis) and complex (e.g., deep neural network) machine-learning methods have been utilized to identify schizophrenia based on functional connectivity features. The existing simple methods need two separate steps (i.e., feature extraction and classification) to achieve the identification, which disables simultaneous tuning for the best feature extraction and classifier training. The complex methods integrate two steps and can be simultaneously tuned to achieve optimal performance, but these methods require a much larger amount of data for model training. To overcome the aforementioned drawbacks, we proposed a multikernel capsule network (MKCapsnet), which was developed by considering the brain anatomical structure. Kernels were set to match partition sizes of the brain anatomical structure in order to capture interregional connectivities at the varying scales. With the inspiration of the widely used dropout strategy in deep learning, we developed capsule dropout in the capsule layer to prevent overfitting of the model. The comparison results showed that the proposed method outperformed the state-of-the-art methods. Besides, we compared performances using different parameters and illustrated the routing process to reveal characteristics of the proposed method. MKCapsnet is promising for schizophrenia identification. Our study first utilized the capsule neural network for analyzing functional connectivity of magnetic resonance imaging (MRI) and proposed a novel multikernel capsule structure with the consideration of brain anatomical parcellation, which could be a new way to reveal brain mechanisms. In addition, we provided useful information in the parameter setting, which is informative for further studies using a capsule network for other neurophysiological signal classification.
77,968
Title: Distributed Resource Allocation Over Time-Varying Balanced Digraphs With Discrete-Time Communication Abstract: This work is concerned with the problem of distributed resource allocation in continuous-time setting but with discrete-time communication over infinitely jointly connected and balanced digraphs. We provide a passivity-based perspective for the continuous-time algorithm, based on which an intermittent communication scheme is developed. Particularly, a periodic communication scheme is first derived through analyzing the passivity degradation over output sampling of the distributed dynamics at each node. Then, an asynchronous distributed event-triggered scheme is further developed. The sampled-based event-triggered communication scheme is exempt from Zeno behavior as the minimum interevent time is lower bounded by the sampling period. The parameters in the proposed algorithm rely only on local information of each individual node, which can be designed in a truly distributed fashion.
77,976
Title: Constraint programming approaches for the discretizable molecular distance geometry problem Abstract: The Distance Geometry Problem (DGP) seeks to find positions for a set of points in geometric space when some distances between pairs of these points are known. The so-called discretization assumptions allow us to discretize the search space of DGP instances. In this paper, we focus on a key subclass of DGP, namely the Discretizable Molecular DGP, and study its associated graph vertex ordering problem, the Contiguous Trilateration Ordering Problem (CTOP), which helps solve DGP. We propose the first constraint programming formulations for CTOP, as well as a set of checks for proving infeasibility, domain reduction techniques, symmetry breaking constraints, and valid inequalities. Our computational results on random and pseudo-protein instances indicate that our formulations outperform the state-of-the-art integer programming formulations.
78,004
Title: Machine Learning at the Network Edge: A Survey Abstract: Resource-constrained IoT devices, such as sensors and actuators, have become ubiquitous in recent years. This has led to the generation of large quantities of data in real-time, which is an appealing target for AI systems. However, deploying machine learning models on such end-devices is nearly impossible. A typical solution involves offloading data to external computing systems (such as cloud servers) for further processing but this worsens latency, leads to increased communication costs, and adds to privacy concerns. To address this issue, efforts have been made to place additional computing devices at the edge of the network, i.e., close to the IoT devices where the data is generated. Deploying machine learning systems on such edge computing devices alleviates the above issues by allowing computations to be performed close to the data sources. This survey describes major research efforts where machine learning systems have been deployed at the edge of computer networks, focusing on the operational aspects including compression techniques, tools, frameworks, and hardware used in successful applications of intelligent edge systems.
78,006
Title: A model of random industrial SAT Abstract: One of the most studied models of SAT is random SAT. In this model, instances are composed from clauses chosen uniformly randomly and independently of each other. This model may be unsatisfactory in that it fails to describe various features of SAT instances arising in real-world applications. Various modifications have been suggested to define models of industrial SAT. Here, we focus mainly on the aspect of community structure. Namely, here the set of variables consists of a number of disjoint communities, and clauses tend to consist of variables from the same community. Thus, we suggest a model of random industrial SAT, in which the central generalization with respect to random SAT is the additional community structure. There has been a lot of work on the satisfiability threshold of random k-SAT, starting with the calculation of the threshold of 2-SAT, up to the recent result that the threshold exists for sufficiently large k. In this paper, we endeavor to study the satisfiability threshold for the proposed model of random industrial SAT. Our main result is that the threshold in this model tends to be smaller than its counterpart for random SAT. Moreover, under some conditions, this threshold even vanishes.
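A minimal sketch of how a community-structured random k-SAT instance along these lines could be generated. The number of communities, clause width, and the intra-community probability are illustrative parameters chosen here, not the paper's exact model.

```python
import random

def random_industrial_sat(n_vars=90, n_communities=3, n_clauses=300, k=3, p_intra=0.9, seed=0):
    """Sample a k-SAT instance whose clauses tend to stay inside one community."""
    rng = random.Random(seed)
    # Split the variables 1..n_vars into equally sized, disjoint communities.
    size = n_vars // n_communities
    communities = [list(range(i * size + 1, (i + 1) * size + 1)) for i in range(n_communities)]
    clauses = []
    for _ in range(n_clauses):
        if rng.random() < p_intra:
            pool = rng.choice(communities)      # intra-community clause
        else:
            pool = list(range(1, n_vars + 1))   # occasional clause spanning communities
        vars_ = rng.sample(pool, k)
        clauses.append([v if rng.random() < 0.5 else -v for v in vars_])
    return clauses

print(random_industrial_sat()[:3])
```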
78,007
Title: Hypergraph Based Berge Hypergraphs Abstract: Fix a hypergraph F. A hypergraph H is called a Berge copy of F or Berge-F if we can choose a subset of each hyperedge of H to obtain a copy of F. A hypergraph H is Berge-F-free if it does not contain a subhypergraph which is a Berge copy of F. This is a generalization of the usual, graph-based Berge hypergraphs, where F is a graph. In this paper, we study extremal properties of hypergraph based Berge hypergraphs and generalize several results from the graph-based setting. In particular, we show that for any r-uniform hypergraph F, the sum of the sizes of the hyperedges of a (not necessarily uniform) Berge-F-free hypergraph H on n vertices is o(n^r) when all the hyperedges of H are large enough. We also give a connection between hypergraph based Berge hypergraphs and generalized hypergraph Turán problems.
78,008
Title: Updating Variational Bayes: fast sequential posterior inference Abstract: Variational Bayesian (VB) methods produce posterior inference in a time frame considerably smaller than traditional Markov Chain Monte Carlo approaches. Although the VB posterior is an approximation, it has been shown to produce good parameter estimates and predicted values when a rich class of approximating distributions is considered. In this paper, we propose the use of recursive algorithms to update a sequence of VB posterior approximations in an online, time series setting, with the computation of each posterior update requiring only the data observed since the previous update. We show how importance sampling can be incorporated into online variational inference, allowing the user to trade accuracy for a substantial increase in computational speed. The proposed methods and their properties are detailed in two separate simulation studies. Additionally, two empirical illustrations are provided, including one where a Dirichlet Process Mixture model with a novel posterior dependence structure is repeatedly updated in the context of predicting the future behaviour of vehicles on a stretch of the US Highway 101.
78,014
Title: A Hessenberg-type algorithm for computing PageRank Problems Abstract: PageRank is a widespread model for analysing the relative relevance of nodes within large graphs arising in several applications. In the current paper, we present a cost-effective Hessenberg-type method built upon the Hessenberg process for the solution of difficult PageRank problems. The new method is very competitive with other popular algorithms in this field, such as Arnoldi-type methods, especially when the damping factor is close to 1 and the dimension of the search subspace is large. The convergence and the complexity of the proposed algorithm are investigated. Numerical experiments are reported to show the efficiency of the new solver for practical PageRank computations.
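This is not the Hessenberg-type solver described above; as a point of reference, here is a minimal sketch of the standard power iteration that Krylov-type PageRank solvers are typically compared against, run with a damping factor close to 1. The adjacency matrix, damping factor, and tolerance below are illustrative.

```python
import numpy as np

def pagerank_power(A, alpha=0.99, tol=1e-10, max_iter=10000):
    """Standard power iteration for PageRank; A[i, j] = 1 if page j links to page i."""
    n = A.shape[0]
    out_deg = A.sum(axis=0)
    # Column-stochastic transition matrix; dangling nodes get a uniform column.
    P = np.where(out_deg > 0, A / np.where(out_deg == 0, 1, out_deg), 1.0 / n)
    x = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        x_new = alpha * P @ x + (1 - alpha) / n
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
    return x

A = np.array([[0, 0, 1, 0],
              [1, 0, 0, 1],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(pagerank_power(A))
```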
78,016
Title: Adversarial Camera Alignment Network for Unsupervised Cross-Camera Person Re-Identification Abstract: In person re-identification (Re-ID), supervised methods usually need a large amount of expensive label information, while unsupervised ones are still unable to deliver satisfactory identification performance. In this paper, we introduce a novel person Re-ID task called unsupervised cross-camera person Re-ID, which only needs the within-camera (intra-camera) label information but not cross-camera (...
78,043
Title: The Power of the Weighted Sum Scalarization for Approximating Multiobjective Optimization Problems Abstract: We determine the power of the weighted sum scalarization with respect to the computation of approximations for general multiobjective minimization and maximization problems. Additionally, we introduce a new multi-factor notion of approximation that is specifically tailored to the multiobjective case and its inherent trade-offs between different objectives. For minimization problems, we provide an efficient algorithm that computes an approximation of a multiobjective problem by using an exact or approximate algorithm for its weighted sum scalarization. In case that an exact algorithm for the weighted sum scalarization is used, this algorithm comes arbitrarily close to the best approximation quality that is obtainable by supported solutions – both with respect to the common notion of approximation and with respect to the new multi-factor notion. Moreover, the algorithm yields the currently best approximation results for several well-known multiobjective minimization problems. For maximization problems, however, we show that a polynomial approximation guarantee can, in general, not be obtained in more than one of the objective functions simultaneously by supported solutions.
78,067
Title: Computing the Inverse Geodesic Length in Planar Graphs and Graphs of Bounded Treewidth Abstract: The inverse geodesic length of a graph G is the sum of the inverse of the distances between all pairs of distinct vertices of G. In some domains, it is known as the Harary index or the global efficiency of the graph. We show that, if G is planar and has n vertices, then the inverse geodesic length of G can be computed in roughly O(n^{9/5}) time. We also show that, if G has n vertices and treewidth at most k, then the inverse geodesic length of G can be computed in O(n log^{O(k)} n) time. In both cases, we use techniques developed for computing the sum of the distances, which does not have the "inverse" component, together with batched evaluations of rational functions.
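As a point of reference for the quantity itself, a brute-force sketch that computes the inverse geodesic length of an unweighted graph by running one BFS per vertex (O(n(n+m)) time). This is the naive baseline, not the subquadratic planar or bounded-treewidth algorithms described above; the adjacency-dictionary input format is an assumption.

```python
from collections import deque

def inverse_geodesic_length(adj):
    """Sum of 1/d(u, v) over all unordered pairs of distinct, connected vertices.

    adj: dict mapping each vertex to an iterable of neighbours (unweighted graph).
    """
    total = 0.0
    for source in adj:
        dist = {source: 0}
        queue = deque([source])
        while queue:                      # plain BFS from `source`
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(1.0 / d for d in dist.values() if d > 0)
    return total / 2.0                    # each pair was counted twice

# 4-cycle: four pairs at distance 1 and two pairs at distance 2.
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(inverse_geodesic_length(cycle4))    # 4*1 + 2*0.5 = 5.0
```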
78,076
Title: The Fault-Tolerant Cluster-Sending Problem. Abstract: The development of fault-tolerant distributed systems that can tolerate Byzantine behavior has traditionally been focused on consensus protocols, which support fully-replicated designs. For the development of more sophisticated high-performance Byzantine distributed systems, more specialized fault-tolerant communication primitives are necessary, however. In this paper, we identify an essential communication primitive and study it in depth. Specifically, we formalize the cluster-sending problem, the problem of sending a message from one Byzantine cluster to another Byzantine cluster in a reliable manner. We not only formalize this fundamental problem, but also establish lower bounds on the complexity of this problem under crash failures and Byzantine failures. Furthermore, we develop practical cluster-sending protocols that meet these lower bounds and, hence, have optimal complexity. As such, our work provides a strong foundation for the further exploration of novel designs that address challenges encountered in fault-tolerant distributed systems.
78,080
Title: Reusability and Transferability of Macro Actions for Reinforcement Learning Abstract: Conventional reinforcement learning (RL) typically determines an appropriate primitive action at each timestep. However, by using a proper macro action, defined as a sequence of primitive actions, an RL agent is able to bypass intermediate states to a farther state and facilitate its learning procedure. The problem we would like to investigate is what beneficial properties macro actions may possess. In this article, we unveil the properties of reusability and transferability of macro actions. The first property, reusability, means that a macro action derived along with one RL method can be reused by another RL method for training, while the second one, transferability, indicates that a macro action can be utilized for training agents in similar environments with different reward settings. In our experiments, we first derive macro actions along with RL methods. We then provide a set of analyses to reveal the properties of reusability and transferability of the derived macro actions.
78,082
Title: ‘Sometime a paradox’, now proof: Yablo is not first order Abstract: Interesting as they are by themselves in philosophy and mathematics, paradoxes can be made even more fascinating when turned into proofs and theorems. For example, Russell’s paradox, which overthrew Frege’s logical edifice, is now a classical theorem in set theory, to the effect that no set contains all sets. Paradoxes can be used in proofs of some other theorems—thus Liar’s paradox has been used in the classical proof of Tarski’s theorem on the undefinability of truth in sufficiently rich languages. This paradox (as well as Richard’s paradox) appears implicitly in Gödel’s proof of his celebrated first incompleteness theorem. In this paper, we study Yablo’s paradox from the viewpoint of first- and second-order logics. We prove that a formalization of Yablo’s paradox (which is second order in nature) is non-first-orderizable in the sense of George Boolos (1984).
78,083
Title: Efficient Fair Division with Minimal Sharing. Abstract: A collection of objects, some of which are good and some are bad, is to be divided fairly among agents with different tastes, modeled by additive utility functions. If the objects cannot be shared, so that each of them must be entirely allocated to a single agent, then a fair division may not exist. What is the smallest number of objects that must be shared between two or more agents in order to attain a fair and efficient division? In this paper, fairness is understood as proportionality or envy-freeness, and efficiency, as fractional Pareto-optimality. We show that, for a generic instance of the problem (all instances except a zero-measure set of degenerate problems), a fair fractionally Pareto-optimal division with the smallest possible number of shared objects can be found in polynomial time, assuming that the number of agents is fixed. The problem becomes computationally hard for degenerate instances, where agents' valuations are aligned for many objects.
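A small sketch of the two fairness notions used above, proportionality and envy-freeness, checked for additive utilities on an indivisible allocation for simplicity; the utility matrix and allocation are illustrative, and sharing as well as fractional Pareto-optimality are not modeled here.

```python
def is_proportional(utilities, allocation):
    """Each agent values its own bundle at >= 1/n of its value for all objects."""
    n = len(utilities)
    return all(sum(utilities[i][o] for o in allocation[i]) >= sum(utilities[i]) / n
               for i in range(n))

def is_envy_free(utilities, allocation):
    """No agent values another agent's bundle strictly more than its own."""
    n = len(utilities)
    value = lambda i, bundle: sum(utilities[i][o] for o in bundle)
    return all(value(i, allocation[i]) >= value(i, allocation[j])
               for i in range(n) for j in range(n))

# Two agents, three objects, additive utilities (rows = agents, columns = objects).
utilities = [[6, 1, 3], [2, 5, 3]]
allocation = [[0], [1, 2]]     # agent 0 gets object 0, agent 1 gets objects 1 and 2
print(is_proportional(utilities, allocation), is_envy_free(utilities, allocation))  # True True
```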
78,092
Title: On the Existence of Simpler Machine Learning Models. Abstract: The Rashomon effect occurs when many different explanations exist for the same phenomenon. In machine learning, Leo Breiman used this term to describe problems where many accurate-but-different models exist to describe the same data. In this work, we study how the Rashomon effect can be useful for understanding the relationship between training and test performance, and the possibility that simple-yet-accurate models exist for many problems. We introduce the Rashomon set as the set of almost-equally-accurate models for a given problem, and study its properties and the types of models it could contain. We present the Rashomon ratio as a new measure related to simplicity of model classes, which is the ratio of the volume of the set of accurate models to the volume of the hypothesis space; the Rashomon ratio is different from standard complexity measures from statistical learning theory. For a hierarchy of hypothesis spaces, the Rashomon ratio can help modelers to navigate the trade-off between simplicity and accuracy in a surprising way. In particular, we find empirically that a plot of empirical risk vs. Rashomon ratio forms a characteristic $\Gamma$-shaped Rashomon curve, whose elbow seems to be a reliable model selection criterion. When the Rashomon set is large, models that are accurate - but that also have various other useful properties - can often be obtained. These models might obey various constraints such as interpretability, fairness, monotonicity, and computational benefits.
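A toy illustration of the Rashomon set and Rashomon ratio ideas, under stated assumptions: the hypothesis space is a bounded box of linear classifiers, its "volume" is approximated by Monte Carlo sampling, and models within epsilon of the best empirical 0-1 risk form the Rashomon set. The data, box, and epsilon are illustrative choices, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian blobs with labels in {-1, +1}.
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(+1, 1, (100, 2))])
y = np.hstack([-np.ones(100), np.ones(100)])

# Monte Carlo over a bounded box of linear classifiers sign(w @ x + b).
params = rng.uniform(-3, 3, size=(20000, 3))             # columns: (w1, w2, b)
preds = np.sign(X @ params[:, :2].T + params[:, 2])      # shape (n_samples, n_models)
risks = (preds != y[:, None]).mean(axis=0)               # empirical 0-1 risk per model

epsilon = 0.02
rashomon_set = risks <= risks.min() + epsilon
print("best empirical risk:", risks.min())
print("Rashomon ratio (volume fraction):", rashomon_set.mean())
```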
78,099
Title: Bayesian incremental inference update by re-using calculations from belief space planning: a new paradigm Abstract: Inference and decision making under uncertainty are key processes in every autonomous system and numerous robotic problems. In recent years, the similarities between inference and decision making triggered much work, from developing unified computational frameworks to pondering about the duality between the two. In spite of these efforts, inference and control, as well as inference and belief space planning (BSP) are still treated as two separate processes. In this paper we propose a paradigm shift, a novel approach which deviates from conventional Bayesian inference and utilizes the similarities between inference and BSP. We make the key observation that inference can be efficiently updated using predictions made during the decision making stage, even in light of inconsistent data association between the two. We developed a two staged process that implements our novel approach and updates inference using calculations from the precursory planning phase. Using autonomous navigation in an unknown environment along with iSAM2 efficient methodologies as a test case, we benchmarked our novel approach against standard Bayesian inference, both with synthetic and real-world data (KITTI dataset). Results indicate that not only our approach improves running time by at least a factor of two while providing the same estimation accuracy, but it also alleviates the computational burden of state dimensionality and loop closures.
78,109
Title: Finite-Horizon Optimal Control of Boolean Control Networks: A Unified Graph-Theoretical Approach Abstract: This article investigates the finite-horizon optimal control (FHOC) problem of Boolean control networks (BCNs) from a graph theory perspective. We first formulate two general problems to unify various special cases studied in the literature: 1) the horizon length is a priori fixed and 2) the horizon length is unspecified but finite for given destination states. Notably, both problems can incorporate time-variant costs, which are rarely considered in existing work, and a variety of constraints. The existence of an optimal control sequence is analyzed under mild assumptions. Motivated by BCNs’ finite state space and control space, we approach the two general problems intuitively and efficiently under a graph-theoretical framework. A weighted state transition graph and its time-expanded variants are developed, and the equivalence between the FHOC problem and the shortest-path (SP) problem in specific graphs is established rigorously. Two algorithms are developed to find the SP and construct the optimal control sequence for the two problems with reduced computational complexity, though technically, a classical SP algorithm in graph theory is sufficient for all problems. Compared with existing algebraic methods, our graph-theoretical approach can achieve state-of-the-art time efficiency while targeting the most general problems. Furthermore, our approach is the first one capable of solving Problem 2) with time-variant costs. Finally, a genetic network in the bacterium E. coli and a signaling network involved in human leukemia are used to validate the effectiveness of our approach. The results of two common tasks for both networks show that our approach can dramatically reduce the running time. Python implementation of our algorithms is available at GitHub https://github.com/ShuhuaGao/FHOC.
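Since the abstract reduces FHOC to a shortest-path search on a weighted (time-expanded) state transition graph, below is a generic Dijkstra sketch on a small hypothetical transition graph whose nodes stand for states and whose edge weights stand for stage costs. The graph, the costs, and the assumption that the target is reachable are illustrative only; this is the classical SP ingredient, not the paper's algorithms.

```python
import heapq

def dijkstra(graph, source, target):
    """Shortest path by Dijkstra; graph[u] = list of (v, weight) transitions.
    Assumes `target` is reachable from `source`."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return dist[target], path[::-1]

# Hypothetical weighted state-transition graph: states 0..3, weights = stage costs.
graph = {0: [(1, 2.0), (2, 5.0)], 1: [(2, 1.0), (3, 7.0)], 2: [(3, 2.0)]}
print(dijkstra(graph, 0, 3))   # (5.0, [0, 1, 2, 3])
```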
78,111
Title: Extremal graphs for edge blow-up of graphs Abstract: Given a graph H and an integer p, the edge blow-up H^{p+1} of H is the graph obtained by replacing each edge in H by a clique of order p+1, where the new vertices of the cliques are all distinct. The Turán numbers for edge blow-up of matchings were first studied by Erdős and Moon. In this paper, we determine the range of the Turán numbers for edge blow-up of all bipartite graphs and the exact Turán numbers for edge blow-up of all non-bipartite graphs. In addition, we characterize the extremal graphs for edge blow-up of all non-bipartite graphs. Our results also extend several known results, including the Turán numbers for edge blow-up of stars, paths and cycles. The method we used can also be applied to find a family of counter-examples to a conjecture posed by Keevash and Sudakov in 2004 concerning the maximum number of edges not contained in any monochromatic copy of H in a 2-edge-coloring of K_n.
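A short sketch of the edge blow-up construction itself (not the extremal results): each edge of H is replaced by a clique of order p+1 on the edge's endpoints plus p-1 fresh vertices, with fresh vertices distinct across edges. The use of networkx and the P4 example are illustrative.

```python
import itertools
import networkx as nx

def edge_blow_up(H, p):
    """Return H^{p+1}: replace every edge of H by a clique of order p+1 on fresh vertices."""
    G = nx.Graph()
    G.add_nodes_from(H.nodes())
    fresh = itertools.count()                     # labels for the new, distinct vertices
    for u, v in H.edges():
        clique = [u, v] + [("new", next(fresh)) for _ in range(p - 1)]
        G.add_edges_from(itertools.combinations(clique, 2))
    return G

H = nx.path_graph(4)           # P4 has 3 edges
G = edge_blow_up(H, 3)         # each edge becomes a K4
print(G.number_of_nodes(), G.number_of_edges())   # 4 + 3*2 = 10 nodes, 3*6 = 18 edges
```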
78,112
Title: Model Agnostic Defence Against Backdoor Attacks in Machine Learning Abstract: Machine learning (ML) has automated a multitude of our day-to-day decision-making domains, such as education, employment, and driving automation. The continued success of ML largely depends on our ability to trust the model we are using. Recently, a new class of attacks called backdoor attacks has been developed. These attacks undermine the user’s trust in ML models. In this article, we present Neo, a model agnostic framework to detect and mitigate such backdoor attacks in image classification ML models. For a given image classification model, our approach analyzes the inputs it receives and determines if the model is backdoored. In addition to this feature, we also mitigate these attacks by determining the correct predictions of the poisoned images. We have implemented Neo and evaluated it against three state-of-the-art poisoned models. In our evaluation, we show that Neo can detect ≈ 88% of the poisoned inputs on average and it is as fast as 4.4 ms per input image. We also compare our Neo approach with the state-of-the-art defence methodologies proposed for backdoor attacks. Our evaluation reveals that despite being a blackbox approach, Neo is more effective in thwarting backdoor attacks than the existing techniques. Finally, we also reconstruct the exact poisoned input for the user to effectively test their systems.
78,117
Title: Quantile Inverse Optimization: Improving Stability in Inverse Linear Programming. Abstract: Inverse linear programming (LP) has received increasing attention due to its potential to generate efficient optimization formulations that can closely replicate the behavior of a complex system. However, inversely inferred parameters and corresponding forward solutions from the existing inverse LP method can be highly sensitive to noise, errors, and uncertainty in the input data, limiting its applicability in data-driven settings. We introduce the notion of inverse and forward stability in inverse LP and propose a novel inverse LP method that determines a set of objective functions that are stable under data imperfection and generate solutions close to the relevant subset of the data. We formulate the inverse model as a mixed-integer program and elucidate its connection to bi-clique problems, which we exploit to develop efficient heuristics. We also show how this method can be used for online learning. We numerically evaluate the stability of the proposed method and demonstrate its practical use in the diet recommendation and transshipment applications.
78,124
Title: Robust Quantum Metrology With Explicit Symmetric States Abstract: Quantum metrology is a promising practical use case for quantum technologies, where physical quantities can be measured with unprecedented precision. In lieu of quantum error correction procedures, near term quantum devices are expected to be noisy, and we have to make do with noisy probe states. We prove that, for a set of carefully chosen symmetric probe states that lie within certain quantum er...
78,125
Title: Faster tensor train decomposition for sparse data Abstract: In recent years, the application of tensors has become more widespread in fields that involve data analytics and numerical computation. Due to the explosive growth of data, low-rank tensor decompositions have become a powerful tool to harness the notorious curse of dimensionality. The main forms of tensor decomposition include CP decomposition, Tucker decomposition, tensor train (TT) decomposition, etc. Each of the existing TT decomposition algorithms, including the TT-SVD and randomized TT-SVD, is successful in the field, but neither can both accurately and efficiently decompose large-scale sparse tensors. Based on previous research, this paper proposes a new quasi-optimal fast TT decomposition algorithm for large-scale sparse tensors, with proven correctness and a derived upper bound on its computational complexity. It can also efficiently produce sparse TT with no numerical error and slightly larger TT-ranks on demand. In numerical experiments, we verify that the proposed algorithm can decompose sparse tensors much faster than the TT-SVD, and has advantages in speed, precision, and versatility over the randomized TT-SVD and TT-cross. With it, we can realize large-scale sparse matrix TT decomposition that was previously unachievable, enabling tensor decomposition based algorithms to be applied in more scenarios.
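For orientation, a minimal dense TT-SVD sketch via successive truncated SVDs, i.e., the standard baseline the abstract contrasts with, not the proposed sparse algorithm; the truncation rule and the test tensor are illustrative.

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Plain (dense) TT-SVD: peel off one TT core per mode via truncated SVDs."""
    shape = tensor.shape
    d = len(shape)
    cores = []
    rank = 1
    mat = tensor.reshape(rank * shape[0], -1)
    for k in range(d - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r_new = max(1, int(np.sum(S > eps * S[0])))       # simple relative truncation
        cores.append(U[:, :r_new].reshape(rank, shape[k], r_new))
        mat = (np.diag(S[:r_new]) @ Vt[:r_new]).reshape(r_new * shape[k + 1], -1)
        rank = r_new
    cores.append(mat.reshape(rank, shape[-1], 1))
    return cores

# Reconstruction check on a small random tensor.
T = np.random.rand(4, 5, 6, 3)
cores = tt_svd(T)
approx = cores[0]
for core in cores[1:]:
    approx = np.tensordot(approx, core, axes=([-1], [0]))
print(np.allclose(approx[0, ..., 0], T))   # True up to the truncation tolerance
```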
78,136
Title: Computing zero-dimensional tropical varieties via projections Abstract: We present an algorithm for computing zero-dimensional tropical varieties using projections. Our main tools are fast monomial transforms of triangular sets. Given a Gröbner basis, we prove that our algorithm requires only a polynomial number of arithmetic operations, and, for ideals in shape position, we show that its timings compare well against univariate factorization and backsubstitution. We conclude that the complexity of computing positive-dimensional tropical varieties via a traversal of the Gröbner complex is dominated by the complexity of the Gröbner walk.
78,169
Title: Piece selection and cardinal arithmetic Abstract: We study the effects of piece selection principles on cardinal arithmetic (Shelah style). As an application, we discuss questions of Abe and Usuba. In particular, we show that if $\lambda \ge 2^\kappa$, then (a) $I_{\kappa,\lambda}$ is not $(\lambda, 2)$-distributive, and (b) $I_{\kappa,\lambda}^+ \rightarrow (I_{\kappa,\lambda}^+)^2_\omega$ does not hold.
78,210
Title: Reinterpretation and extension of entropy correction terms for residual distribution and discontinuous Galerkin schemes: Application to structure preserving discretization Abstract: For the general class of residual distribution (RD) schemes, including many finite element (such as continuous/discontinuous Galerkin) and flux reconstruction methods, an approach to construct entropy conservative/dissipative semidiscretizations by adding suitable correction terms has been proposed by Abgrall ((2018) [1]). In this work, the correction terms are characterized as solutions of certain optimization problems and are adapted to the SBP-SAT framework, focusing on discontinuous Galerkin methods. Novel generalizations to entropy inequalities, multiple constraints, and kinetic energy preservation for the Euler equations are developed and tested in numerical experiments. For all of these optimization problems, explicit solutions are provided. Additionally, the correction approach is applied for the first time to obtain a fully discrete entropy conservative/dissipative RD scheme. Here, the application of the deferred correction (DeC) method for the time integration is essential. This paper can be seen as describing a systematic method to construct structure preserving discretization, at least for the considered example.
78,228
Title: Steiner 3-diameter, maximum degree and size of a graph. Abstract: The Steiner $k$-diameter $sdiam_k(G)$ of a graph $G$, introduced by Chartrand, Oellermann, Tian and Zou in 1989, is a natural generalization of the concept of classical diameter. When $k=2$, $sdiam_2(G)=diam(G)$ is the classical diameter. The problem of determining the minimum size of a graph of order $n$ whose diameter is at most $d$ and whose maximum degree is $\ell$ was first introduced by Erdős and Rényi. In this paper, we generalize the above problem to the Steiner $k$-diameter, and study the problem of determining the minimum size of a graph of order $n$ whose Steiner $3$-diameter is at most $d$ and whose maximum degree is at most $\ell$.
78,243
Title: On the Local Structure of Oriented Graphs - a Case Study in Flag Algebras Abstract: Let G be an n-vertex oriented graph. Let t(G) (respectively i(G)) be the probability that a random set of 3 vertices of G spans a transitive triangle (respectively an independent set). We prove that t(G) + i(G) >= 1/9 - o_n(1). Our proof uses the method of flag algebras that we supplement with several steps that make it more easily comprehensible. We also prove a stability result and an exact result. Namely, we describe an extremal construction, prove that it is essentially unique, and prove that if H is sufficiently far from that construction, then t(H) + i(H) is significantly larger than 1/9. We go into greater technical detail than is usually done in papers that rely on flag algebras. Our hope is that as a result this text can serve others as a useful introduction to this powerful and beautiful method.
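A sketch that estimates t(G) and i(G) for an oriented graph by sampling random 3-vertex subsets. The random tournament used as input and the sample size are illustrative; this is only a numerical check of the two densities, not the flag-algebra argument.

```python
import random
from itertools import permutations

def t_plus_i(arcs, n, samples=200000, seed=1):
    """Estimate t(G)+i(G): the probability that 3 random vertices span a transitive
    triangle (t) or an independent set (i) in the oriented graph given by `arcs`."""
    rng = random.Random(seed)
    arcset = set(arcs)
    t = i = 0
    for _ in range(samples):
        a, b, c = rng.sample(range(n), 3)
        present = [(x, y) for x, y in permutations((a, b, c), 2) if (x, y) in arcset]
        if not present:
            i += 1                                   # no arcs: independent set
        elif len(present) == 3:
            heads = {y for _, y in present}
            if len(heads) == 2:                      # one vertex of in-degree 2: transitive
                t += 1
    return (t + i) / samples

n = 30
# Random tournament on n vertices (every pair gets exactly one arc).
rng = random.Random(0)
arcs = [(u, v) if rng.random() < 0.5 else (v, u) for u in range(n) for v in range(u + 1, n)]
print(t_plus_i(arcs, n))   # for a tournament i(G) = 0, so this estimates t(G) alone
```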
78,283
Title: Memory limitations are hidden in grammar Abstract: The ability to produce and understand an unlimited number of different sentences is a hallmark of human language. Linguists have sought to define the essence of this generative capacity using formal grammars that describe the syntactic dependencies between constituents, independent of the computational limitations of the human brain. Here, we evaluate this independence assumption by sampling sentences uniformly from the space of possible syntactic structures. We find that the average dependency distance between syntactically related words, a proxy for memory limitations, is less than expected by chance in a collection of state-of-the-art classes of dependency grammars. Our findings indicate that memory limitations have permeated grammatical descriptions, suggesting that it may be impossible to build a parsimonious theory of human linguistic productivity independent of non-linguistic cognitive constraints.
78,291
Title: Second-Order Guarantees of Stochastic Gradient Descent in Nonconvex Optimization Abstract: Recent years have seen increased interest in performance guarantees of gradient descent algorithms for nonconvex optimization. A number of works have uncovered that gradient noise plays a critical role in the ability of gradient descent recursions to efficiently escape saddle-points and reach second-order stationary points. Most available works limit the gradient noise component to be bounded with probability one or sub-Gaussian and leverage concentration inequalities to arrive at high-probability results. We present an alternate approach, relying primarily on mean-square arguments and show that a more relaxed relative bound on the gradient noise variance is sufficient to ensure efficient escape from saddle points without the need to inject additional noise, employ alternating step sizes, or rely on a global dispersive noise assumption, as long as a gradient noise component is present in a descent direction for every saddle point.
78,315
Title: On the Asymptotic Convergence and Acceleration of Gradient Methods Abstract: We consider the asymptotic behavior of a family of gradient methods, which include the steepest descent and minimal gradient methods as special instances. It is proved that each method in the family will asymptotically zigzag between two directions. Asymptotic convergence results of the objective value, gradient norm, and stepsize are presented as well. To accelerate the family of gradient methods, we further exploit spectral properties of stepsizes to break the zigzagging pattern. In particular, a new stepsize is derived by imposing finite termination on minimizing two-dimensional strictly convex quadratic function. It is shown that, for the general quadratic function, the proposed stepsize asymptotically converges to the reciprocal of the largest eigenvalue of the Hessian. Furthermore, based on this spectral property, we propose a periodic gradient method by incorporating the Barzilai-Borwein method. Numerical comparisons with some recent successful gradient methods show that our new method is very promising.
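Since the abstract builds a periodic method that incorporates the Barzilai-Borwein method, here is a sketch of plain gradient descent with the classical (long) BB stepsize on a convex quadratic. This is the known BB ingredient, not the paper's new stepsize or periodic scheme, and the test problem is illustrative.

```python
import numpy as np

def bb_gradient(A, b, x0, iters=500):
    """Gradient descent with the long Barzilai-Borwein stepsize for f(x) = 0.5 x'Ax - b'x."""
    x = x0.astype(float)
    g = A @ x - b
    alpha = 1.0 / np.linalg.norm(A, 2)        # safe first step: 1 / largest eigenvalue
    for _ in range(iters):
        x_new = A @ x - b                      # placeholder; replaced below
        x_new = x - alpha * g
        g_new = A @ x_new - b
        s, yk = x_new - x, g_new - g
        alpha = (s @ s) / (s @ yk)             # BB1 (long) stepsize
        x, g = x_new, g_new
        if np.linalg.norm(g) < 1e-10:
            break
    return x

A = np.diag([1.0, 10.0, 100.0])
b = np.array([1.0, 1.0, 1.0])
x = bb_gradient(A, b, np.zeros(3))
print(np.allclose(A @ x, b))                   # True: converged to the minimizer
```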
78,320
Title: Sensitivity estimation of conditional value at risk using randomized quasi-Monte Carlo Abstract: Randomized quasi-Monte Carlo (RQMC) is used for CVaR sensitivity estimation. Strong consistency and an error analysis are established for the proposed RQMC estimator. Three applications are presented to show the effectiveness of RQMC.
78,324
Title: Flud: A Hybrid Crowd–Algorithm Approach for Visualizing Biological Networks Abstract: Modern experiments in many disciplines generate large quantities of network (graph) data. Researchers require aesthetic layouts of these networks that clearly convey the domain knowledge and meaning. However, the problem remains challenging due to multiple conflicting aesthetic criteria and complex domain-specific constraints. In this article, we present a strategy for generating visualizations that can help network biologists understand the protein interactions that underlie processes that take place in the cell. Specifically, we have developed Flud, a crowd-powered system that allows humans with no expertise to design biologically meaningful graph layouts with the help of algorithmically generated suggestions. Furthermore, we propose a novel hybrid approach for graph layout wherein crowd workers and a simulated annealing algorithm build on each other’s progress. A study of about 2,000 crowd workers on Amazon Mechanical Turk showed that the hybrid crowd–algorithm approach outperforms the crowd-only approach and state-of-the-art techniques when workers were asked to lay out complex networks that represent signaling pathways. Another study of seven participants with biological training showed that Flud layouts are more effective compared to those created by state-of-the-art techniques. We also found that the algorithmically generated suggestions guided the workers when they were stuck and helped them improve their score. Finally, we discuss broader implications for mixed-initiative interactions in layout design tasks beyond biology.
78,332
Title: Stabilization control for Ito stochastic system with indefinite state and control weight costs Abstract: In standard linear-quadratic (LQ) control, the first step in investigating infinite-horizon optimal control is to derive the stabilisation condition with the optimal LQ controller. This paper focuses on the stabilisation of an Ito stochastic system with indefinite control and state-weighting matrices in the cost functional. A generalised algebraic Riccati equation (GARE) is obtained via the convergence of the generalised differential Riccati equation (GDRE) in the finite-horizon case. More importantly, the necessary and sufficient stabilisation conditions for indefinite stochastic control are obtained. One of the key techniques is that the solution of the GARE is decomposed into a positive semi-definite matrix that satisfies the singular algebraic Riccati equation (SARE) and a constant matrix that is an element of the set satisfying certain linear matrix inequality conditions. Using the equivalence between the GARE and SARE, we reduce the stabilisation of the general indefinite case to that of the definite case, in which the stabilisation is studied using a Lyapunov functional defined by the optimal cost functional subject to the SARE.
78,339
Title: Event-Triggered Output Synchronization of Heterogeneous Nonlinear Multiagents Abstract: This paper addresses the output synchronization problem for heterogeneous nonlinear multiagent systems with distributed event-based controllers. Employing the two-step synchronization process, we first outline the distributed event-triggered consensus controllers for linear reference models under a directed communication topology. It is further shown that the subsequent triggering instants are based on intermittent communication. Secondly, by using certain input-to-state stability (ISS) property, we design an event-triggered perturbed output regulation controller for each nonlinear multiagent. The ISS technique used in this paper is based on the milder condition that each agent has a certain ISS property from input (actuator) disturbance to state rather than measurement (sensor) disturbance to state. With the two-step design, the objective of output synchronization is successfully achieved with Zeno behavior avoided.
78,344
Title: A multi-level neural network for implicit causality detection in web texts Abstract: Mining causality from text is a complex and crucial natural language understanding task corresponding to human cognition. Existing studies on this subject can be divided into two categories: feature engineering-based and neural model-based methods. In this paper, we find that the former has incomplete coverage and intrinsic errors but provides prior knowledge, whereas the latter leverages context information but has insufficient causal inference. To address these limitations, we propose a novel causality detection model named MCDN, which explicitly models the causal reasoning process and exploits the advantages of both methods. Specifically, we adopt multi-head self-attention to acquire semantic features at the word level and develop the SCRN to infer causality at the segment level. To the best of our knowledge, this is the first time the Relation Network has been applied to causality tasks. The experimental results demonstrate that: i) the proposed method outperforms the strong baselines on causality detection; ii) further analysis shows the effectiveness and robustness of MCDN.
78,346
Title: A central limit theorem for the two-sided descent statistic on Coxeter groups Abstract: We study the asymptotic behaviour of the statistic (des + ides)(W) which assigns to an element w of a finite Coxeter group W the number of descents of w plus the number of descents of w(-1). Our main result is a central limit theorem for the probability distributions associated to this statistic. This answers a question of Kahle-Stump and builds upon work of Chatterjee-Diaconis, Ozdemir and Rottger.
78,351
Title: Convolutional Recurrent Reconstructive Network for Spatiotemporal Anomaly Detection in Solder Paste Inspection Abstract: Surface mount technology (SMT) is a process for producing printed-circuit boards. The solder paste printer (SPP), package mounter, and solder reflow oven are used for SMT. The board on which the solder paste is deposited from the SPP is monitored by the solder paste inspector (SPI). If SPP malfunctions due to the printer defects, the SPP produces defective products, and then abnormal patterns are detected by SPI. In this article, we propose a convolutional recurrent reconstructive network (CRRN), which decomposes the anomaly patterns generated by the printer defects, from SPI data. CRRN learns only normal data and detects the anomaly pattern through the reconstruction error. CRRN consists of a spatial encoder (S-Encoder), a spatiotemporal encoder and decoder (ST-Encoder-Decoder), and a spatial decoder (S-Decoder). The ST-Encoder-Decoder consists of multiple convolutional spatiotemporal memories (CSTMs) with a spatiotemporal attention (ST-Attention) mechanism. CSTM is developed to extract spatiotemporal patterns efficiently. In addition, an ST-Attention mechanism is designed to facilitate transmitting information from the spatiotemporal encoder to the spatiotemporal decoder, which can solve the long-term dependency problem. We demonstrate that the proposed CRRN outperforms the other conventional models in anomaly detection. Moreover, we show the discriminative power of the anomaly map decomposed by the proposed CRRN through the printer defect classification.
78,361
Title: NL-LinkNet: Toward Lighter But More Accurate Road Extraction With Nonlocal Operations Abstract: Road extraction from very high resolution (VHR) satellite images is one of the most important topics in the field of remote sensing. In this letter, we propose an efficient nonlocal LinkNet with nonlocal blocks (NLBs) that can grasp relations between global features. This enables each spatial feature point to refer to all other contextual information and results in more accurate road segmentation....
78,362
Title: Covering Convex Bodies and the Closest Vector Problem Abstract: We present algorithms for the $(1+\epsilon)$-approximate version of the closest vector problem for certain norms. The currently fastest algorithm (Dadush and Kun 2016) for general norms in dimension n has running time of $2^{O(n)}(1/\epsilon)^n$. We improve this substantially in the following two cases. First, for $\ell_p$-norms with $p>2$ (resp. $p \in [1,2]$) fixed, we present an algorithm with a running time of $2^{O(n)}(1+1/\epsilon)^{n/2}$ (resp. $2^{O(n)}(1+1/\epsilon)^{n/p}$). This result is based on a geometric covering problem that was introduced in the context of CVP by Eisenbrand et al.: How many convex bodies are needed to cover the ball of the norm such that, if scaled by factor 2 around their centroids, each one is contained in the $(1+\epsilon)$-scaled homothet of the norm ball? We provide upper bounds for this $(2,\epsilon)$-covering number by exploiting the modulus of smoothness of the $\ell_p$-balls. Applying a covering scheme, we can boost any 2-approximation algorithm for CVP to a $(1+\epsilon)$-approximation algorithm with the improved run time, either using a straightforward sampling routine or using the deterministic algorithm of Dadush for the construction of an epsilon net. Second, we consider polyhedral and zonotopal norms. For centrally symmetric polytopes (resp. zonotopes) in ${\mathbb R}^n$ with O(n) facets (resp. generated by O(n) line segments), we provide a deterministic $O(\log_2(2+1/\epsilon))^{O(n)}$ time algorithm. This generalizes the result of Eisenbrand et al. which applies to the $\ell_\infty$-norm. Finally, we establish a connection between the modulus of smoothness and lattice sparsification. As a consequence, using the enumeration and sparsification tools developed by Dadush, Kun, Peikert, and Vempala, we present a simple alternative to the boosting procedure with the same time and space requirement for $\ell_p$ norms. This connection might be of independent interest.
78,366
Title: The double traveling salesman problem with partial last-in-first-out loading constraints Abstract: In this paper, we introduce the double traveling salesman problem with partial last-in-first-out loading constraints (DTSPPL). It is a pickup-and-delivery single-vehicle routing problem, where all pickup operations must be performed before any delivery operation because the pickup-and-delivery areas are geographically separated. The vehicle collects items in the pickup area and loads them into its container, a horizontal stack. After performing all pickup operations, the vehicle begins delivering the items in the delivery area. Loading and unloading operations must obey a partial last-in-first-out (LIFO) policy, that is, a version of the LIFO policy that may be violated within a given reloading depth. The objective of the DTSPPL is to minimize the total cost, which involves the total distance traveled by the vehicle and the number of items that are unloaded and then reloaded due to violations of the standard LIFO policy. We formally describe the DTSPPL through two integer linear programming (ILP) formulations and propose a heuristic algorithm based on the biased random-key genetic algorithm (BRKGA) to find high-quality solutions. The performance of the proposed solution approaches is assessed over a broad set of instances. Computational results have shown that both ILP formulations have been able to solve only the smaller instances, whereas the BRKGA obtained good-quality solutions for almost all instances, requiring short computational times.
78,375
Title: A Novel Approach to the Partial Information Decomposition. Abstract: We consider the "partial information decomposition" (PID) problem, which aims to decompose the information that a set of source random variables provide about a target random variable into separate redundant, synergistic, union, and unique components. In the first part of this paper, we propose a general framework for constructing a multivariate PID. Our framework is defined in terms of a formal analogy with intersection and union from set theory, along with an ordering relation which specifies when one information source is more informative than another. Our definitions are algebraically and axiomatically motivated, and can be generalized to domains beyond Shannon information theory (such as algorithmic information theory and quantum information theory). In the second part of this paper, we use our general framework to define a PID in terms of the well-known Blackwell order, which has a fundamental operational interpretation. We demonstrate our approach on numerous examples and show that it overcomes many drawbacks associated with previous proposals.
78,380
Title: VOP detection for read and conversation speech using CWT coefficients and phone boundaries Abstract: In this paper, we propose a novel approach for accurate detection of vowel onset points (VOPs). A VOP is the instant at which a vowel begins in a speech signal. Precise identification of VOPs is important for various speech applications such as speech segmentation and speech rate modification. Existing methods detect the majority of VOPs to an accuracy of 40 ms deviation, which may not be appropriate for the above speech applications. To address this issue, we proposed a two-stage approach for accurate detection of VOPs. At the first stage, VOPs are detected using continuous wavelet transform coefficients, and the position of the detected VOPs are corrected using phone boundaries in the second stage. The phone boundaries are detected by the spectral transition measure method. Experiments are done using TIMIT and Bengali speech corpora. Performance of the proposed approach is compared with two standard signal processing based methods as well as with a recent VOP detection technique. The evaluation results show that the proposed method performs better than the existing methods.
78,383
Title: Delivering Scientific Influence Analysis as a Service on Research Grants Repository Abstract: Research grants have played an important role in seeding and promoting fundamental research projects worldwide. There is a growing demand for developing and delivering scientific influence analysis as a service on research grant repositories. Such analysis can provide insight on how research grants help foster new research collaborations, encourage cross-organization collaborations, influence new research trends, and identify technical leadership. This article presents the design and development of a grant-based scientific influence analysis service, coined as GImpact. It takes a graph-theoretic approach to design and develop the scientific influence analysis algorithms over a real research-grant repository with three original contributions. First, we model the scientific influence analysis problem as a graph-based analysis problem by constructing heterogeneous graphs from the grants dataset, including mining the dataset to identify and extract important features and represent such features as a research grants information network. Second, we develop the scientific influence analysis algorithms over the research grants information network, which compute the overall scientific influence score by integrating self-influence score and multiple co-influence scores. The self-influence score reflects the grant-based research collaborations among institutions, and the co-influence scores reflect various types of cross-institution collaborations in terms of disciplines and keywords (subject areas). Third, we leverage the cluster analysis on the institution graph as an example application of scientific influence analysis service. By partitioning the institution graph into $K$ clusters, with $K$ as one of the service interface parameters, we show how different disciplines and different keywords are co-related through the grant-based influence analysis. We evaluate GImpact using a real grants dataset, consisting of 2512 institutions and their grants received over a period of 14 years. Our experimental results show that the GImpact influence analysis approach can effectively identify the grant-based research collaboration groups and provide valuable insight on an in-depth understanding of the scientific influence of research grants on research programs, institution leadership, and future collaboration opportunities.
78,386
Title: Finitary codings for gradient models and a new graphical representation for the six-vertex model Abstract: It is known that the Ising model on $\mathbb{Z}^d$ at a given temperature is a finitary factor of an i.i.d. process if and only if the temperature is at least the critical temperature. Below the critical temperature, the plus and minus states of the Ising model are distinct and differ from one another by a global flip of the spins. We show that it is only this global information which poses an obstruction to being finitary by showing that the gradient of the Ising model is a finitary factor of i.i.d. at all temperatures. As a consequence, we deduce a volume-order large deviation estimate for the energy. Results in the same spirit are shown for the Potts model, the so-called beach model, and the six-vertex model. We also introduce a coupling between the six-vertex model with c >= 2 and a new Edwards-Sokal type graphical representation of it, which we believe is of independent interest.
78,402
Title: Residual objectness for imbalance reduction Abstract: We discover that the foreground-background imbalance in object detection can be addressed in a learning-based manner, without any hand-crafted resampling or reweighting schemes. We propose a novel Residual Objectness (ResObj) mechanism to address the foreground-background imbalance in training object detectors: with a cascade architecture that gradually refines the objectness estimation, our ResObj module addresses the imbalance in an end-to-end way, avoiding the laborious hyper-parameter tuning required by resampling and reweighting schemes. We validate the proposed method on the COCO dataset with thorough ablation studies; for various detectors, Residual Objectness steadily improves detection accuracy by a relative 3%-4%.
78,404
Title: A novel method to generate key-dependent s-boxes with identical algebraic properties Abstract: The s-box plays the vital role of creating confusion between the ciphertext and secret key in any cryptosystem, and is the only nonlinear component in many block ciphers. Dynamic s-boxes, as compared to static ones, improve the entropy of the system, hence leading to better resistance against linear and differential attacks. It was shown in Easttom (2018) that while incorporating dynamic s-boxes in cryptosystems is sufficiently secure, they do not keep non-linearity invariant. This work provides an algorithmic scheme to generate key-dependent dynamic n×n clone s-boxes having the same algebraic properties, namely bijection, nonlinearity, the strict avalanche criterion (SAC), and the output bits independence criterion (BIC), as the initial seed s-box. The method is based on the group actions of the symmetric group S_n and of a subgroup of S_{2^n} on, respectively, the columns and rows of the Boolean functions (GF(2^n) → GF(2)) of the s-box. Invariance of the bijection, nonlinearity, SAC, and BIC for the generated clone copies is proved. As illustration, examples are provided for n=8 and n=4, along with a comparison of the algebraic properties of the clone and initial seed s-boxes. The proposed method is an extension of Hussain et al. (2012); Hussain et al. (2012); Hussain et al. (2018); Anees and Chen (2020), which involved the group action of S_8 only on the columns of the Boolean functions (GF(2^8) → GF(2)) of the s-box. For n=4, we have used an initial 4×4 s-box constructed by Carlisle Adams and Stafford Tavares (Adams and Tavares, 1990) to generate (4!)^2 clone copies. For n=8, it can be seen (Hussain et al. (2012); Hussain et al. (2012); Hussain et al. (2018); Anees and Chen (2020)) that the number of clone copies that can be constructed by permuting the columns is 8!. For each column permutation, the proposed method enables the generation of 8! clone copies by permuting the rows.
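A sketch of the column-action ingredient only: permuting the n output bit positions of an n-bit s-box is a column permutation of its Boolean-function truth table and preserves bijectivity, nonlinearity, SAC, and BIC. The key-to-permutation derivation and the toy identity s-box below are illustrative placeholders, not the paper's construction, which additionally acts on the rows.

```python
import random

def clone_sbox_by_output_bit_permutation(sbox, n, key):
    """Permute the n output bit positions of an n-bit s-box (a column permutation
    of its Boolean-function truth table). The key-derived permutation is illustrative."""
    perm = list(range(n))
    random.Random(key).shuffle(perm)          # placeholder key -> permutation mapping
    def permute_bits(y):
        bits = [(y >> i) & 1 for i in range(n)]
        return sum(bits[perm[i]] << i for i in range(n))
    return [permute_bits(y) for y in sbox]

# Toy 4-bit s-box (the identity map), cloned with key 42; bijectivity is preserved.
sbox = list(range(16))
clone = clone_sbox_by_output_bit_permutation(sbox, 4, key=42)
print(sorted(clone) == sbox)                  # True
```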
78,410
Title: A stochastic MPC scheme for distributed systems with multiplicative uncertainty Abstract: This paper presents a Distributed Stochastic Model Predictive Control algorithm for networks of linear systems with multiplicative uncertainties and local chance constraints on the states and control inputs. The chance constraints are approximated via Cantelli’s inequality by means of expected value and covariance. The cooperative control algorithm is based on the distributed Alternating Direction Method of Multipliers, which renders the controller fully distributedly implementable, recursively feasible and ensures point-wise convergence of the states. The aforementioned properties are guaranteed through a properly selected distributed invariant set and distributed terminal constraints for the mean and covariance. The paper closes with an example highlighting the chance constraint satisfaction, numerical properties and scalability of our approach.
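A small sketch of the Cantelli-based approximation of an individual chance constraint mentioned above: to enforce P(a'x > b) <= delta using only the mean mu and covariance Sigma of x, Cantelli's inequality shows it suffices that a'mu + sqrt((1-delta)/delta) * sqrt(a'Sigma a) <= b. The symbols and numbers below are generic, not the paper's notation.

```python
import numpy as np

def cantelli_tightened_ok(a, b, mu, Sigma, delta):
    """Sufficient, distribution-free check for P(a'x > b) <= delta via Cantelli's
    inequality, using only the mean mu and covariance Sigma of x."""
    mean = a @ mu
    std = np.sqrt(a @ Sigma @ a)
    return mean + np.sqrt((1 - delta) / delta) * std <= b

a = np.array([1.0, 0.0])
mu = np.array([0.2, 0.0])
Sigma = 0.01 * np.eye(2)
print(cantelli_tightened_ok(a, b=1.0, mu=mu, Sigma=Sigma, delta=0.05))   # True
```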
78,414
Title: Simplicial Dollar Game Abstract: The dollar game is a chip-firing game introduced by Baker as a context in which to formulate and prove the Riemann-Roch theorem for graphs. A divisor on a graph is a formal integer sum of vertices. Each determines a dollar game, the goal of which is to transform the given divisor into one that is effective (nonnegative) using chip-firing moves. We use Duval, Klivans, and Martin's theory of chip-firing on simplicial complexes to generalize the dollar game and results related to the Riemann-Roch theorem for graphs to higher dimensions. In particular, we extend the notion of the degree of a divisor on a graph to a (multi)degree of a chain on a simplicial complex and use it to establish two main results. The first of these generalizes the fact that if a divisor on a graph has large enough degree (at least as large as the genus of the graph), it is winnable; and the second generalizes the fact that trees (graphs of genus 0) are exactly the graphs on which every divisor of degree 0, interpreted as an instance of the dollar game, is winnable.
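A sketch of the classical graph case of the dollar game that the abstract generalizes: a divisor assigns an integer number of chips to each vertex, and firing a vertex sends one chip along each incident edge. Winnability criteria and the simplicial generalization are not implemented here; the triangle example is illustrative.

```python
def fire(divisor, adj, v):
    """One chip-firing move at vertex v: v loses deg(v) chips, each neighbour gains one."""
    new = dict(divisor)
    new[v] -= len(adj[v])
    for u in adj[v]:
        new[u] += 1
    return new

# Triangle graph (genus 1); this divisor has degree 1 >= genus, so it is winnable.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
divisor = {0: -1, 1: 1, 2: 1}
divisor = fire(divisor, adj, 1)     # vertex 1 fires: {0: 0, 1: -1, 2: 2}
divisor = fire(divisor, adj, 2)     # vertex 2 fires: {0: 1, 1: 0, 2: 0} -- effective
print(divisor)
```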
78,417