abstr: string, lengths 5 to 68.5k
title: string, lengths 7 to 794
journal: string, lengths 2 to 221
field: string, lengths 2 to 19
label_journal: int64, values 0 to 2.73k
label_field: int64, values 0 to 75
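The column summary above describes a flat table with one row per paper (abstract, title, journal, field, and two integer labels). Below is a minimal sketch of loading and sanity-checking such a table with pandas; the file name papers.csv and the CSV storage format are assumptions made for the example, and only the column names and ranges come from the summary.

```python
import pandas as pd

# Load the table of abstracts. "papers.csv" and the CSV format are assumptions;
# only the column names and ranges come from the summary above.
df = pd.read_csv("papers.csv")

print(df.columns.tolist())
# expected: ['abstr', 'title', 'journal', 'field', 'label_journal', 'label_field']

# Sanity-check the ranges reported in the column summary.
print(df["abstr"].str.len().agg(["min", "max"]))    # roughly 5 .. 68.5k
print(df["label_field"].agg(["min", "max"]))        # 0 .. 75

# Rows sharing a journal, e.g. all "Theor. Math. Phys." entries:
print(df.loc[df["journal"] == "Theor. Math. Phys.", "title"].head())
```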
We analyze the statistics of loop formation in $f$-branched star polymers in an environment with structural defects, correlated at large distances $r$ according to a power law $\sim r^{-a}$. Applying the direct polymer renormalization approach, we find the values of the set of universal exponents governing the scaling of the probabilities of various types of loops in macromolecules.
Probability of loops formation in star polymers in long range correlated disorder
The Journal of Chemical Physics
cond-mat
2,503
14
The spin-boson model is a simplified Hamiltonian often used to study non-adiabatic dynamics in large condensed phase systems, even though it has not been solved in a fully analytic fashion. Herein, we present an exact analytic expression for the dynamics of the spin-boson model in the infinitely slow bath limit and generalize it to approximate dynamics for faster baths. We achieve the latter by developing a hybrid approach that combines the exact slow-bath result with the popular NIBA method to generate a memory kernel that is formally exact to second order in the diabatic coupling but also contains higher-order contributions approximated from the second order term alone. This kernel has the same computational complexity as NIBA, but is found to yield dramatically superior dynamics in regimes where NIBA breaks down---such as systems with large diabatic coupling or energy bias. This indicates that this hybrid approach could be used to cheaply incorporate higher order effects into second order methods, and could potentially be generalized to develop alternate kernel resummation schemes.
A hybrid memory kernel approach for condensed phase non-adiabatic dynamics
The Journal of Chemical Physics
physics
2,503
56
Neutral metal-containing molecules and clusters present a particular challenge to velocity map imaging techniques. Common methods of choice for producing such species, such as laser ablation or magnetron sputtering, typically generate a wide variety of metal-containing species and, without the possibility of mass-selection, even determining the identity of the dissociating moiety can be challenging. In recent years, we have developed a velocity map imaging spectrometer equipped with a laser ablation source explicitly for studying neutral metal-containing species. Here, we report the results of velocity map imaging photofragmentation studies of MoO and CrO. In both cases, dissociation at the two- and three-photon level leads to fragmentation into a range of product channels, some of which can be confidently assigned to particular Mo (Cr) and O atom quantum states. Analysis of the kinetic energy release spectra as a function of photon energy allows precise determination of the ground state dissociation energies of MoO and CrO.
Photofragmentation dynamics and dissociation energies of MoO and CrO
The Journal of Chemical Physics
physics
2,503
56
Master equations are increasingly popular for the simulation of time-dependent electronic transport in nanoscale devices. Several recent Markovian approaches use "extended reservoirs" - explicit degrees of freedom associated with the electrodes - distinguishing them from many previous classes of master equations. Starting from a Lindblad equation, we develop a common foundation for these approaches. Due to the incorporation of explicit electrode states, these methods do not require a large bias or even "true Markovianity" of the reservoirs. Nonetheless, their predictions are only physically relevant when the Markovian relaxation is weaker than the thermal broadening and when the extended reservoirs are "sufficiently large," in a sense that we quantify. These considerations hold despite complete positivity and respect for Pauli exclusion at any relaxation strength.
Master Equations for Electron Transport: The Limits of the Markovian Limit
The Journal of Chemical Physics
cond-mat
2,503
14
We theoretically study the dynamic time evolution following laser pulse pumping in the antiferromagnetic insulator Cr$_{2}$O$_{3}$. From the photoexcited high-spin quartet states to the long-lived low-spin doublet states, the ultrafast demagnetization processes are investigated by solving the dissipative Schr\"odinger equation. We find that the demagnetization times are of the order of hundreds of femtoseconds, in good agreement with recent experiments. The switching times could be strongly reduced by properly tuning the energy gaps between the multiplet energy levels of Cr$^{3+}$. Furthermore, the relaxation times also depend on the hybridization of atomic orbitals in the first photoexcited state. Our results suggest that the selective manipulation of electronic structure by engineering stress-strain or chemical substitution allows effective control of the magnetic state switching in photoexcited insulating transition-metal oxides.
Model of ultrafast demagnetization driven by spin-orbit coupling in a photoexcited antiferromagnetic insulator Cr2O3
The Journal of Chemical Physics
cond-mat
2,503
14
We present a method based on graph theory for evaluation of the inelastic propensity rules for molecules exhibiting complete destructive quantum interference in their elastic transmission. The method uses an extended adjacency matrix corresponding to the structural graph of the molecule for calculating the Green function between the sites with attached electrodes and consequently states the conditions that the electron-vibration coupling matrix must meet for the observation of an inelastic signal between the terminals. The method can be fully automated and we provide a functional website running a code using Wolfram Mathematica, which returns a graphical depiction of destructive quantum interference configurations together with the associated inelastic propensity rules for a wide class of molecules.
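As a toy numerical illustration of the elastic quantity this abstract starts from (not of the paper's graph-theoretical method), the sketch below computes the Green function between sites of a benzene-like ring from its adjacency (Hückel) matrix: the element between meta-connected sites vanishes at the band centre, the textbook signature of complete destructive quantum interference. The benzene example, the site labels, and the unit-hopping convention are all assumptions made for the sketch.

```python
import numpy as np

# Adjacency (Hueckel) matrix of a 6-membered ring (benzene): hopping 1 between
# neighbouring sites, on-site energies 0. A toy example, not the paper's method.
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

E = 0.0                                   # band centre; E*I - A is invertible here
G = np.linalg.inv(E * np.eye(n) - A)      # Green function between sites

print(abs(G[0, 2]))   # meta-connected sites: ~0 -> complete destructive interference
print(abs(G[0, 3]))   # para-connected sites: 0.5 -> no destructive interference
```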
Graph-theoretical evaluation of the inelastic propensity rules for molecules with destructive quantum interference
The Journal of Chemical Physics
cond-mat
2,503
14
The Mixed Quantum-Classical Initial Value Representation (MQC-IVR) is a recently introduced approximate semiclassical (SC) method for the calculation of real-time quantum correlation functions. MQC-IVR employs a modified Filinov filtration (MFF) scheme to control the overall phase of the SC integrand, extending the applicability of SC methods to complex systems while retaining their ability to accurately describe quantum coherence effects. Here, we address questions regarding the effectiveness of the MFF scheme in combination with SC dynamics. Previous work showed that this filtering scheme is of limited utility in the context of semiclassical wavepacket propagation, but we find the MFF is extraordinarily powerful in the context of correlation functions. By examining trajectory phase and amplitude contributions to the real-time SC correlation function in a model system, we clearly demonstrate that the MFF serves to reduce noise by damping amplitude only in regions of highly oscillatory phase, leading to a reduction in computational effort while retaining accuracy. Further, we introduce a novel and efficient MQC-IVR formulation that allows for linear scaling in computational cost with the total simulation length, a significant improvement over the more-than-quadratic scaling exhibited by the original method.
Validating and Implementing Modified Filinov Phase Filtration in Semiclassical Dynamics
The Journal of Chemical Physics
physics
2,503
56
We first argue that the covalent bond and the various closed-shell interactions can be thought of as symmetry broken versions of one and the same interaction, viz., the multi-center bond. We use specially chosen molecular units to show that the symmetry breaking is controlled by density and electronegativity variation. We show that the bond order changes with bond deformation but in a step-like fashion, regions of near constancy separated by electronic localization transitions. These will often cause displacive transitions as well so that the bond strength, order, and length are established self-consistently. We further argue for the inherent relation of the covalent, closed-shell, and multi-center interactions with ionic and metallic bonding. All of these interactions can be viewed as distinct sectors on a phase diagram with density and electronegativity variation as control variables; the ionic and covalent/secondary sectors are associated with on-site and bond-order charge density waves respectively, the metallic sector with an electronic fluid. While displaying a contiguity at low densities, the metallic and ionic interactions represent distinct phases separated by discontinuous transitions at sufficiently high densities. Multi-center interactions emerge as a hybrid of the metallic and ionic bond that results from spatial coexistence of delocalized and localized electrons. In the present description, the issue of the stability of a compound is that of mutual miscibility of electronic fluids with distinct degrees of electron localization; supra-atomic ordering in complex inorganic compounds comes about naturally. The notions of electronic localization advanced hereby suggest a high throughput, automated procedure for screening candidate compounds and structures with regard to stability, without the need for computationally costly geometric optimization.
The chemical bond as an emergent phenomenon
The Journal of Chemical Physics
cond-mat
2,503
14
Anisotropic displacement parameters (ADPs) are commonly used in crystallography, chemistry and related fields to describe and quantify thermal motion of atoms. In very recent years, these ADPs have become predictable by lattice dynamics in combination with first-principles theory. Here, we study four very different molecular crystals, namely urea, bromomalonic aldehyde, pentachloropyridine, and naphthalene, by first-principles theory to assess the quality of ADPs calculated in the quasi-harmonic approximation. In addition, we predict both thermal expansion and thermal motion within the quasi-harmonic approximation and compare the predictions with experimental data. Very reliable ADPs are calculated within the quasi-harmonic approximation for all four cases up to at least 200 K, and they turn out to be in better agreement with experiment than the harmonic ones. In one particular case, ADPs can even reliably be predicted up to room temperature. Our results also hint at the importance of normal-mode anharmonicity in the calculation of ADPs.
Lattice thermal expansion and anisotropic displacements in urea, bromomalonic aldehyde, pentachloropyridine and naphthalene
The Journal of Chemical Physics
cond-mat
2,503
14
Multicomponent density functional theory (DFT) enables the consistent quantum mechanical treatment of both electrons and protons. A major challenge has been the design of electron-proton correlation functionals that produce even qualitatively accurate proton densities. Herein an electron-proton correlation functional, epc17, is derived analogously to the Colle-Salvetti formalism for electron correlation and is implemented within the nuclear-electronic orbital (NEO) framework. The NEO-DFT/epc17 method produces accurate proton densities efficiently and is promising for diverse applications.
Development of a Practical Multicomponent Density Functional for Electron-Proton Correlation to Produce Accurate Proton Densities
The Journal of Chemical Physics
physics
2,503
56
The aim of this paper is to consider particle collisions in the vicinity of the horizon of rotating black holes. The existence of geodesics for massive and massless particles arising from the region inside the gravitational radius in the ergosphere leads to different possibilities of obtaining very high energy in the centre-of-mass frame of two particles. A classification of all such geodesics, based on a theorem proved for extremal spherical orbits, is given. We consider the case of unlimited growth of energy when one of the particles (the critical one) moves along a "white hole" geodesic with angular momentum close to the upper limit while the other particle is on a usual geodesic, as well as the case of unlimited negative angular momentum of the first particle.
High energy physics in the vicinity of rotating black holes
Theor. Math. Phys.
gr-qc
2,608
30
We consider gravity interacting with scalar matter fields, quantized in the minisuperspace approach in which the wave functional is described by the Wheeler-DeWitt (WdW) equation. Assuming the domination of the homogeneous and isotropic geometry, the leading contributions to the wave functional in the minisuperspace approximation with the Friedmann-Robertson-Walker (FRW) metric and spatially uniform scalar fields are considered. A model of several scalar fields with exponential potentials and kinetic terms is proposed, admitting such a special mixing that it is ultimately possible to separate the variables in the WdW equation and to find its exact solution in terms of special functions. The semiclassical approximation is thoroughly investigated, and the boundary conditions permitting the selection of physical solutions for classical cosmologies are chosen.
Quantum cosmology of the multi-field scalar matter: some exact solutions
Theor. Math. Phys.
hep-th
2,608
35
We study effects of turbulent mixing on the random growth of an interface in the problem of the deposition of a substance on a substrate. The growth is modelled by the well-known Kardar--Parisi--Zhang model. The turbulent advecting velocity field is modelled by Kraichnan's rapid-change ensemble: Gaussian statistics with the correlation function $\langle vv\rangle \propto \delta(t-t') \, k^{-d-\xi}$, where $k$ is the wave number and $0<\xi<2$ is a free parameter. Effects of compressibility of the fluid are studied. Using the field theoretic renormalization group we show that, depending on the relation between the exponent $\xi$ and the spatial dimension $d$, the system reveals different types of large-scale, long-time asymptotic behaviour, associated with four possible fixed points of the renormalization group equations. In addition to known regimes (ordinary diffusion, ordinary growth process, and passively advected scalar field), the existence of a new nonequilibrium universality class is established. The fixed point coordinates, their regions of stability and the critical dimensions are calculated to the first order of the double expansion in $\xi$ and $\varepsilon=2-d$ (one-loop approximation). It turns out that for incompressible fluid, the most realistic values $\xi=4/3$ or 2 and $d=1$ or 2 correspond to the case of passive scalar field, when the nonlinearity of the KPZ model is irrelevant and the interface growth is completely determined by the turbulent transfer. If the compressibility becomes strong enough, the crossover in the critical behaviour occurs, and these values of $d$ and $\xi$ fall into the region of stability of the new regime, where the advection and the nonlinearity are both important.
Random interface growth in random environment: Renormalization group analysis of a simple model
Theor. Math. Phys.
cond-mat
2,608
14
The renormalization problem of (2+1)-dimensional Yang-Mills theory quantized on the light front is considered. Extra fields analogous to those used in Pauli-Villars regularization are introduced to restore perturbative equivalence between the theory quantized in this way and the conventional formulation in Lorentz coordinates. These fields also provide the necessary ultraviolet regularization of the theory. The results obtained allow one to construct the renormalized Hamiltonian of the theory on the light front.
Pauli-Villars Regularization and Light Front Hamiltonian in (2+1)-dimensional Yang-Mills Theory
Theor. Math. Phys.
hep-th
2,608
35
In the approach to geometric quantization based on the conversion of second-class constraints, we resolve the respective non-linear zero curvature conditions for the extended symplectic potential. From the zero curvature conditions, we deduce new, linear, equations for the extended symplectic potential. Then we show that once the linear equations are satisfied, their solution certainly satisfies the non-linear zero curvature conditions as well. Finally, we give the functional resolution of the new linear equations and then deduce the respective path integral representation. Our consideration applies to the general case of a phase superspace where both boson and fermion coordinates are present on an equal footing.
Conversion of second-class constraints and resolving the zero curvature conditions in the geometric quantization theory
Theor. Math. Phys.
hep-th
2,608
35
A theorem of Feigin, Frenkel and Reshetikhin provides expressions for the eigenvalues of the higher Gaudin Hamiltonians on the Bethe vectors in terms of elements of the center of the affine vertex algebra at the critical level. In our recent work, explicit Harish-Chandra images of generators of the center were calculated in all classical types. We combine these results to calculate the eigenvalues of the higher Gaudin Hamiltonians on the Bethe vectors in an explicit form. The Harish-Chandra images can be interpreted as elements of classical $W$-algebras. We provide a direct connection between the rings of $q$-characters and classical $W$-algebras by calculating classical limits of the corresponding screening operators.
Eigenvalues of Bethe vectors in the Gaudin model
Theor. Math. Phys.
math
2,608
41
It has been observed in [Park 2014] that the physical states of the ADM formulation of 4D Einstein gravity holographically reduce and can be described by a 3D language. Obviously the approach poses the 4D covariance issue; it turns out that there are two covariance issues whose resolution is the main theme of the present work. Although the unphysical character of the trace piece of the fluctuation metric has been long known, it has not been taken care of in a manner suitable for the Feynman diagram computations; a proper handling of the trace piece through gauge-fixing is the key to the subtler of the two covariance issues. As for the second covariance issue, a renormalization program can be carried out covariantly to any loop order at intermediate steps, thereby maintaining the 4D covariance; it is only at the final stage that one should consider the 3D physical external states. With the physical external states, the 1PI effective action reduces to 3D and renormalizability is restored just as in the entirely-3D approach of [Park 2014]. We revisit the one-loop two-point renormalization with careful attention to the trace piece of the fluctuation metric and in particular outline one-loop renormalization of Newton's constant.
4D covariance of holographic quantization of Einstein gravity
Theor. Math. Phys.
hep-th
2,608
35
In the hypercube approach to correlation functions in Chern-Simons theory (knot polynomials), the central role is played by the numbers of cycles into which the link diagram is decomposed under different resolutions. Certain functions of these numbers are further interpreted as dimensions of graded spaces associated with hypercube vertices. Finding these functions is, however, a somewhat non-trivial problem. In arXiv:1506.07516 it was suggested to solve it with the help of the matrix model technique, in the spirit of AMM/EO topological recursion. In this paper we further elaborate on this idea and provide a vast collection of non-trivial examples, related both to ordinary and virtual links and knots. Remarkably, the most powerful versions of the formalism freely convert ordinary knots/links to virtual ones and back -- moreover, they go beyond the knot-related set of (2,2)-valent graphs.
Matrix model and dimensions at hypercube vertices
Theor. Math. Phys.
hep-th
2,608
35
We consider the one-dimensional Schr\"odinger equation $-f''+q_\kappa f = Ef$ on the positive half-axis with the potential $q_\kappa(r)=(\kappa^2-1/4)r^{-2}$. For each complex number $\vartheta$, we construct a solution $u^\kappa_\vartheta(E)$ of this equation that is analytic in $\kappa$ in a complex neighborhood of the interval $(-1,1)$ and, in particular, at the "singular" point $\kappa = 0$. For $-1<\kappa<1$ and real $\vartheta$, the solutions $u^\kappa_\vartheta(E)$ determine a unitary eigenfunction expansion operator $U_{\kappa,\vartheta}\colon L_2(0,\infty)\to L_2(\mathbb R,\mathcal V_{\kappa,\vartheta})$, where $\mathcal V_{\kappa,\vartheta}$ is a positive measure on $\mathbb R$. We show that every self-adjoint realization of the formal differential expression $-\partial^2_r + q_\kappa(r)$ for the Hamiltonian is diagonalized by the operator $U_{\kappa,\vartheta}$ for some $\vartheta\in\mathbb R$. Using suitable singular Titchmarsh-Weyl $m$-functions, we explicitly find the measures $\mathcal V_{\kappa,\vartheta}$ and prove their continuity in $\kappa$ and $\vartheta$.
Eigenfunction expansions for the Schr\"odinger equation with inverse-square potential
Theor. Math. Phys.
math-ph
2,608
42
We survey the matrix product solutions of the Yang-Baxter equation obtained recently from the tetrahedron equation. They form a family of quantum $R$ matrices of generalized quantum groups interpolating the symmetric tensor representations of $U_q(A^{(1)}_{n-1})$ and the anti-symmetric tensor representations of $U_{-q^{-1}}(A^{(1)}_{n-1})$. We show that at $q=0$ they all reduce to the Yang-Baxter maps called combinatorial $R$, and describe the latter by explicit algorithm.
Combinatorial Yang-Baxter maps arising from tetrahedron equation
Theor. Math. Phys.
math
2,608
41
We suggest using the Hall-Littlewood version of the Rosso-Jones formula to define the germs of $p$-adic HOMFLY-PT polynomials for torus knots $[m,n]$, which possess at least the $[m,n] \longleftrightarrow [n,m]$ topological invariance. This calls for generalizations to other knot families and is a challenge for several branches of modern theory.
Are there p-adic knot invariants?
Theor. Math. Phys.
hep-th
2,608
35
Let $A$ be a self-adjoint operator in a separable Hilbert space. Suppose that the spectrum of $A$ is formed of two isolated components $\sigma_0$ and $\sigma_1$ such that the set $\sigma_0$ lies in a finite gap of the set $\sigma_1$. Assume that $V$ is a bounded additive self-adjoint perturbation of $A$, off-diagonal with respect to the partition ${\rm spec}(A)=\sigma_0 \cup \sigma_1$. It is known that if $\|V\|<\sqrt{2}{\rm dist}(\sigma_0,\sigma_1)$, then the spectrum of the perturbed operator $L=A+V$ consists of two disjoint parts $\omega_0$ and $\omega_1$ which originate from the corresponding initial spectral subsets $\sigma_0$ and $\sigma_1$. Moreover, for the difference of the spectral projections $E_A(\sigma_0)$ and $E_{L}(\omega_0)$ of $A$ and $L$ associated with the spectral sets $\sigma_0$ and $\omega_0$, respectively, the following sharp norm bound holds: $$\|E_A(\sigma_0)-E_{L}(\omega_0)\|\leq\sin\left(\arctan\frac{\|V\|}{{\rm dist}(\sigma_0,\sigma_1)}\right).$$ In the present note, we give a new proof of this bound for $\|V\|<{\rm dist}(\sigma_0,\sigma_1)$.
An alternative proof of the a priori $\tan\Theta$ Theorem
Theor. Math. Phys.
math
2,608
41
We demonstrate that statistics for several types of set partitions are described by generating functions which appear in the theory of integrable equations.
Set partitions and integrable hierarchies
Theor. Math. Phys.
nlin
2,608
48
We review the Reshetikhin-Turaev approach to the construction of non-compact knot invariants involving R-matrices associated with infinite-dimensional representations, primarily those made from Faddeev's quantum dilogarithm. The corresponding formulas can be obtained from modular transformations of conformal blocks as their Kontsevich-Soibelman monodromies and are presented in the form of transcendental integrals, where the main issue is manipulation with integration contours. We discuss possibilities to extract more explicit and handy expressions which can be compared with the ordinary (compact) knot polynomials coming from finite-dimensional representations of simple Lie algebras, with their limits and properties. In particular, the quantum A-polynomials (the difference equations for colored Jones polynomials) should be the same; the only difference is that in the non-compact case the equations are homogeneous, while they have a non-trivial right-hand side for ordinary Jones polynomials.
SU(2)/SL(2) knot invariants and KS monodromies
Theor. Math. Phys.
hep-th
2,608
35
As in the case of soliton PDEs in 2+1 dimensions, the evolutionary form of integrable dispersionless multidimensional PDEs is non-local, and the proper choice of integration constants should be the one dictated by the associated Inverse Scattering Transform (IST). Using the recently made rigorous IST for vector fields associated with the so-called Pavlov equation $v_{xt}+v_{yy}+v_xv_{xy}-v_yv_{xx}=0$, we have recently established that, in the nonlocal part of its evolutionary form $v_{t}= v_{x}v_{y}-\partial^{-1}_{x}\,\partial_{y}\,[v_{y}+v^2_{x}]$, the formal integral $\partial^{-1}_{x}$ corresponding to the solutions of the Cauchy problem constructed by such an IST is the asymmetric integral $-\int_x^{\infty}dx'$. In this paper we show that this result could be guessed in a simple way using a, to the best of our knowledge, novel integral geometry lemma. Such a lemma establishes that it is possible to express the integral of a fairly general and smooth function $f(X,Y)$ over a parabola of the $(X,Y)$ plane in terms of the integrals of $f(X,Y)$ over all straight lines not intersecting the parabola. A similar result, in which the parabola is replaced by the circle, is already known in the literature and finds applications in tomography. Indeed, in a two-dimensional linear tomographic problem with a convex opaque obstacle, only the integrals along the straight lines non-intersecting the obstacle are known, and in the class of potentials $f(X,Y)$ with polynomial decay we do not have unique solvability of the inverse problem anymore. Therefore, for the problem with an obstacle, it is natural not to try to reconstruct the complete potential, but only some integral characteristics like the integral over the boundary of the obstacle. Due to the above two lemmas, this can be done, at the moment, for opaque bodies having as boundary a parabola and a circle (an ellipse).
An integral geometry lemma and its applications: the nonlocality of the Pavlov equation and a tomographic problem with opaque parabolic objects
Theor. Math. Phys.
nlin
2,608
48
We study instant conformal symmetry breaking as a holographic effect of ultrarelativistic particles moving in the AdS3 spacetime. We give a qualitative picture of this effect, probing it by two-point correlation functions and the entanglement entropy of the corresponding boundary theory. We show that, within the geodesic approximation, the ultrarelativistic massless defect, due to gravitational lensing of the geodesics, produces a zone structure for correlators with broken conformal invariance. Meanwhile, the holographic entanglement entropy also exhibits a transition to the non-conformal behaviour. Two colliding massless defects produce a more diverse zone structure for correlators and the entanglement entropy.
Holographic Dual to Conical Defects: II. Colliding Ultrarelativistic Particles
Theor. Math. Phys.
hep-th
2,608
35
We consider the problem of the existence of global embeddings of metrics of spherically symmetric black holes into an ambient space with the minimal possible dimension. We classify the possible types of embeddings by the type of realization of the metric symmetry by ambient space symmetries. For the Schwarzschild, Schwarzschild-de Sitter, and Reissner-Nordstrom black holes, we prove that the known global embeddings are the only ones. We obtain a new global embedding for the Reissner-Nordstrom-de Sitter metrics and prove that constructing such embeddings is impossible for the Schwarzschild-anti-de Sitter metric. We also discuss the possibility of constructing global embeddings of the Reissner-Nordstrom-anti-de Sitter metric.
Classification of global minimal embeddings for nonrotating black holes
Theor. Math. Phys.
gr-qc
2,608
30
We present results relevant to the relation between quantum effects in a Riemannian space and on the surface appearing as a result of its isometric embedding in a flat space of a higher dimension. We discuss the mapping between the Hawking effect fixed by an observer in the Riemannian space with a horizon and the Unruh effect related to an accelerated motion of this observer in the ambient space. We present examples for which this mapping holds and examples for which there is no mapping. We describe the general form of the hyperbolic embedding of the metric with a horizon smoothly covering the horizon and prove that there is a Hawking into Unruh mapping for this embedding. We also discuss the possibility of relating two-point functions in a Riemannian space and the ambient space in which it is embedded. We obtain restrictions on the geometric parameters of the embedding for which such a relation is known.
Relation between quantum effects in General Relativity and embedding theory
Theor. Math. Phys.
gr-qc
2,608
30
We study extensions of N-wave systems with PT-symmetry. The types of (nonlocal) reductions leading to integrable equations invariant with respect to P- (spatial reflection) and T- (time reversal) symmetries are described. The corresponding constraints on the fundamental analytic solutions and the scattering data are derived. Based on examples of 3-wave (related to the algebra sl(3,C)) and 4-wave (related to the algebra so(5,C)) systems, the properties of different types of 1- and 2-soliton solutions are discussed. It is shown that the PT-symmetric 3-wave equations may have regular multi-soliton solutions for some specific choices of their parameters.
On the N-wave Equations with PT-symmetry
Theor. Math. Phys.
nlin
2,608
48
We study properties of particles with zero or negative energy and a nonzero orbital angular momentum in the ergosphere of a rotating black hole. We show that the sign of the particle energy is uniquely determined by the angular velocity of its rotation in the ergosphere. We give a simple proof of the fact that extreme black holes cannot exist. We investigate the question of the possibility of an unlimited energy increase in the center-of-mass system of two colliding particles, one or both of which have negative or zero energy.
Black holes and particles with zero or negative energy
Theor. Math. Phys.
gr-qc
2,608
30
The two-dimensional Scarf II quantum model is considered in the framework of Supersymmetrical Quantum Mechanics (SUSY QM). Previously obtained results for this integrable system are systematized, and some new properties are derived. In particular, it is shown that the model is exactly or quasi-exactly solvable in different regions of the parameters of the system. The degeneracy of the spectrum is detected for some specific values of the parameters. The action of the symmetry operators of fourth order in momenta is calculated for arbitrary wave functions obtained by means of double shape invariance.
Some Properties of the Shape Invariant Two-Dimensional Scarf II Model
Theor. Math. Phys.
quant-ph
2,608
65
Domain theory has a long history of applications in theoretical computer science and mathematics. In this article, we explore the relation of domain theory to probability theory and stochastic processes. The goal is to establish a theory in which Polish spaces are replaced by domains, and measurable maps are replaced by Scott-continuous functions. We illustrate the approach by recasting one of the fundamental results of stochastic process theory -- Skorohod's Representation Theorem -- in domain-theoretic terms. We anticipate the domain-theoretic version of results like Skorohod's Theorem will improve our understanding of probabilistic choice in computational models, and help devise models of probabilistic programming, with its focus on programming languages that support sampling from distributions where the results are applied to Bayesian reasoning.
Domains and Stochastic Processes
Theoretical Computer Science
math
2,609
41
Despite the improved accuracy of deep neural networks, the discovery of adversarial examples has raised serious safety concerns. In this paper, we study two variants of pointwise robustness, the maximum safe radius problem, which for a given input sample computes the minimum distance to an adversarial example, and the feature robustness problem, which aims to quantify the robustness of individual features to adversarial perturbations. We demonstrate that, under the assumption of Lipschitz continuity, both problems can be approximated using finite optimisation by discretising the input space, and the approximation has provable guarantees, i.e., the error is bounded. We then show that the resulting optimisation problems can be reduced to the solution of two-player turn-based games, where the first player selects features and the second perturbs the image within the feature. While the second player aims to minimise the distance to an adversarial example, depending on the optimisation objective the first player can be cooperative or competitive. We employ an anytime approach to solve the games, in the sense of approximating the value of a game by monotonically improving its upper and lower bounds. The Monte Carlo tree search algorithm is applied to compute upper bounds for both games, and the Admissible A* and the Alpha-Beta Pruning algorithms are, respectively, used to compute lower bounds for the maximum safety radius and feature robustness games. When working on the upper bound of the maximum safe radius problem, our tool demonstrates competitive performance against existing adversarial example crafting algorithms. Furthermore, we show how our framework can be deployed to evaluate pointwise robustness of neural networks in safety-critical applications such as traffic sign recognition in self-driving cars.
A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees
Theoretical Computer Science
cs
2,609
15
In category-theoretic models for the anyon systems proposed for topological quantum computing, the essential ingredients are two monoidal structures, $\oplus$ and $\otimes$. The former is symmetric but the latter is only braided, and $\otimes$ is required to distribute over $\oplus$. What are the appropriate coherence conditions for the distributivity isomorphisms? We came to this question working on a simplification of the category-theoretical foundation of topological quantum computing, which is the intended application of the research reported here. This question was answered by Laplaza when both monoidal structures are symmetric, but topological quantum computation depends crucially on $\otimes$ being only braided, not symmetric. We propose coherence conditions for distributivity in this situation, and we prove that our conditions are (a) strong enough to imply Laplaza's when the latter are suitably formulated, and (b) weak enough to hold when --- as in the categories used to model anyons --- the additive structure is that of an abelian category and the braided $\otimes$ is additive. Working on these results, we found a new redundancy in Laplaza's conditions.
Braided distributivity
Theoretical Computer Science
math
2,609
41
The $\ell$-component connectivity (or $\ell$-connectivity for short) of a graph $G$, denoted by $\kappa_\ell(G)$, is the minimum number of vertices whose removal from $G$ results in a disconnected graph with at least $\ell$ components or a graph with fewer than $\ell$ vertices. This generalization is a natural extension of the classical connectivity defined in terms of a minimum vertex-cut. As an application, the $\ell$-connectivity can be used to assess the vulnerability of a graph corresponding to the underlying topology of an interconnection network, and thus is an important issue for reliability and fault tolerance of the network. So far, only a few results are known on $\ell$-connectivity, and only for particular classes of graphs and small $\ell$. In a previous work, we studied the $\ell$-connectivity on $n$-dimensional alternating group networks $AN_n$ and obtained the result $\kappa_3(AN_n)=2n-3$ for $n\geqslant 4$. In this sequel, we continue the work and show that $\kappa_4(AN_n)=3n-6$ for $n\geqslant 4$.
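The definition of $\kappa_\ell(G)$ above can be checked directly on small graphs by brute force; a minimal sketch follows (exponential in the number of vertices, and the 6-cycle example is arbitrary, unrelated to the alternating group networks $AN_n$ of the paper):

```python
from itertools import combinations
import networkx as nx

def l_connectivity(G, l):
    """kappa_l(G): minimum number of vertices whose removal leaves at least l
    components or fewer than l vertices (brute force over all vertex subsets)."""
    n = G.number_of_nodes()
    for size in range(n + 1):
        for S in combinations(G.nodes, size):
            H = G.copy()
            H.remove_nodes_from(S)
            if H.number_of_nodes() < l or nx.number_connected_components(H) >= l:
                return size
    return n

C6 = nx.cycle_graph(6)
print(l_connectivity(C6, 2))   # 2: the classical connectivity of the 6-cycle
print(l_connectivity(C6, 3))   # 3: remove three pairwise non-adjacent vertices
```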
The 4-Component Connectivity of Alternating Group Networks
Theoretical Computer Science
cs
2,609
15
We show that discrete distributions on the $d$-dimensional non-negative integer lattice can be approximated arbitrarily well via the marginals of stationary distributions for various classes of stochastic chemical reaction networks. We begin by providing a class of detailed balanced networks and prove that they can approximate any discrete distribution to any desired accuracy. However, these detailed balanced constructions rely on the ability to initialize a system precisely, and are therefore susceptible to perturbations in the initial conditions. We therefore provide another construction based on the ability to approximate point mass distributions and prove that this construction is capable of approximating arbitrary discrete distributions for any choice of initial condition. In particular, the developed models are ergodic, so their limit distributions are robust to a finite number of perturbations over time in the counts of molecules.
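For a much simpler instance of the idea that stationary distributions of reaction networks realize target discrete distributions, the birth-death network $\emptyset \to X$ (rate $\lambda$) and $X \to \emptyset$ (rate proportional to the count of $X$) has a Poisson($\lambda$) stationary distribution; the sketch below checks this with a basic Gillespie simulation. The rate value and simulation length are arbitrary choices, and this toy network is not one of the constructions of the paper.

```python
import random
from collections import Counter

def gillespie_birth_death(lam=3.0, t_end=5000.0, x0=0, seed=0):
    """Simulate 0 -> X at rate lam and X -> 0 at rate x (mass action);
    the stationary distribution of X is Poisson(lam)."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    occupancy = Counter()                  # time spent in each copy-number state
    while t < t_end:
        birth, death = lam, float(x)
        total = birth + death
        dt = rng.expovariate(total)
        occupancy[x] += dt
        t += dt
        if rng.random() < birth / total:
            x += 1                         # birth reaction fires
        else:
            x -= 1                         # death reaction fires
    z = sum(occupancy.values())
    return {k: v / z for k, v in sorted(occupancy.items())}

print(gillespie_birth_death())             # close to Poisson(3) probabilities
```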
Stochastic Chemical Reaction Networks for Robustly Approximating Arbitrary Probability Distributions
Theoretical Computer Science
cs
2,609
15
It is known that the class of all graphs not containing a graph $H$ as an induced subgraph is cop-bounded if and only if $H$ is a forest whose every component is a path. In this study, we characterize all sets $\mathscr{H}$ of graphs with some $k\in \mathbb{N}$ bounding the diameter of members of $\mathscr{H}$ from above, such that $\mathscr{H}$-free graphs, i.e. graphs with no member of $\mathscr{H}$ as an induced subgraph, are cop-bounded. This, in particular, gives a characterization of cop-bounded classes of graphs defined by a finite set of connected graphs as forbidden induced subgraphs. Furthermore, we extend our characterization to the case of cop-bounded classes of graphs defined by a set $\mathscr{H}$ of forbidden graphs such that there is $k\in\mathbb{N}$ bounding the diameter of components of members of $\mathscr{H}$ from above.
Cops and Robbers on Graphs with a Set of Forbidden Induced Subgraphs
Theoretical Computer Science
math
2,609
41
We introduce the notion of expandability in the context of automaton semigroups and groups: a word is k-expandable if one can append a suffix to it such that the size of the orbit under the action of the automaton increases by at least k. This definition is motivated by the question of which $\omega$-words admit infinite orbits: for such a word, every prefix is expandable. In this paper, we show that, on input of a word u, an automaton T and a number k, it is decidable whether u is k-expandable with respect to the action of T. In fact, this can be done in exponential nondeterministic space. From this nondeterministic algorithm, we obtain a bound on the length of a potential orbit-increasing suffix x. Moreover, we investigate the situation if the automaton is invertible and generates a group. In this case, we give an algebraic characterization for the expandability of a word based on its shifted stabilizer. We also give a more efficient algorithm to decide expandability of a word in the case of automaton groups, which allows us to improve the upper bound on the maximal orbit-increasing suffix length. Then, we investigate the situation for reversible (and complete) automata and obtain that every word is expandable with respect to these automata. Finally, we give a lower bound example for the length of an orbit-increasing suffix.
Orbit Expandability of Automaton Semigroups and Groups
Theoretical Computer Science
cs
2,609
15
Phylogenetic networks are a generalization of phylogenetic trees to leaf-labeled directed acyclic graphs that represent ancestral relationships between species whose past includes non-tree-like events such as hybridization and horizontal gene transfer. Indeed, each phylogenetic network embeds a collection of phylogenetic trees. Referring to the collection of trees that a given phylogenetic network $N$ embeds as the display set of $N$, several questions in the context of the display set of $N$ have recently been analyzed. For example, the widely studied Tree-Containment problem asks if a given phylogenetic tree is contained in the display set of a given network. The focus of this paper is on two questions that naturally arise in comparing the display sets of two phylogenetic networks. First, we analyze the problem of deciding if the display sets of two phylogenetic networks have a tree in common. Surprisingly, this problem turns out to be NP-complete even for two temporal normal networks. Second, we investigate the question of whether or not the display sets of two phylogenetic networks are equal. While we recently showed that this problem is polynomial-time solvable for a normal and a tree-child network, it is computationally hard in the general case. In establishing hardness, we show that the problem is contained in the second level of the polynomial-time hierarchy. Specifically, it is $\Pi_2^P$-complete. Along the way, we show that two other problems are also $\Pi_2^P$-complete, one of which is a generalization of Tree-Containment.
Displaying trees across two phylogenetic networks
Theoretical Computer Science
math
2,609
41
We formulate and explain the extended Burrows-Wheeler transform of Mantaci et al. from the viewpoint of permutations on a chain taken as a union of partial order-preserving mappings. In so doing we establish a link with syntactic semigroups of languages that are themselves cyclic semigroups. We apply the extended transform with a view to generating de Bruijn words through inverting the transform.
Burrows-Wheeler transformations and de Bruijn words
Theoretical Computer Science
math
2,609
41
We study the problem of fairly allocating indivisible goods between groups of agents using the recently introduced relaxations of envy-freeness. We consider the existence of fair allocations under different assumptions on the valuations of the agents. In particular, our results cover cases of arbitrary monotonic, responsive, and additive valuations, while for the case of binary valuations we fully characterize the cardinalities of two groups of agents for which a fair allocation can be guaranteed with respect to both envy-freeness up to one good (EF1) and envy-freeness up to any good (EFX). Moreover, we introduce a new model where the agents are not partitioned into groups in advance, but instead the partition can be chosen in conjunction with the allocation of the goods. In this model, we show that for agents with arbitrary monotonic valuations, there is always a partition of the agents into two groups of any given sizes along with an EF1 allocation of the goods. We also provide an extension of this result to any number of groups.
Almost Envy-Freeness in Group Resource Allocation
Theoretical Computer Science
cs
2,609
15
This paper is devoted to the distributed complexity of finding an approximation of the maximum cut in graphs. A classical algorithm consists in letting each vertex choose its side of the cut uniformly at random. This does not require any communication and achieves an approximation ratio of at least $\tfrac12$ on average. When the graph is $d$-regular and triangle-free, a slightly better approximation ratio can be achieved with a randomized algorithm running in a single round. Here, we investigate the round complexity of deterministic distributed algorithms for MAXCUT in regular graphs. We first prove that if $G$ is $d$-regular, with $d$ even and fixed, no deterministic algorithm running in a constant number of rounds can achieve a constant approximation ratio. We then give a simple one-round deterministic algorithm achieving an approximation ratio of $\tfrac1{d}$ for $d$-regular graphs with $d$ odd. We show that this is best possible in several ways, and in particular no deterministic algorithm with approximation ratio $\tfrac1{d}+\epsilon$ (with $\epsilon>0$) can run in a constant number of rounds. We also prove results of a similar flavour for the MAXDICUT problem in regular oriented graphs, where we want to maximize the number of arcs oriented from the left part to the right part of the cut.
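The randomized baseline mentioned at the start of this abstract is easy to make concrete; a minimal sketch, with an arbitrary example graph (the 4-cycle) that is not taken from the paper:

```python
import random

def random_cut(vertices, edges):
    """Each vertex picks a side uniformly at random (no communication needed).
    In expectation at least half of the edges cross the cut."""
    side = {v: random.random() < 0.5 for v in vertices}
    return sum(1 for u, v in edges if side[u] != side[v])

# Example: the 4-cycle (a 2-regular graph). Optimal cut = 4, expected random cut = 2.
vertices = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
trials = 10000
print(sum(random_cut(vertices, edges) for _ in range(trials)) / trials)
```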
Local approximation of the Maximum Cut in regular graphs
Theoretical Computer Science
math
2,609
41
We present a framework which allows a uniform approach to the recently introduced concept of pseudo-repetitions on words in the morphic case. This framework is at the same time more general and simpler. We introduce the concept of a pseudo-solution and a pseudo-rank of an equation. In particular, this allows us to prove that if a classical equation forces periodicity, then it also forces pseudo-periodicity. Consequently, there is no need to investigate generalizations of important equations one by one.
Pseudo-solutions of word equations
Theoretical Computer Science
cs
2,609
15
The complexity of variants of 3-SAT and Not-All-Equal 3-SAT is well studied. However, in contrast, very little is known about the complexity of the problems' quantified counterparts. In the first part of this paper, we show that $\forall \exists$ 3-SAT is $\Pi_2^P$-complete even if (1) each variable appears exactly twice unnegated and exactly twice negated, (2) each clause is a disjunction of exactly three distinct variables, and (3) the number of universal variables is equal to the number of existential variables. Furthermore, we show that the problem remains $\Pi_2^P$-complete if (1a) each universal variable appears exactly once unnegated and exactly once negated, (1b) each existential variable appears exactly twice unnegated and exactly twice negated, and (2) and (3) remain unchanged. On the other hand, the problem becomes NP-complete for certain variants in which each universal variable appears exactly once. In the second part of the paper, we establish $\Pi_2^P$-completeness for $\forall \exists$ Not-All-Equal 3-SAT even if (1') the Boolean formula is linear and monotone, (2') each universal variable appears exactly once and each existential variable appears exactly three times, and (3') each clause is a disjunction of exactly three distinct variables that contains at most one universal variable. On the positive side, we uncover variants of $\forall \exists$ Not-All-Equal 3-SAT that are co-NP-complete or solvable in polynomial time.
Placing quantified variants of 3-SAT and Not-All-Equal 3-SAT in the polynomial hierarchy
Theoretical Computer Science
cs
2,609
15
The Burrows-Wheeler-Transform (BWT) is a reversible string transformation which plays a central role in text compression and is fundamental in many modern bioinformatics applications. The BWT is a permutation of the characters, which is in general more compressible and allows several different query types to be answered more efficiently than on the original string. It is easy to see that not every string is a BWT image, and exact characterizations of BWT images are known. We investigate a related combinatorial question. In many applications, a sentinel character dollar is added to mark the end of the string, and thus the BWT of a string ending with dollar contains exactly one dollar-character. Given a string w, we ask in which positions, if any, the dollar-character can be inserted to turn w into the BWT image of a word ending with dollar. We show that this depends only on the standard permutation of w and present an O(n log n)-time algorithm for identifying all such positions, improving on the naive quadratic time algorithm. We also give a combinatorial characterization of such positions and develop bounds on their number and value. This is an extended version of [Giuliani et al. ICTCS 2019].
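To make the setting concrete, a minimal sketch of the sentinel-terminated BWT via sorted rotations and of one common convention for the standard permutation mentioned above; the word "banana" is an arbitrary example, and this brute-force construction is quadratic, not the O(n log n) algorithm of the paper (whose convention for the standard permutation may also differ):

```python
def bwt(word, sentinel="$"):
    """BWT of word + sentinel: last column of the lexicographically sorted rotations."""
    s = word + sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def standard_permutation(w):
    """One common convention: positions of w listed in stable-sorted character order."""
    return [i for _, i in sorted((c, i) for i, c in enumerate(w))]

print(bwt("banana"))                    # 'annb$aa'
print(standard_permutation("annb$aa"))  # [4, 0, 5, 6, 3, 1, 2]
```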
When a Dollar Makes a BWT
Theoretical Computer Science
cs
2,609
15
In the Directed Disjoint Paths problem, we are given a digraph $D$ and a set of requests $\{(s_1, t_1), \ldots, (s_k, t_k)\}$, and the task is to find a collection of pairwise vertex-disjoint paths $\{P_1, \ldots, P_k\}$ such that each $P_i$ is a path from $s_i$ to $t_i$ in $D$. This problem is NP-complete for fixed $k=2$ and W[1]-hard with parameter $k$ in DAGs. A few positive results are known under restrictions on the input digraph, such as being planar or having bounded directed tree-width, or under relaxations of the problem, such as allowing for vertex congestion. Positive results are scarce, however, for general digraphs. In this article we propose a novel global congestion metric for the problem: we only require the paths to be "disjoint enough", in the sense that they must behave properly not in the whole graph, but in an unspecified part of size prescribed by a parameter. Namely, in the Disjoint Enough Directed Paths problem, given an $n$-vertex digraph $D$, a set of $k$ requests, and non-negative integers $d$ and $s$, the task is to find a collection of paths connecting the requests such that at least $d$ vertices of $D$ occur in at most $s$ paths of the collection. We study the parameterized complexity of this problem for a number of choices of the parameter, including the directed tree-width of $D$. Among other results, we show that the problem is W[1]-hard in DAGs with parameter $d$ and, on the positive side, we give an algorithm in time $\mathcal{O}(n^{d+2} \cdot k^{d\cdot s})$ and a kernel of size $d \cdot 2^{k-s}\cdot \binom{k}{s} + 2k$ in general digraphs. This latter result has consequences for the Steiner Network problem: we show that it is FPT parameterized by the number $k$ of terminals and $p$, where $p = n - q$ and $q$ is the size of the solution.
A relaxation of the Directed Disjoint Paths problem: a global congestion metric helps
Theoretical Computer Science
cs
2,609
15
In the Minimum Installation Path problem, we are given a graph $G$ with edge weights $w(.)$ and two vertices $s,t$ of $G$. We want to assign a non-negative power $p(v)$ to each vertex $v$ of $G$ so that the edges $uv$ such that $p(u)+p(v)$ is at least $w(uv)$ contain some $s$-$t$-path, and minimize the sum of assigned powers. In the Minimum Barrier Shrinkage problem, we are given a family of disks in the plane and two points $x$ and $y$ lying outside the disks. The task is to shrink the disks, each one possibly by a different amount, so that we can draw an $x$-$y$ curve that is disjoint from the interior of the shrunken disks, and the sum of the decreases in the radii is minimized. We show that the Minimum Installation Path and the Minimum Barrier Shrinkage problems (or, more precisely, the natural decision problems associated with them) are weakly NP-hard.
Hardness of Minimum Barrier Shrinkage and Minimum Installation Path
Theoretical Computer Science
cs
2,609
15
A set $D\subseteq V$ of a graph $G=(V,E)$ is called a neighborhood total dominating set of $G$ if $D$ is a dominating set and the subgraph of $G$ induced by the open neighborhood of $D$ has no isolated vertex. Given a graph $G$, \textsc{Min-NTDS} is the problem of finding a neighborhood total dominating set of $G$ of minimum cardinality. The decision version of \textsc{Min-NTDS} is known to be \textsf{NP}-complete for bipartite graphs and chordal graphs. In this paper, we extend this \textsf{NP}-completeness result to undirected path graphs, chordal bipartite graphs, and planar graphs. We also present a linear time algorithm for computing a minimum neighborhood total dominating set in proper interval graphs. We show that for a given graph $G=(V,E)$, \textsc{Min-NTDS} cannot be approximated within a factor of $(1-\varepsilon)\log |V|$, unless \textsf{NP$\subseteq$DTIME($|V|^{O(\log \log |V|)}$)} and can be approximated within a factor of $O(\log \Delta)$, where $\Delta$ is the maximum degree of the graph $G$. Finally, we show that \textsc{Min-NTDS} is \textsf{APX}-complete for graphs of degree at most $3$.
Algorithm and hardness results on neighborhood total domination in graphs
Theoretical Computer Science
cs
2,609
15
Lyndon words have been widely investigated and shown to be a useful tool to prove interesting combinatorial properties of words. In this paper we state new properties of both Lyndon and inverse Lyndon factorizations of a word $w$, with the aim of exploring their use in some classical queries on $w$. The main property we prove is related to a classical query on words. We prove that there are relations between the length of the longest common extension (or longest common prefix) $lcp(x,y)$ of two different suffixes $x,y$ of a word $w$ and the maximum length $\mathcal{M}$ of two consecutive factors of the inverse Lyndon factorization of $w$. More precisely, $\mathcal{M}$ is an upper bound on the length of $lcp(x,y)$. This result is in some sense stronger than the compatibility property, proved by Mantaci, Restivo, Rosone and Sciortino for the Lyndon factorization and here for the inverse Lyndon factorization. Roughly, the compatibility property allows us to extend the mutual order between local suffixes of (inverse) Lyndon factors to the suffixes of the whole word. A main tool used in the proof of the above results is a property that we state for factors $m_i$ with nonempty borders in an inverse Lyndon factorization: a nonempty border of $m_i$ cannot be a prefix of the next factor $m_{i+1}$. The last property we prove shows that if two words share a common overlap, then their Lyndon factorizations can be used to capture the common overlap of the two words. The above results open the way to the study of new applications of Lyndon words and inverse Lyndon words in the field of string comparison.
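For background, the classical Lyndon factorization that the abstract builds on can be computed in linear time by Duval's algorithm; a minimal sketch (this is the standard Lyndon factorization, not the inverse Lyndon factorization studied in the paper, and the example word is arbitrary):

```python
def lyndon_factorization(s):
    """Duval's algorithm: factor s into a lexicographically non-increasing
    sequence of Lyndon words, in linear time."""
    k, n = 0, len(s)
    factors = []
    while k < n:
        i, j = k, k + 1
        while j < n and s[i] <= s[j]:
            i = k if s[i] < s[j] else i + 1
            j += 1
        while k <= i:
            factors.append(s[k:k + j - i])
            k += j - i
    return factors

print(lyndon_factorization("banana"))   # ['b', 'an', 'an', 'a']
```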
Lyndon words versus inverse Lyndon words: queries on suffixes and bordered words
Theoretical Computer Science
cs
2,609
15
Research in the area of secure multi-party computation using a deck of playing cards, often called card-based cryptography, started from the introduction of the five-card trick protocol to compute the logical AND function by den Boer in 1989. Since then, many card-based protocols to compute various functions have been developed. In this paper, we propose two new protocols that securely compute the $n$-variable equality function (determining whether all inputs are equal) $E: \{0,1\}^n \rightarrow \{0,1\}$ using $2n$ cards. The first protocol can be generalized to compute any doubly symmetric function $f: \{0,1\}^n \rightarrow \mathbb{Z}$ using $2n$ cards, and any symmetric function $f: \{0,1\}^n \rightarrow \mathbb{Z}$ using $2n+2$ cards. The second protocol can be generalized to compute the $k$-candidate $n$-variable equality function $E: (\mathbb{Z}/k\mathbb{Z})^n \rightarrow \{0,1\}$ using $2 \lceil \lg k \rceil n$ cards.
Securely Computing the $n$-Variable Equality Function with $2n$ Cards
Theoretical Computer Science
cs
2,609
15
When elementary quantum systems, such as polarized photons, are used to transmit digital information, the uncertainty principle gives rise to novel cryptographic phenomena unachievable with traditional transmission media, e.g. a communications channel on which it is impossible in principle to eavesdrop without a high probability of disturbing the transmission in such a way as to be detected. Such a quantum channel can be used in conjunction with ordinary insecure classical channels to distribute random key information between two users with the assurance that it remains unknown to anyone else, even when the users share no secret information initially. We also present a protocol for coin-tossing by exchange of quantum messages, which is secure against traditional kinds of cheating, even by an opponent with unlimited computing power, but ironically can be subverted by use of a still subtler quantum phenomenon, the Einstein-Podolsky-Rosen paradox.
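Below is a minimal classical simulation of the sifting step of the key-distribution protocol described above: random bits and bases on the sending side, random measurement bases on the receiving side, and only the positions where the bases agree are kept. This ignores eavesdropping, noise and the quantum mechanics itself, and all parameter values are arbitrary.

```python
import random

def bb84_sift(n=64, seed=None):
    """Classical simulation of the basis-sifting step (no eavesdropper, no noise)."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.choice("+x") for _ in range(n)]   # rectilinear or diagonal
    bob_bases   = [rng.choice("+x") for _ in range(n)]

    # Measuring in the wrong basis yields a uniformly random bit.
    bob_bits = [b if ab == bb else rng.randint(0, 1)
                for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

    # Bases (not bits) are compared over the public channel; matching positions are kept.
    key_alice = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
    key_bob   = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]
    return key_alice, key_bob

ka, kb = bb84_sift(seed=1)
print(ka == kb, len(ka))   # True; roughly half of the positions survive sifting
```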
Quantum cryptography: Public key distribution and coin tossing
Theoretical Computer Science
quant-ph
2,609
65
We introduce a formal logical language, called conditional probability logic (CPL), which extends first-order logic and which can express probabilities, conditional probabilities and which can compare conditional probabilities. Intuitively speaking, although formal details are different, CPL can express the same kind of statements as some languages which have been considered in the artificial intelligence community. We also consider a way of making precise the notion of lifted Bayesian network, where this notion is a type of (lifted) probabilistic graphical model used in machine learning, data mining and artificial intelligence. A lifted Bayesian network (in the sense defined here) determines, in a natural way, a probability distribution on the set of all structures (in the sense of first-order logic) with a common finite domain $D$. Our main result is that for every "noncritical" CPL-formula $\varphi(\bar{x})$ there is a quantifier-free formula $\varphi^*(\bar{x})$ which is "almost surely" equivalent to $\varphi(\bar{x})$ as the cardinality of $D$ tends towards infinity. This is relevant for the problem of making probabilistic inferences on large domains $D$, because (a) the problem of evaluating, by "brute force", the probability of $\varphi(\bar{x})$ being true for some sequence $\bar{d}$ of elements from $D$ has, in general, (highly) exponential time complexity in the cardinality of $D$, and (b) the corresponding probability for the quantifier-free $\varphi^*(\bar{x})$ depends only on the lifted Bayesian network and not on $D$. The main result has two corollaries, one of which is a convergence law (and zero-one law) for noncritical CPL-formulas.
Conditional probability logic, lifted bayesian networks and almost sure quantifier elimination
Theoretical Computer Science
math
2,609
41
We define exact sequences in the enchilada category of $C^*$-algebras and correspondences, and prove that the reduced-crossed-product functor is not exact for the enchilada categories. Our motivation was to determine whether we can have a better understanding of the Baum-Connes conjecture by using enchilada categories. Along the way we prove numerous results showing that the enchilada category is rather strange.
Exact sequences in the Enchilada category
Theory and Applications of Categories
math
2,610
41
Let $\mathsf{PreOrd}(\mathbb C)$ be the category of internal preorders in an exact category $\mathbb C$. We show that the pair $(\mathsf{Eq}(\mathbb C), \mathsf{ParOrd}(\mathbb C))$ is a pretorsion theory in $\mathsf{PreOrd}(\mathbb C)$, where $\mathsf{Eq}(\mathbb C)$ and $\mathsf{ParOrd}(\mathbb C)$ are the full subcategories of internal equivalence relations and of internal partial orders in $\mathbb C$, respectively. We observe that $\mathsf{ParOrd}(\mathbb C)$ is a reflective subcategory of $\mathsf{PreOrd}(\mathbb C)$ such that each component of the unit of the adjunction is a pullback-stable regular epimorphism. The reflector $F:\mathsf{PreOrd}(\mathbb C)\to \mathsf{ParOrd}(\mathbb C)$ turns out to have stable units in the sense of Cassidy, H\'ebert and Kelly, thus inducing an admissible categorical Galois structure. In particular, when $\mathbb C$ is the category $\mathsf{Set}$ of sets, we show that this reflection induces a monotone-light factorization system (in the sense of Carboni, Janelidze, Kelly and Par\'e) in $\mathsf{PreOrd}(\mathsf{Set})$. A topological interpretation of our results in the category of Alexandroff-discrete spaces is also given, via the well-known isomorphism between this latter category and $\mathsf{PreOrd}(\mathsf{Set})$.
A new Galois structure in the category of internal preorders
Theory and Applications of Categories
math
2,610
41
We prove that the folk model category structure on the category of strict $\omega$-categories, introduced by Lafont, M\'etayer and Worytkiewicz, is monoidal, first, for the Gray tensor product and, second, for the join of $\omega$-categories, introduced by the first author and Maltsiniotis. We moreover show that the Gray tensor product induces, by adjunction, a tensor product of strict $(m,n)$-categories and that this tensor product is also compatible with the folk model category structure. In particular, we get a monoidal model category structure on the category of strict $\omega$-groupoids. We prove that this monoidal model category structure satisfies the monoid axiom, so that the category of Gray monoids, studied by the second author, bears a natural model category structure.
The folk model category structure on strict $\omega$-categories is monoidal
Theory and Applications of Categories
math
2,610
41
In this article the notion of virtual double category (also known as fc-multicategory) is extended as follows. While cells in a virtual double category classically have a horizontal multi-source and single horizontal target, the notion of augmented virtual double category introduced here extends the latter notion by including cells with empty horizontal target as well. Any augmented virtual double category comes with a built-in notion of "locally small object" and we describe advantages of using augmented virtual double categories as a setting for formal category theory rather than 2-categories, which are classically equipped with a notion of "admissible object" by means of a Yoneda structure in the sense of Street and Walters. An object is locally small precisely if it admits a horizontal unit, and we show that the notions of augmented virtual double category and virtual double category coincide in the presence of all horizontal units. Without assuming the existence of horizontal units we show that most of the basic theory for virtual double categories, such as that of restriction and composition of horizontal morphisms, extends to augmented virtual double categories. We introduce and study in augmented virtual double categories the notion of "pointwise" composition of horizontal morphisms, which formalises the classical composition of profunctors given by the coend formula (recalled after this entry).
Augmented virtual double categories
Theory and Applications of Categories
math
2,610
41
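For reference, the classical coend formula invoked at the end of the augmented-virtual-double-categories entry above, stated here in one common convention (the variance conventions are mine, not quoted from that paper): for profunctors $F\colon \mathcal A \nrightarrow \mathcal B$ and $G\colon \mathcal B \nrightarrow \mathcal C$ between small categories, their composite is computed pointwise as the coend
$$(G \odot F)(a,c) \;=\; \int^{b \in \mathcal B} F(a,b) \times G(b,c),$$
that is, the quotient of $\coprod_{b} F(a,b) \times G(b,c)$ that identifies the left action of a morphism of $\mathcal B$ on one factor with its right action on the other.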
One goal of applied category theory is to better understand networks appearing throughout science and engineering. Here we introduce "structured cospans" as a way to study networks with inputs and outputs. Given a functor $L \colon \mathsf{A} \to \mathsf{X}$, a structured cospan is a diagram in $\mathsf{X}$ of the form $L(a) \rightarrow x \leftarrow L(b)$. If $\mathsf{A}$ and $\mathsf{X}$ have finite colimits and $L$ is a left adjoint, we obtain a symmetric monoidal category whose objects are those of $\mathsf{A}$ and whose morphisms are isomorphism classes of structured cospans. This is a hypergraph category. However, it arises from a more fundamental structure: a symmetric monoidal double category where the horizontal 1-cells are structured cospans. We show how structured cospans solve certain problems in the closely related formalism of "decorated cospans", and explain how they work in some examples: electrical circuits, Petri nets, and chemical reaction networks.
Structured Cospans
Theory and Applications of Categories
math
2,610
41
Split opfibrations are functors equipped with a suitable choice of opcartesian lifts. The purpose of this paper is to characterise internal split opfibrations through separating the structure of a suitable choice of lifts from the property of these lifts being opcartesian. The underlying structure of an internal split opfibration is captured by an internal functor equipped with an internal cofunctor, while the property may be expressed as a pullback condition, akin to the simple condition on an internal functor to be an internal discrete opfibration. Furthermore, this approach provides two additional characterisations of internal split opfibrations, via the d\'ecalage construction and strict factorisation systems. For small categories, this theory clarifies several aspects of delta lenses which arise in computer science.
Internal split opfibrations and cofunctors
Theory and Applications of Categories
math
2,610
41
The category of involutive non-commutative sets encodes the structure of an involution compatible with a (co)associative (co)multiplication. We prove that the category of involutive bimonoids in a symmetric monoidal category is equivalent to the category of algebras over a PROP constructed from the category of involutive non-commutative sets.
PROPs for involutive monoids and involutive bimonoids
Theory and Applications of Categories
math
2,610
41
We show that the law of excluded middle holds in Voevodsky's simplicial model of type theory. As a corollary, excluded middle is compatible with univalence.
The law of excluded middle in the simplicial model of type theory
Theory and Applications of Categories
math
2,610
41
Restriction categories were established to handle maps that are partially defined with respect to composition. Tensor topology realises that monoidal categories have an intrinsic notion of space, and deals with objects and maps that are partially defined with respect to this spatial structure. We introduce a construction that turns a firm monoidal category into a restriction category and axiomatise the monoidal restriction categories that arise this way, called tensor-restriction categories.
Tensor-restriction categories
Theory and Applications of Categories
math
2,610
41
We generalize the notion of ends and coends in category theory to the realm of module categories over finite tensor categories. We call this new concept "module (co)end". This tool allows us to give different proofs to several known results in the theory of representations of finite tensor categories. As a new application, we present a description of the relative Serre functor for module categories in terms of a module coend, in a way analogous to the Morita-invariant description of the Nakayama functor of abelian categories presented in [J. Fuchs, G. Schaumann and C. Schweigert, Eilenberg-Watts calculus for finite categories and a bimodule Radford S^4 theorem, Trans. Amer. Math. Soc. 373 (2020), 1-40].
(Co)ends for representations of tensor categories
Theory and Applications of Categories
math
2,610
41
We go back to the roots of enriched category theory and study categories enriched in chain complexes; that is, we deal with differential graded categories (DG-categories for short). In particular, we recall weighted colimits and provide examples. We solve the 50-year-old question of how to characterize Cauchy complete DG-categories in terms of the existence of certain finite absolute colimits. As well as studying the interactions between absolute weighted colimits, we also examine the total complex of a chain complex in a DG-category as a non-absolute weighted colimit.
Cauchy completeness for DG-categories
Theory and Applications of Categories
math
2,610
41
The reflexive completion of a category consists of the Set-valued functors on it that are canonically isomorphic to their double conjugate. After reviewing both this construction and Isbell conjugacy itself, we give new examples and revisit Isbell's main results from 1960 in a modern categorical context. We establish the sense in which reflexive completion is functorial, and find conditions under which two categories have equivalent reflexive completions. We describe the relationship between the reflexive and Cauchy completions, determine exactly which limits and colimits exist in an arbitrary reflexive completion, and make precise the sense in which the reflexive completion of a category is the intersection of the categories of covariant and contravariant functors on it.
Isbell conjugacy and the reflexive completion
Theory and Applications of Categories
math
2,610
41
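One standard way to state the Isbell conjugacy referred to in the entry above, recorded here as a reminder rather than as a quotation from the paper: for a small category $\mathcal A$, a presheaf $X\colon \mathcal A^{\mathrm{op}} \to \mathbf{Set}$ has conjugate $X^{\vee}\colon \mathcal A \to \mathbf{Set}$ given by
$$X^{\vee}(a) \;=\; [\mathcal A^{\mathrm{op}}, \mathbf{Set}]\bigl(X, \mathcal A(-,a)\bigr),$$
and dually a covariant functor $Y\colon \mathcal A \to \mathbf{Set}$ has conjugate
$$Y^{\wedge}(a) \;=\; [\mathcal A, \mathbf{Set}]\bigl(Y, \mathcal A(a,-)\bigr).$$
A presheaf is reflexive when the canonical map $X \to X^{\vee\wedge}$ is an isomorphism, and the reflexive completion is the category of reflexive presheaves.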
We determine the existential completion of a primary doctrine, and we prove that the 2-monad obtained from it is lax-idempotent, and that the 2-category of existential doctrines is isomorphic to the 2-category of algebras for this 2-monad. We also show that the existential completion of an elementary doctrine is again elementary. Finally we extend the notion of exact completion of an elementary existential doctrine to an arbitrary elementary doctrine.
The existential completion
Theory and Applications of Categories
math
2,610
41
One can associate to any strict globular $\omega$-category three augmented simplicial nerves called the globular nerve, the branching and the merging semi-cubical nerves. If this strict globular $\omega$-category is freely generated by a precubical set, then the corresponding homology theories contain different information about the geometry of the higher dimensional automaton modeled by the precubical set. Adding inverses in this $\omega$-category to any morphism of dimension greater than 2 and with respect to any composition laws of dimension greater than 1 does not change these homology theories. In such a framework, the globular nerve always satisfies the Kan condition. On the other hand, the branching and merging nerves never satisfy it, except in some very particular and uninteresting situations. In this paper, we introduce two new nerves (the branching and merging semi-globular nerves) satisfying the Kan condition and having conjecturally the same simplicial homology as the branching and merging semi-cubical nerves respectively in such a framework. The latter conjecture is related to the thin elements conjecture already introduced in our previous papers.
The branching nerve of HDA and the Kan condition
Theory and Applications of Categories
math
2,610
41
Many people have proposed definitions of `weak n-category'. Ten of them are presented here. Each definition is given in two pages, with a further two pages on what happens when n = 0, 1, or 2. The definitions can be read independently. Chatty bibliography follows.
A Survey of Definitions of n-Category
Theory and Applications of Categories
math
2,610
41
This paper constructs model structures on the categories of coalgebras and pointed irreducible coalgebras over an operad. The underlying chain-complex is assumed to be unbounded and the results for bounded coalgebras over an operad are derived from the unbounded case.
Homotopy theory of coalgebras over operads
Theory and Applications of Categories
math
2,610
41
A 2-group is a "categorified" version of a group, in which the underlying set G has been replaced by a category and the multiplication map has been replaced by a functor. Various versions of this notion have already been explored; our goal here is to provide a detailed introduction to two, which we call "weak" and "coherent" 2-groups. A weak 2-group is a weak monoidal category in which every morphism has an inverse and every object x has a "weak inverse": an object y such that x tensor y and y tensor x are isomorphic to 1. A coherent 2-group is a weak 2-group in which every object x is equipped with a specified weak inverse x* and isomorphisms i_x: 1 -> x tensor x* and e_x: x* tensor x -> 1 forming an adjunction. We describe 2-categories of weak and coherent 2-groups and an "improvement" 2-functor that turns weak 2-groups into coherent ones, and prove that this 2-functor is a 2-equivalence of 2-categories. We internalize the concept of coherent 2-group, which gives a quick way to define Lie 2-groups. We give a tour of examples, including the "fundamental 2-group" of a space and various Lie 2-groups. We also explain how coherent 2-groups can be classified in terms of 3rd cohomology classes in group cohomology. Finally, using this classification, we construct for any connected and simply-connected compact simple Lie group G a family of 2-groups G_hbar (for integral values of hbar) having G as its group of objects and U(1) as the group of automorphisms of its identity object. These 2-groups are built using Chern-Simons theory, and are closely related to the Lie 2-algebras g_hbar (for real hbar) described in a companion paper.
Higher-Dimensional Algebra V: 2-Groups
Theory and Applications of Categories
math
2,610
41
For a differential graded k-quiver Q we define the free A-infinity-category FQ generated by Q. The main result is that for an arbitrary A-infinity-category A the restriction A-infinity-functor A_\infty(FQ,A) -> A_1(Q,A) is an equivalence, where objects of the last A-infinity-category are morphisms of differential graded k-quivers Q -> A.
Free ${A}_\infty$-categories
Theory and Applications of Categories
math
2,610
41
The representation theory for categorical groups is constructed. Each categorical group determines a monoidal bicategory of representations. Typically, these categories contain representations which are indecomposable but not irreducible. A simple example is computed in explicit detail.
Categorical representations of categorical groups
Theory and Applications of Categories
math
2,610
41
In order to apply nonstandard methods to modern algebraic geometry, as a first step in this paper we study the applications of nonstandard constructions to category theory. It turns out that many categorial properties are well behaved under enlargements.
Enlargements of Categories
Theory and Applications of Categories
math
2,610
41
Anti-unification refers to the process of generalizing two (or more) goals into a single, more general, goal that captures some of the structure that is common to all initial goals. Typically, one is interested in computing what is often called a most specific generalization, that is, a generalization that captures a maximal amount of shared structure (a sketch of the classical term-level operation follows this entry). In this work we address the problem of anti-unification in CLP, where goals can be seen as unordered sets of atoms and/or constraints. We show that while the concept of a most specific generalization can easily be defined in this context, computing it becomes an NP-complete problem. We subsequently introduce a generalization algorithm that computes a well-defined abstraction whose computation can be bounded by a polynomial execution time. Initial experiments show that even a naive implementation of our algorithm produces acceptable generalizations in an efficient way. Under consideration for acceptance in TPLP.
Anti-unification in Constraint Logic Programming
Theory and Practice of Logic Programming
cs
2,611
15
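The sketch below is a minimal, hypothetical illustration of classical term-level anti-unification (Plotkin's least general generalization), which the CLP setting above builds on; it does not implement the set-of-atoms generalization or the polynomial-time abstraction studied in the paper, and the term representation and variable naming are my own choices.

def lgg(t1, t2, table=None, counter=None):
    # Terms are constants (strings/ints) or tuples ('functor', arg1, ..., argk).
    # Distinct mismatching pairs get distinct fresh variables, but the same
    # pair always reuses the same variable; that reuse is what makes the
    # result a *most specific* generalization at the term level.
    if table is None:
        table, counter = {}, [0]
    if t1 == t2:
        return t1
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        return (t1[0],) + tuple(lgg(a, b, table, counter)
                                for a, b in zip(t1[1:], t2[1:]))
    if (t1, t2) not in table:
        table[(t1, t2)] = "_V%d" % counter[0]
        counter[0] += 1
    return table[(t1, t2)]

# f(a, g(a)) and f(b, g(b)) generalize to f(V0, g(V0)), not f(V0, g(V1)).
print(lgg(("f", "a", ("g", "a")), ("f", "b", ("g", "b"))))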
Answer Set Programming (ASP) is a well-known declarative formalism in logic programming. Efficient implementations made it possible to apply ASP in many scenarios, ranging from deductive database applications to the solution of hard combinatorial problems. State-of-the-art ASP systems are based on the traditional ground\&solve approach and are general-purpose implementations, i.e., they are essentially built once for any kind of input program. In this paper, we propose an extended architecture for ASP systems, in which parts of the input program are compiled into an ad-hoc evaluation algorithm (i.e., we obtain a specific binary for a given program), and might not be subject to the grounding step. To this end, we identify a condition that allows the compilation of a sub-program, and present the related partial compilation technique. Importantly, we have implemented the new approach on top of a well-known ASP solver and conducted an experimental analysis on publicly-available benchmarks. Results show that our compilation-based approach improves on the state of the art in various scenarios, including cases in which the input program is stratified or the grounding blow-up makes the evaluation impractical with traditional ASP systems.
Partial Compilation of ASP Programs
Theory and Practice of Logic Programming
cs
2,611
15
This paper proposes the use of Constraint Logic Programming (CLP) to model SQL queries in a data-independent abstract layer by focusing on some semantic properties for signalling possible errors in such queries. First, we define a translation from SQL to Datalog, and from Datalog to CLP, so that solving this CLP program will give information about inconsistency, tautology, and possible simplifications. We use different constraint domains which are mapped to SQL types, and propose that they cooperate to improve accuracy. Our approach leverages a deductive system that includes SQL and Datalog, and we present an implementation in this system which is currently being tested in the classroom, showing its advantages and differences with respect to other approaches, as well as some performance data. This paper is under consideration for acceptance in TPLP.
Applying Constraint Logic Programming to SQL Semantic Analysis
Theory and Practice of Logic Programming
cs
2,611
15
Epistemic Logic Programs (ELPs) extend Answer Set Programming (ASP) with epistemic negation and have received renewed interest in recent years. This led to the development of new research and efficient solving systems for ELPs. In practice, ELPs are often written in a modular way, where each module interacts with other modules by accepting sets of facts as input, and passing on sets of facts as output. An interesting question then presents itself: under which conditions can such a module be replaced by another one without changing the outcome, for any set of input facts? This problem is known as uniform equivalence, and has been studied extensively for ASP. For ELPs, however, such an investigation is, as of yet, missing. In this paper, we therefore propose a characterization of uniform equivalence that can be directly applied to the language of state-of-the-art ELP solvers. We also investigate the computational complexity of deciding uniform equivalence for two ELPs, and show that it is on the third level of the polynomial hierarchy.
On Uniform Equivalence of Epistemic Logic Programs
Theory and Practice of Logic Programming
cs
2,611
15
We present a system for online composite event recognition over streaming positions of commercial vehicles. Our system employs a data enrichment module, augmenting the mobility data with external information, such as weather data and proximity to points of interest. In addition, the composite event recognition module, based on a highly optimised logic programming implementation of the Event Calculus, consumes the enriched data and identifies activities that are beneficial in fleet management applications. We evaluate our system on large, real-world data from commercial vehicles, and illustrate its efficiency. Under consideration for acceptance in TPLP.
Online Event Recognition from Moving Vehicles: Application Paper
Theory and Practice of Logic Programming
cs
2,611
15
A common feature in Answer Set Programming is the use of a second negation, stronger than default negation and sometimes called explicit, strong or classical negation. This explicit negation is normally used in front of atoms, rather than allowing its use as a regular operator. In this paper we consider the arbitrary combination of explicit negation with nested expressions, as defined by Lifschitz, Tang and Turner. We extend the concept of reduct for this new syntax and then prove that it can be captured by an extension of Equilibrium Logic with this second negation. We study some properties of this variant and compare it to the already known combination of Equilibrium Logic with Nelson's strong negation. Under consideration for acceptance in TPLP.
Revisiting Explicit Negation in Answer Set Programming
Theory and Practice of Logic Programming
cs
2,611
15
The input language of the answer set solver clingo is based on the definition of a stable model proposed by Paolo Ferraris. The semantics of the ASP-Core language, developed by the ASP Standardization Working Group, uses the approach to stable models due to Wolfgang Faber, Nicola Leone, and Gerald Pfeifer. The two languages are based on different versions of the stable model semantics, and the ASP-Core document requires, "for the sake of an uncontroversial semantics," that programs avoid the use of recursion through aggregates. In this paper we prove that the absence of recursion through aggregates does indeed guarantee the equivalence between the two versions of the stable model semantics, and show how that requirement can be relaxed without violating the equivalence property. The paper is under consideration for publication in Theory and Practice of Logic Programming.
Relating Two Dialects of Answer Set Programming
Theory and Practice of Logic Programming
cs
2,611
15
Stream reasoning systems are designed for complex decision-making from possibly infinite, dynamic streams of data. Modern approaches to stream reasoning are usually performing their computations using stand-alone solvers, which incrementally update their internal state and return results as the new portions of data streams are pushed. However, the performance of such approaches degrades quickly as the rates of the input data and the complexity of decision problems are growing. This problem was already recognized in the area of stream processing, where systems became distributed in order to allocate vast computing resources provided by clouds. In this paper we propose a distributed approach to stream reasoning that can efficiently split computations among different solvers communicating their results over data streams. Moreover, in order to increase the throughput of the distributed system, we suggest an interval-based semantics for the LARS language, which enables significant reductions of network traffic. Performed evaluations indicate that the distributed stream reasoning significantly outperforms existing stand-alone LARS solvers when the complexity of decision problems and the rate of incoming data are increasing. Under consideration for acceptance in Theory and Practice of Logic Programming.
A Distributed Approach to LARS Stream Reasoning (System paper)
Theory and Practice of Logic Programming
cs
2,611
15
With the ever-growing demand for semantic Web services over large databases, efficient evaluation of Datalog queries is attracting renewed interest among researchers and industry experts. In this scenario, to reduce memory consumption and possibly optimize execution times, the paper proposes novel techniques to determine an optimal indexing schema for the underlying database together with suitable body-orderings for the Datalog rules (a toy illustration of the role of indexing follows this entry). The new approach is compared with the standard execution plans implemented in DLV over widely used ontological benchmarks. The results confirm that the memory usage can be significantly reduced without paying any cost in efficiency. This paper is under consideration in Theory and Practice of Logic Programming (TPLP).
Precomputing Datalog evaluation plans in large-scale scenarios
Theory and Practice of Logic Programming
cs
2,611
15
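As a toy illustration of why an indexing schema matters when evaluating a recursive rule such as ancestor(X,Z) :- parent(X,Y), ancestor(Y,Z), here is a hypothetical Python fragment; the facts and names are invented, and this has nothing to do with DLV's actual data structures, execution plans, or the ontological benchmarks used in the paper above.

from collections import defaultdict

# Invented EDB facts for parent/2.
parent = {("alice", "bob"), ("bob", "carol"), ("carol", "dave")}

# Index parent/2 on its first argument: once X is bound while evaluating the
# rule body, the matching tuples are found by one dictionary lookup instead
# of a scan of the whole relation.
parent_by_first = defaultdict(set)
for x, y in parent:
    parent_by_first[x].add(y)

def ancestors(x):
    # Compute all Z with ancestor(x, Z) by repeatedly joining the current
    # frontier against the index.
    reached, frontier = set(), {x}
    while frontier:
        frontier = {z for y in frontier for z in parent_by_first[y]} - reached
        reached |= frontier
    return reached

print(ancestors("alice"))  # {'bob', 'carol', 'dave'}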
Whereas the operation of forgetting has recently seen a considerable amount of attention in the context of Answer Set Programming (ASP), most of it has focused on theoretical aspects, leaving the practical issues largely untouched. Recent studies include results about what sets of properties operators should satisfy, as well as the abstract characterization of several operators and their theoretical limits. However, no concrete operators have been investigated. In this paper, we address this issue by presenting the first concrete operator that satisfies strong persistence - a property that seems to best capture the essence of forgetting in the context of ASP - whenever this is possible, and many other important properties. The operator is syntactic, limiting the computation of the forgetting result to manipulating the rules in which the atoms to be forgotten occur, naturally yielding a forgetting result that is close to the original program. This paper is under consideration for acceptance in TPLP.
A Syntactic Operator for Forgetting that Satisfies Strong Persistence
Theory and Practice of Logic Programming
cs
2,611
15
We investigate the problem of cost-optimal planning in ASP. Current ASP planners can be trivially extended to a cost-optimal one by adding weak constraints, but only for a given makespan (number of steps). It is desirable to have a planner that guarantees global optimality. In this paper, we present two approaches to addressing this problem. First, we show how to engineer a cost-optimal planner composed of two ASP programs running in parallel. Using lessons learned from this, we then develop an entirely new approach to cost-optimal planning, stepless planning, which is completely free of makespan. Experiments to compare the two approaches with the only known cost-optimal planner in SAT reveal good potentials for stepless planning in ASP. The paper is under consideration for acceptance in TPLP.
Domain-Independent Cost-Optimal Planning in ASP
Theory and Practice of Logic Programming
cs
2,611
15
Rewriting logic is naturally concurrent: several subterms of the state term can be rewritten simultaneously. But state terms are global, which makes compositionality difficult to achieve. Compositionality here means being able to decompose a complex system into its functional components and code each as an isolated and encapsulated system. Our goal is to help bringing compositionality to system specification in rewriting logic. The base of our proposal is the operation that we call synchronous composition. We discuss the motivations and implications of our proposal, formalize it for rewriting logic and also for transition structures, to be used as semantics, and show the power of our approach with some examples. This paper is under consideration in Theory and Practice of Logic Programming (TPLP).
Compositional specification in rewriting logic
Theory and Practice of Logic Programming
cs
2,611
15
Standardization of solver input languages has been a main driver for the growth of several areas within knowledge representation and reasoning, fostering the exploitation in actual applications. In this document we present the ASP-Core-2 standard input language for Answer Set Programming, which has been adopted in ASP Competition events since 2013.
ASP-Core-2 Input Language Format
Theory and Practice of Logic Programming
cs
2,611
15
Powerful formalisms for abstract argumentation have been proposed, among them abstract dialectical frameworks (ADFs) that allow for a succinct and flexible specification of the relationship between arguments, and the GRAPPA framework which allows argumentation scenarios to be represented as arbitrary edge-labelled graphs. The complexity of ADFs and GRAPPA is located beyond NP and ranges up to the third level of the polynomial hierarchy. The combined complexity of Answer Set Programming (ASP) exactly matches this complexity when programs are restricted to predicates of bounded arity. In this paper, we exploit this coincidence and present novel efficient translations from ADFs and GRAPPA to ASP. More specifically, we provide reductions for the five main ADF semantics of admissible, complete, preferred, grounded, and stable interpretations, and exemplify how these reductions need to be adapted for GRAPPA for the admissible, complete and preferred semantics. Under consideration in Theory and Practice of Logic Programming (TPLP).
Solving Advanced Argumentation Problems with Answer Set Programming
Theory and Practice of Logic Programming
cs
2,611
15
Recent technological advances have led to unprecedented amounts of generated data that originate from the Web, sensor networks and social media. Analytics in terms of defeasible reasoning - for example for decision making - could provide richer knowledge of the underlying domain. Traditionally, defeasible reasoning has focused on complex knowledge structures over small to medium amounts of data, but recent research efforts have attempted to parallelize the reasoning process over theories with large numbers of facts. Such work has shown that traditional defeasible logics come with overheads that limit scalability. In this work, we design a new logic for defeasible reasoning, thus ensuring scalability by design. We establish several properties of the logic, including its relation to existing defeasible logics. Our experimental results indicate that our approach is indeed scalable and defeasible reasoning can be applied to billions of facts.
Rethinking Defeasible Reasoning: A Scalable Approach
Theory and Practice of Logic Programming
cs
2,611
15
Answer set programming (ASP) is a paradigm for modeling knowledge intensive domains and solving challenging reasoning problems. In ASP solving, a typical strategy is to preprocess problem instances by rewriting complex rules into simpler ones. Normalization is a rewriting process that removes extended rule types altogether in favor of normal rules. Recently, such techniques led to optimization rewriting in ASP, where the goal is to boost answer set optimization by refactoring the optimization criteria of interest. In this paper, we present a novel, general, and effective technique for optimization rewriting based on comparator networks, which are specific kinds of circuits for reordering the elements of vectors (a toy comparator network is sketched after this entry). The idea is to connect an ASP encoding of a comparator network to the literals being optimized and to redistribute the weights of these literals over the structure of the network. The encoding captures information about the weight of an answer set in auxiliary atoms in a structured way that is proven to yield exponential improvements during branch-and-bound optimization on an infinite family of example programs. The comparator network used can be tuned freely, e.g., to find the best size for a given benchmark class. Experiments show accelerated optimization performance on several benchmark problems.
Boosting Answer Set Optimization with Weighted Comparator Networks
Theory and Practice of Logic Programming
cs
2,611
15
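To make the notion of comparator network in the entry above concrete, here is a small, self-contained Python sketch of an odd-even transposition network acting on a 0/1 vector; it is only an analogy for the circuits the paper encodes in ASP, not that encoding, the network topology it uses, or its weight-redistribution scheme.

def comparator(x, y):
    # A single comparator: route the larger value to the first output wire.
    return max(x, y), min(x, y)

def odd_even_transposition(values):
    # A fixed, data-independent sequence of comparators on adjacent wires:
    # that data-independence is what makes it a *network* rather than a
    # branching sorting algorithm.
    v = list(values)
    n = len(v)
    for rnd in range(n):
        for i in range(rnd % 2, n - 1, 2):
            v[i], v[i + 1] = comparator(v[i], v[i + 1])
    return v

# On a 0/1 vector the output is sorted, so the number of leading 1s encodes
# how many of the input literals were true.
print(odd_even_transposition([0, 1, 1, 0, 1]))  # [1, 1, 1, 0, 0]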
We present a solution to real-world train scheduling problems, involving routing, scheduling, and optimization, based on Answer Set Programming (ASP). To this end, we pursue a hybrid approach that extends ASP with difference constraints to account for a fine-grained timing. More precisely, we exemplarily show how the hybrid ASP system clingo[DL] can be used to tackle demanding planning-and-scheduling problems. In particular, we investigate how to boost performance by combining distinct ASP solving techniques, such as approximations and heuristics, with preprocessing and encoding techniques for tackling large-scale, real-world train scheduling instances. Under consideration in Theory and Practice of Logic Programming (TPLP)
Train Scheduling with Hybrid Answer Set Programming
Theory and Practice of Logic Programming
cs
2,611
15
Abstraction is a well-known approach to simplify a complex problem by over-approximating it with a deliberate loss of information. It was not considered so far in Answer Set Programming (ASP), a convenient tool for problem solving. We introduce a method to automatically abstract ASP programs that preserves their structure by reducing the vocabulary while ensuring an over-approximation (i.e., each original answer set maps to some abstract answer set). This allows for generating partial answer set candidates that can help with approximation of reasoning. Computing the abstract answer sets is intuitively easier due to a smaller search space, at the cost of encountering spurious answer sets. Faithful (non-spurious) abstractions may be used to represent projected answer sets and to guide solvers in answer set construction. For dealing with spurious answer sets, we employ an ASP debugging approach to help with abstraction refinement, which determines atoms as badly omitted and adds them back in the abstraction. As a show case, we apply abstraction to explain unsatisfiability of ASP programs in terms of blocker sets, which are the sets of atoms such that abstraction to them preserves unsatisfiability. Their usefulness is demonstrated by experimental results.
Omission-based Abstraction for Answer Set Programs
Theory and Practice of Logic Programming
cs
2,611
15
Existential rules are a positive fragment of first-order logic that generalizes function-free Horn rules by allowing existentially quantified variables in rule heads. This family of languages has recently attracted significant interest in the context of ontology-mediated query answering. Forward chaining, also known as the chase, is a fundamental tool for computing universal models of knowledge bases, which consist of existential rules and facts. Several chase variants have been defined, which differ on the way they handle redundancies. A set of existential rules is bounded if it ensures the existence of a bound on the depth of the chase, independently from any set of facts. Deciding if a set of rules is bounded is an undecidable problem for all chase variants. Nevertheless, when computing universal models, knowing that a set of rules is bounded for some chase variant does not help much in practice if the bound remains unknown or even very large. Hence, we investigate the decidability of the k-boundedness problem, which asks whether the depth of the chase for a given set of rules is bounded by an integer k. We identify a general property which, when satisfied by a chase variant, leads to the decidability of k-boundedness. We then show that the main chase variants satisfy this property, namely the oblivious, semi-oblivious (aka Skolem), and restricted chase, as well as their breadth-first versions. This paper is under consideration for publication in Theory and Practice of Logic Programming.
Characterizing Boundedness in Chase Variants
Theory and Practice of Logic Programming
cs
2,611
15
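A tiny example, mine rather than the paper's, of how the chase variants compared above can behave differently: take the single existential rule
$$r(x,y) \;\rightarrow\; \exists z\, r(x,z)$$
together with the fact $r(a,b)$. The restricted chase never applies the rule, since $r(a,b)$ already witnesses the existential conclusion, so its depth is $0$; the semi-oblivious (Skolem) chase fires once per binding of the frontier variable $x$ and adds the single atom $r(a,z_1)$; the oblivious chase fires once per trigger and generates $r(a,z_1), r(a,z_2), \dots$ without bound.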
Concolic testing is a popular software verification technique based on a combination of concrete and symbolic execution. Its main focus is finding bugs and generating test cases with the aim of maximizing code coverage. A previous approach to concolic testing in logic programming was not sound because it only dealt with positive constraints (by means of substitutions) but could not represent negative constraints. In this paper, we present a novel framework for concolic testing of CLP programs that generalizes the previous technique. In the CLP setting, one can represent both positive and negative constraints in a natural way, thus giving rise to a sound and (potentially) more efficient technique. Defining verification and testing techniques for CLP programs is increasingly relevant since this framework is becoming popular as an intermediate representation to analyze programs written in other programming paradigms.
Concolic Testing in CLP
Theory and Practice of Logic Programming
cs
2,611
15
On top of a neural network-based dependency parser and a graph-based natural language processing module we design a Prolog-based dialog engine that explores interactively a ranked fact database extracted from a text document. We reorganize dependency graphs to focus on the most relevant content elements of a sentence and integrate sentence identifiers as graph nodes. Additionally, after ranking the graph we take advantage of the implicit semantic information that dependency links and WordNet bring in the form of subject-verb-object, is-a and part-of relations. Working on the Prolog facts and their inferred consequences, the dialog engine specializes the text graph with respect to a query and reveals interactively the document's most relevant content elements. The open-source code of the integrated system is available at https://github.com/ptarau/DeepRank . Under consideration in Theory and Practice of Logic Programming (TPLP).
Interactive Text Graph Mining with a Prolog-based Dialog Engine
Theory and Practice of Logic Programming
cs
2,611
15
Let V be a smooth variety defined over the real numbers. Every algebraic vector bundle on V induces a complex vector bundle on the underlying topological space V(C), and the involution coming from complex conjugation makes it a Real vector bundle in the sense of Atiyah. This association leads to a natural map from the algebraic K-theory of V to Atiyah's ``Real K-theory'' of V(C). Passing to finite coefficients Z/m, we show that the maps from K_n(V ; Z/m) to KR^{-n}(V(C);Z/m) are isomorphisms when n is at least the dimension of V, at least when m is a power of two. Our key descent result is a comparison of the K-theory space of V with the homotopy fixed points (for complex conjugation) of the K-theory space of the complex variety V(C). When V is the affine variety of the d-sphere S, it turns out that KR*(V(C))=KO*(S). In this case we show that for all nonnegative n we have K_n(V ; Z/m) = KO^{-n}(S ; Z/m).
Algebraic and Real K-theory of Algebraic varieties
Topology
math
2,626
41
For $\Gamma$ a relatively hyperbolic group, we construct a model for the universal space among $\Gamma$-spaces with isotropy on the family VC of virtually cyclic subgroups of $\Gamma$. We provide a recipe for identifying the maximal infinite virtually cyclic subgroups of Coxeter groups which are lattices in $O^+(n,1)= \mathrm{Isom}(\mathbb H^n)$. We use the information we obtain to explicitly compute the lower algebraic K-theory of the Coxeter group $\gt$ (a non-uniform lattice in $O^+(3,1)$). Part of this computation involves calculating certain Waldhausen Nil-groups for $\mathbb Z[D_2]$, $\mathbb Z[D_3]$.
Relative hyperbolicity, classifying spaces, and lower algebraic K-theory
Topology
math
2,626
41
We study the homotopy types of complements of arrangements of n transverse planes in R^4, obtaining a complete classification for n <= 6, and lower bounds for the number of homotopy types in general. Furthermore, we show that the homotopy type of a 2-arrangement in R^4 is not determined by the cohomology ring, thereby answering a question of Ziegler. The invariants that we use are derived from the characteristic varieties of the complement. The nature of these varieties illustrates the difference between real and complex arrangements.
Homotopy types of complements of 2-arrangements in R^4
Topology
math
2,626
41
We study phantom maps and homology theories in a stable homotopy category S via a certain Abelian category A. We express the group P(X,Y) of phantom maps X -> Y as an Ext group in A, and give conditions on X or Y which guarantee that it vanishes. We also determine P(X,HB). We show that any composite of two phantom maps is zero, and use this to reduce Margolis's axiomatisation conjecture to an extension problem. We show that a certain functor S -> A is the universal example of a homology theory with values in an AB 5 category and compare this with some results of Freyd.
Phantom Maps and Homology Theories
Topology
math
2,626
41
We construct modular categories from Hecke algebras at roots of unity. For a special choice of the framing parameter, we recover the Reshetikhin-Turaev invariants of closed 3-manifolds constructed from the quantum groups U_q sl(N) by Reshetikhin-Turaev and Turaev-Wenzl, and from skein theory by Yokota. We then discuss the choice of the framing parameter. This leads, for any rank N and level K, to a modular category \tilde H^{N,K} and a reduced invariant \tilde\tau_{N,K}. If N and K are coprime, then this invariant coincides with the known PSU(N) invariant at level K. If gcd(N,K)=d>1, then we show that the reduced invariant admits spin or cohomological refinements, with a nice decomposition formula which extends a theorem of H. Murakami.
Hecke algebras, modular categories and 3-manifolds quantum invariants
Topology
math
2,626
41
We develop a theory of tubular neighborhoods for the lower strata in manifold stratified spaces with two strata. In these topologically stratified spaces, manifold approximate fibrations and teardrops play the role that fibre bundles and mapping cylinders play in smoothly stratified spaces. Applications include the classification of neighborhood germs, the construction of exotic stratifications, a multiparameter isotopy extension theorem and an h-cobordism extension theorem.
Neighborhoods in Stratified Spaces with Two Strata
Topology
math
2,626
41
Operators on the ring of algebraically constructible functions are used to compute local obstructions for a four-dimensional semialgebraic set to be homeomorphic to a real algebraic set. The link operator and arithmetic operators yield $2^{43}-43$ independent characteristic numbers mod 2, which generalize the Akbulut-King numbers in dimension three.
Topology of real algebraic sets of dimension 4: necessary conditions
Topology
math
2,626
41