Dataset columns:
query: string (lengths 1 to 13.4k)
pos: string (lengths 1 to 61k)
neg: string (lengths 1 to 63.9k)
query_lang: string (147 classes)
__index_level_0__: int64 (0 to 3.11M)
A note on the convergence analysis of a sparse grid multivariate probability density estimator
A note on the complexity of solving Poisson's equation for spaces of bounded mixed derivatives
On the Fundamental Limits of Recovering Tree Sparse Vectors From Noisy Linear Measurements
eng_Latn
700
Linearly constrained Bayesian matrix factorization for blind source separation
Unmixing of Hyperspectral Images using Bayesian Non-negative Matrix Factorization with Volume Prior
New exact solutions of Boiti-Leon-Manna-Pempinelli equation using extended F-expansion method
eng_Latn
701
An efficient algebraic multigrid preconditioned conjugate gradient solver
The Iterated Ritz Method: Basis, implementation and further development
Path collective variables without paths
eng_Latn
702
Lag weighted lasso for time series model
Oracle Properties and Finite Sample Inference of the Adaptive Lasso for Time Series Regression Models
Weighted Least-Squares Method for the Evaluation of Small-Angle X-ray Data Without Desmearing
eng_Latn
703
We describe a case of Gardnerella vaginalis colonization of the upper genital tract of the male partner of a woman with recurring bacterial vaginosis. G. vaginalis could not be cultured from the urethra but was cultured from semen. After treatment of the male partner with metronidazole, the woman had no more relapses of bacterial vaginosis.
Owing to the fact that there are more microbial than human cells in our body and that humans contain more microbial than human genes, the microbiome has huge potential to influence human physiology, both in health and in disease. The use of next-generation sequencing technologies has helped to elucidate functional, quantitative and mechanistic aspects of the complex microorganism–host interactions that underlie human physiology and pathophysiology. The microbiome of semen is a field of increasing scientific interest, although this microbial niche is currently understudied compared with other areas of microbiome research. However, emerging evidence is beginning to indicate that the seminal microbiome has important implications for the reproductive health of men, the health of the couple and even the health of offspring, owing to transfer of microorganisms to the partner and offspring. As this field expands, further carefully designed and well-powered studies are required to unravel the true nature and role of the seminal microbiome. Emerging evidence suggests that the seminal microbiome has implications for the reproductive health of men, their partner, and even the health of their offspring. In this Review, the authors describe the current evidence for a seminal microbiome and consider what the future holds for this field of research.
Location awareness, providing the ability to identify the location of sensors, machines, vehicles, and wearable devices, is a rapidly growing trend of the hyper-connected society and one of the key ingredients for the Internet of Things (IoT) era. In order to react properly to the information collected from things, the location information of things should be available at the data center. One challenge for IoT networks is to identify the location map of all nodes from partially observed distance information. The aim of this paper is to present an algorithm to recover the Euclidean distance matrix (and eventually the location map) from partially observed distance information. By casting the low-rank matrix completion problem as an unconstrained minimization problem on a Riemannian manifold, in which a notion of differentiability can be defined, we solve the low-rank matrix completion problem using a modified conjugate gradient algorithm. From the convergence analysis, we show that localization in Riemannian manifold using conjugate gradient (LRM-CG) converges linearly to the original Euclidean distance matrix under the extended Wolfe's conditions. From the numerical experiments, we demonstrate that the proposed method, called LRM-CG, is effective in recovering the Euclidean distance matrix.
eng_Latn
704
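A minimal numpy/scipy sketch of the recovery task described in the LRM-CG abstract above (row 704): reconstruct node locations, and hence the Euclidean distance matrix, from partially observed pairwise distances. It uses a plain smooth least-squares objective and L-BFGS rather than the paper's Riemannian conjugate gradient; the problem size, observation rate, and optimizer are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, d = 30, 2                       # number of nodes, embedding dimension (illustrative)
X_true = rng.normal(size=(n, d))   # ground-truth locations

def edm(X):
    """Squared Euclidean distance matrix of the rows of X."""
    g = np.sum(X * X, axis=1)
    return g[:, None] + g[None, :] - 2.0 * X @ X.T

D_true = edm(X_true)
mask = rng.random((n, n)) < 0.4    # observe roughly 40% of the pairwise distances
mask = np.triu(mask, 1)
mask = mask | mask.T

def objective(x):
    X = x.reshape(n, d)
    R = (edm(X) - D_true) * mask                    # residual on observed entries only
    f = np.sum(R * R)
    grad = 8.0 * (np.diag(R.sum(axis=1)) - R) @ X   # gradient of f with respect to X
    return f, grad.ravel()

x0 = rng.normal(size=n * d)
res = minimize(objective, x0, jac=True, method="L-BFGS-B")
X_hat = res.x.reshape(n, d)

rel = np.linalg.norm(edm(X_hat) - D_true) / np.linalg.norm(D_true)
print(f"relative EDM error over all entries: {rel:.3e}")
```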
We demonstrate the feasibility of a constrained optimization approach for quantitative phase profiling. High-quality phase recovery from a near on-axis hologram is made possible with the method.
Analysis of two-dimensional signals and systems; foundation of scalar diffraction theory; Fresnel and Fraunhofer diffraction; wave-optics analysis of coherent optical systems; frequency analysis of optical imaging systems; wavefront modulation; analog optical information processing; holography. Appendices: delta function and Fourier transform theorems; introduction to paraxial geometrical optics; polarization and Jones matrices.
This chapter develops a theoretical analysis of the convex programming method for recovering a structured signal from independent random linear measurements. This technique delivers bounds for the sampling complexity that are similar to recent results for standard Gaussian measurements, but the argument applies to a much wider class of measurement ensembles. To demonstrate the power of this approach, the chapter presents a short analysis of phase retrieval by trace-norm minimization. The key technical tool is a framework, due to Mendelson and coauthors, for bounding a nonnegative empirical process.
eng_Latn
705
We present a new optimization procedure which is particularly suited for the solution of second-order kernel methods such as Kernel-PCA. Common to these methods is that there is a cost function to be optimized under a positive definite quadratic constraint, which bounds the solution. For example, in Kernel-PCA the constraint provides unit-length and orthogonal (in feature space) principal components. The cost function is often quadratic, which allows the problem to be solved as a generalized eigenvalue problem. However, in contrast to Support Vector Machines, which employ box constraints, quadratic constraints usually do not lead to sparse solutions. Here we give up the structure of the generalized eigenvalue problem in favor of a non-quadratic regularization term added to the cost function, which enforces sparse solutions. To optimize this more 'complicated' cost function, we introduce a modified conjugate gradient descent method. Starting from an admissible point, all iterations are carried out inside the subspace of admissible solutions, which is defined by the hyper-ellipsoidal constraint surface.
Note: Includes bibliographical references, 3 appendixes and 2 indexes.- Diskette v 2.06, 3.5''[1.44M] for IBM PC, PS/2 and compatibles [DOS] Reference Record created on 2004-09-07, modified on 2016-08-08
Berzelius failed to make use of Faraday's electrochemical laws in his laborious determination of equivalent weights.
eng_Latn
706
The electrocardiographic inverse problem of computing epicardial potentials from multi-electrode body-surface ECG measurements is an ill-posed problem. Tikhonov regularization is commonly employed, which imposes a penalty on the L2-norm of the potentials (zero-order) or of their derivatives. Previous work has indicated superior results using the L2-norm of the normal derivative of the solution (a first-order regularization). However, an L2-norm penalty function can cause considerable smoothing of the solution. Here, we use the L1-norm of the normal derivative of the potential as a penalty function. L1-norm solutions were compared to zero-order and first-order L2-norm Tikhonov solutions and to measured 'gold standards' in previous experiments with isolated canine hearts. Solutions with the L1-norm penalty function (average relative error [RE] = 0.36) were more accurate than L2-norm solutions (average RE = 0.62). In addition, the L1-norm method localized epicardial pacing sites with better accuracy (3.8 +/- 1.5 mm) than the L2-norm method (9.2 +/- 2.6 mm) during pacing in five pediatric patients with congenital heart disease. In a pediatric patient with Wolff-Parkinson-White syndrome, the L1-norm method also detected and localized two distinct areas of early activation around the mitral valve annulus, indicating the presence of two left-sided pathways which were not distinguished using L2 regularization.
Non-invasive reconstruction of infarcts inside the heart from ECG signals is an important and difficult problem due to the need to solve a severely ill-posed inverse problem. To overcome this ill-posedness, various sparse regularization techniques have been proposed and evaluated for detecting epicardial and transmural infarcts. However, the performance of sparse methods in detecting non-transmural, especially endocardial infarcts, is not fully explored. In this paper, we first show that the detection of non-transmural endocardial infarcts poses severe difficulty to the prevalent algorithms. Subsequently, we propose a novel sparse regularization technique based on a variational approximation of L0 norm. In a set of simulation experiments considering transmural and endocardial infarcts, we compare the presented method with total variation minimization and L1 norm based regularization techniques. Experiment results demonstrated that the presented method outperformed prevalent algorithms by a large margin, particularly when infarction is entirely on the endocardium.
We derive sharp performance bounds for least squares regression with L1 regularization from parameter estimation accuracy and feature selection quality perspectives. The main result proved for L1 regularization extends a similar result in [Ann. Statist. 35 (2007) 2313-2351] for the Dantzig selector. It gives an affirmative answer to an open question in [Ann. Statist. 35 (2007) 2358-2364]. Moreover, the result leads to an extended view of feature selection that allows less restrictive conditions than some recent work. Based on the theoretical insights, a novel two-stage L1-regularization procedure with selective penalization is analyzed. It is shown that if the target parameter vector can be decomposed as the sum of a sparse parameter vector with large coefficients and another less sparse vector with relatively small coefficients, then the two-stage procedure can lead to improved performance.
eng_Latn
707
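A small sketch in the spirit of the two-stage L1 idea in the last abstract of row 707: a first-stage lasso selects coefficients, and a second stage refits without penalty on the selected support. This is a simplified stand-in for the paper's selective-penalization procedure, and the data, penalty level, and threshold are made up.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(1)
n, p, s = 200, 500, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:s] = rng.uniform(1.0, 3.0, size=s)        # a few large "signal" coefficients
y = X @ beta + 0.5 * rng.normal(size=n)

# Stage 1: ordinary lasso over all coefficients
stage1 = Lasso(alpha=0.1).fit(X, y)
support = np.flatnonzero(np.abs(stage1.coef_) > 1e-3)

# Stage 2: unpenalized refit on the selected support
# (a simplified stand-in for the selective-penalization step)
stage2 = LinearRegression().fit(X[:, support], y)

beta_hat = np.zeros(p)
beta_hat[support] = stage2.coef_
print("selected", len(support), "features;",
      "estimation error", np.linalg.norm(beta_hat - beta).round(3))
```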
In this paper, we investigate the methodological issue of determining the number of state variables required for options pricing. After showing the inadequacy of the principal component analysis approach, which is commonly used in the literature, we adopt a nonparametric regression technique with nonlinear principal components extracted from the implied volatilities of various moneyness and maturities as proxies for the transformed state variables. The methodology is applied to the prices of S&P 500 index options from the period 1996--2005. We find that, in addition to the index value itself, two state variables, approximated by the first two nonlinear principal components, are adequate for pricing the index options and fitting the data in both time series and cross sections.
Principal component analysis is one of the most widely applied tools in order to summarize common patterns of variation among variables. Several studies have investigated the ability of individual methods, or compared the performance of a number of methods, in determining the number of components describing common variance of simulated data sets. We identify a number of shortcomings related to these studies and conduct an extensive simulation study where we compare a larger number of rules available and develop some new methods. In total we compare 20 stopping rules and propose a two-step approach that appears to be highly effective. First, a Bartlett's test is used to test the significance of the first principal component, indicating whether or not at least two variables share common variation in the entire data set. If significant, a number of different rules can be applied to estimate the number of non-trivial components to be retained. However, the relative merits of these methods depend on whether data contain strongly correlated or uncorrelated variables. We also estimate the number of non-trivial components for a number of field data sets so that we can evaluate the applicability of our conclusions based on simulated data.
An O(n^3) mathematically non-iterative heuristic procedure that needs no artificial variable is presented for solving linear programming problems. An optimality test is included. Numerical experiments depict the utility/scope of such a procedure.
eng_Latn
708
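A rough sketch of the two-step idea in the stopping-rule abstract of row 708: a Bartlett-type sphericity test first checks whether any variables share common variation, and only then is a retention rule applied (here the broken-stick rule, one of the many rules such comparisons include). The test statistic is the standard Bartlett sphericity formula; pairing it with the broken-stick rule and all data settings are illustrative assumptions.

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(X):
    """Bartlett's test that the correlation matrix is the identity."""
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    stat = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
    dof = p * (p - 1) / 2.0
    return stat, chi2.sf(stat, dof)

def broken_stick(eigvals):
    """Number of leading components whose variance share exceeds the broken-stick share."""
    p = len(eigvals)
    share = eigvals / eigvals.sum()
    bstick = np.array([np.sum(1.0 / np.arange(k, p + 1)) / p for k in range(1, p + 1)])
    keep = share > bstick
    return int(np.argmin(keep)) if not keep.all() else p

rng = np.random.default_rng(2)
n, p = 300, 8
latent = rng.normal(size=(n, 2))                       # two shared latent factors
X = latent @ rng.normal(size=(2, p)) + 0.7 * rng.normal(size=(n, p))

stat, pval = bartlett_sphericity(X)
if pval < 0.05:                                        # step 1: any shared variation at all?
    eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
    print("components retained (broken stick):", broken_stick(eigvals))
else:
    print("no significant shared variation; retain 0 components")
```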
We provide a full characterization of the oblique projector U(VU)†V in the general case where the range of U and the null space of V are not complementary subspaces. We discuss the new result in the context of constrained least squares minimization, which finds many applications in engineering and statistics.
A singular or rectangular matrix does not have an inverse in the usual sense. Nevertheless a matrix having properties which are closely akin to those of an inverse may be defined for such matrices. This matrix, the pseudoinverse or generalized inverse, has hitherto been uniquely defined for any given matrix. In this paper the concept of the pseudoinverse is widened so as to admit, for any given matrix, of a class of pseudoinverses from which that member may be uniquely selected which has the most convenient properties for a particular application. The conventional pseudoinverse is included in the widened definition. Much of the analysis can be applied to bounded linear operators on Hilbert space.
UNC-45A is a ubiquitously expressed protein highly conserved throughout evolution. Most of what we currently know about UNC-45A pertains to its role as a regulator of the actomyosin system...
eng_Latn
709
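A quick numpy check of the oblique projector U(VU)†V discussed in row 709, in the generic case where range(U) and null(V) are complementary; matrix sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 6, 3
U = rng.normal(size=(n, k))          # columns span the projector's range
V = rng.normal(size=(k, n))          # null(V) is the complementary subspace (generically)

P = U @ np.linalg.pinv(V @ U) @ V    # oblique projector U (VU)^+ V

print("idempotent:        ", np.allclose(P @ P, P))
print("fixes range(U):    ", np.allclose(P @ U, U))
print("I - P maps into null(V):", np.allclose(V @ (np.eye(n) - P), np.zeros((k, n))))
```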
This paper addresses the question of what exactly is an analogue of the preconditioned steepest descent (PSD) algorithm in the case of a symmetric indefinite system with an SPD preconditioner. We show that a basic PSD-like scheme for an SPD-preconditioned symmetric indefinite system is mathematically equivalent to the restarted PMINRES, where restarts occur after every two steps. A convergence bound is derived. If certain information on the spectrum of the preconditioned system is available, we present a simpler PSD-like algorithm that performs only one-dimensional residual minimization. Our primary goal is to bridge the theoretical gap between optimal (PMINRES) and PSD-like methods for solving symmetric indefinite systems, as well as point out situations where the PSD-like schemes can be used in practice.
A generalized s-term truncated conjugate gradient method of least square type, proposed in [1a, b], is extended to a form more suitable for proving when the truncated version is identical to the full-term version. Advantages of keeping a control term in the truncated version are pointed out. A computationally efficient new algorithm, based on a special inner product with a small demand of storage, is also presented.
Background ::: Only 10% of the up to 15% of patients with advanced Parkinson’s disease (PD) eligible for deep brain stimulation (DBS) are referred to specialized centers. This survey evaluated the reasons for the reluctance of patients and referring physicians regarding DBS.
eng_Latn
710
Symmetric stochastic matrices with a dominant eigenvalue λ and the corresponding eigenvector α appear in many applications. Such matrices can be written as M = λ α α^t + Ē. Thus β = λ α will be the structure vector. When the matrices in such families correspond to the treatments of a base design, we can carry out an ANOVA-like analysis of the action of the treatments in the model on the structure vectors. This analysis can be transversal, when we work with homologous components, and longitudinal, when we consider contrasts on the components of each structure vector. The analysis will be briefly considered at the end of our presentation.
This work was partially supported by the Fundacao para a Ciencia e a Tecnologia (Portuguese Foundation for Science and Technology) through the project UID/MAT/00297/2013 (Centro de Matematica e Aplicacoes).
Berzelius failed to make use of Faraday's electrochemical laws in his laborious determination of equivalent weights.
eng_Latn
711
In this paper it was investigated if any genotypic footprints from the fat mass and obesity associated (FTO) SNP could be found in 600 MHz 1H CPMG NMR profiles of around 1,000 human plasma samples from healthy Danish twins. The problem was addressed with a combination of univariate and multivariate methods. The NMR data was substantially compressed using principal component analysis or multivariate curve resolution-alternating least squares with focus on chemically meaningful feature selection reflecting the nature of chemical signals in an NMR spectrum. The possible existence of an FTO signature in the plasma samples was investigated at the subject level using supervised multivariate classification in the form of extended canonical variate analysis, classification tree modeling and Lasso (L1) regularized linear logistic regression model (GLMNET). Univariate hypothesis testing of peak intensities was used to explore the genotypic effect on the plasma at the population level. The multivariate classification approaches indicated poor discriminative power of the metabolic profiles whereas univariate hypothesis testing provided seven spectral regions with p < 0.05. Applying false discovery rate control, no reliable markers could be identified, which was confirmed by test set validation. We conclude that it is very unlikely that an FTO-correlated signal can be identified in these 1H CPMG NMR plasma metabolic profiles and speculate that high-throughput un-targeted genotype-metabolic correlations will in many cases be a difficult path to follow.
We develop fast algorithms for estimation of generalized linear models with convex penalties. The models include linear regression, two-class logistic regression, and multinomial regression problems while the penalties include l1 (the lasso), l2 (ridge regression) and mixtures of the two (the elastic net). The algorithms use cyclical coordinate descent, computed along a regularization path. The methods can handle large problems and can also deal efficiently with sparse features. In comparative timings we find that the new algorithms are considerably faster than competing methods.
We characterize stability under composition, inversion, and solution of ordinary differential equations for ultradifferentiable classes, and prove that all these stability properties are equivalent.
eng_Latn
712
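A scikit-learn sketch of the penalized-GLM fitting described in the second abstract of row 712 (lasso, ridge, and elastic-net fits computed by coordinate descent along a regularization path). The data, penalty strength, and mixing parameter are made up; this illustrates the model class, not the glmnet implementation itself.

```python
import numpy as np
from sklearn.linear_model import ElasticNet, lasso_path

rng = np.random.default_rng(4)
n, p = 120, 300
X = rng.normal(size=(n, p))
coef = np.zeros(p)
coef[:5] = [2.0, -1.5, 1.0, 0.8, -0.6]
y = X @ coef + 0.3 * rng.normal(size=n)

# Single elastic-net fit: alpha is the overall penalty, l1_ratio mixes l1 and l2
model = ElasticNet(alpha=0.05, l1_ratio=0.7, max_iter=10_000).fit(X, y)
print("non-zero coefficients:", np.sum(model.coef_ != 0))

# Full lasso regularization path (coordinate descent with warm starts)
alphas, coefs, _ = lasso_path(X, y, n_alphas=50)
print("path computed for", len(alphas), "alpha values")
```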
Background ::: Recent high throughput technologies have been applied for collecting heterogeneous biomedical omics datasets. Computational analysis of the multi-omics datasets could potentially reveal deep insights for a given disease. Most existing clustering methods by multi-omics data assume strong consistency among different sources of datasets, and thus may lose efficacy when the consistency is relatively weak. Furthermore, they could not identify the conflicting parts for each view, which might be important in applications such as cancer subtype identification.
Multi-view clustering, which seeks a partition of the data in multiple views that often provide complementary information to each other, has received considerable attention in recent years. In real-life clustering problems, the data in each view may have considerable noise. However, existing clustering methods blindly combine the information from multi-view data with possibly considerable noise, which often degrades their performance. In this paper, we propose a novel Markov chain method for Robust Multi-view Spectral Clustering (RMSC). Our method has a flavor of low-rank and sparse decomposition, where we firstly construct a transition probability matrix from each single view, and then use these matrices to recover a shared low-rank transition probability matrix as a crucial input to the standard Markov chain method for clustering. The optimization problem of RMSC has a low-rank constraint on the transition probability matrix, and simultaneously a probabilistic simplex constraint on each of its rows. To solve this challenging optimization problem, we propose an optimization procedure based on the Augmented Lagrangian Multiplier scheme. Experimental results on various real world datasets show that the proposed method has superior performance over several state-of-the-art methods for multi-view clustering.
Purpose of Review ::: Metabolomics offers several opportunities for advancement in nutritional cancer epidemiology; however, numerous research gaps and challenges remain. This narrative review summarizes current research, challenges, and future directions for epidemiologic studies of nutritional metabolomics and cancer.
eng_Latn
713
Data in social and behavioral sciences are often hierarchically organized though seldom normal. They typically contain heterogeneous marginal skewnesses and kurtoses. With such data, the normal theory based likelihood ratio statistic is not reliable when evaluating a multilevel structural equation model. Statistics that are not sensitive to sampling distributions are desirable. Six statistics for evaluating a structural equation model are extended from the conventional context to the multilevel context. These statistics are asymptotically distribution free, that is, their distributions do not depend on the sampling distribution when sample size at the highest level is large enough. The performance of these statistics in practical data analysis is evaluated with a Monte Carlo study simulating conditions encountered with real data. Results indicate that each of the statistics is very insensitive to the underlying sampling distributions even with finite sample sizes. However, the six statistics perform quite differently at smaller sample sizes; some over-reject the correct model and some under-reject the correct model. Comparing the six statistics with two existing ones in the multilevel context, two of the six new statistics are recommended for model evaluation in practice.
A fast Fisher scoring algorithm for maximum likelihood estimation in unbalanced mixed models with nested random effects is described. The algorithm uses explicit formulae for the inverse and the determinant of the covariance matrix, given by LaMotte (1972), and avoids inversion of large matrices. Description of the algorithm concentrates on computational aspects for large sets of data. Computational methods for maximum likelihood estimation in unbalanced variance component models were developed by Hemmerle & Hartley (1973) using the W-transformation, and by Patterson & Thompson (1971); see also Thompson (1980). These methods were reviewed by Harville (1977), who also discussed a variety of applications for the variance component models. Computational problems may arise when the number of clusters or random coefficients is large because inversion of very large matrices is required, and so there are severe limitations on the size of practical problems that can be handled. Goldstein (1986) and Aitkin & Longford (1986) present arguments for routine use of variance component models in the educational context, but their arguments are applicable to a much wider range of problems including social surveys, longitudinal data, repeated measurements or experiments and multivariate analysis. The formulation of the general EM algorithm by Dempster, Laird & Rubin (1977) has led to development of alternative computational algorithms for variance component analysis by Dempster, Rubin & Tsutakawa (1981), Mason, Wong & Entwisle (1984) and others. These algorithms avoid inversion of large matrices, but may be very slow on complex problems, a common feature of EM algorithms. Convergence is especially slow when the variance components are small. The present paper gives details of a Fisher scoring algorithm for the unbalanced nested random effects model which converges rapidly and does not require the inversion of large matrices. The algorithm exploits the formulae for the inverse and the determinant of the irregularly patterned covariance matrix of the observations given by LaMotte (1972). The analysis presented by Aitkin & Longford (1986) uses software based on this algorithm. For another example see Longford (1985).
In this short note we prove that if X is a separably rationally connected variety over an algebraically closed field of positive characteristic, then H^1(X, O_X)=0.
eng_Latn
714
We show a simple way in which asymptotic convergence results can be conveyed from a simple Jacobi method to a block Jacobi method. Our pilot methods are the well-known symmetric Jacobi method and the Paardekooper method for reducing a skew-symmetric matrix to the real Schur form. We show resemblance in the quadratic and cubic convergence estimates, but also discrepancies in the asymptotic assumptions. By numerical tests we confirm that our asymptotic assumptions for the Paardekooper method are the most general.
The classical Jacobi algorithm is extended to a unified Lie algebraic approach. The conventional Jacobi algorithms minimize the distance to diagonality; they reduce the off-norm, i.e. the sum of squares of off-diagonal entries. Sorting the diagonal elements after each step would accelerate the convergence, but there are difficulties in applying this sorting to the off-norm that has to be minimized. Using the gradient flow of a trace function [R. W. Brockett, "Dynamical systems that sort lists, diagonalize matrices, and solve linear programming problems", Linear Algebra Appl. 146, 79--91 (1991; Zbl 0719.90045)] as a more appropriate distance measure, matrices can be diagonalized and the eigenvalues can be simultaneously sorted. In this paper the sort-Jacobi algorithm is extended to a large class of structured matrices, besides the semisimple Lie algebra cases, such as e.g. the symmetric, Hermitian, and skew-symmetric eigenvalue problems, and the real and complex singular value decomposition. The local quadratic convergence of the sort-Jacobi method is proved for the regular case, but not for the case in which the eigenvalues or singular values occur in clusters. For these more complicated irregular problems a future publication is announced. The results are illustrated considering the Lie algebra of derivations of the complex octonions.
Background ::: With the trend toward pay-for-performance standards plus the increasing incidence and prevalence of periprosthetic joint infection (PJI), orthopaedic surgeons must reconsider all potential infection control measures. Both airborne and nonairborne bacterial contamination must be reduced in the operating room.
eng_Latn
715
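A plain-numpy cyclic Jacobi sweep for a symmetric matrix, to make the rotation step behind the two abstracts of row 715 concrete. This is the textbook scalar method, not the block or sort-Jacobi variants those abstracts analyze; the sweep count and test matrix are illustrative.

```python
import numpy as np

def jacobi_eigen(A, sweeps=10):
    """Cyclic Jacobi method for a symmetric matrix: returns (eigenvalues, eigenvectors)."""
    A = A.astype(float).copy()
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < 1e-14:
                    continue
                # Rotation angle that annihilates A[p, q]
                theta = 0.5 * np.arctan2(2.0 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q] = s
                J[q, p] = -s
                A = J.T @ A @ J          # similarity transform (full multiply for clarity)
                V = V @ J
    return np.diag(A), V

rng = np.random.default_rng(5)
M = rng.normal(size=(6, 6))
S = (M + M.T) / 2
vals, vecs = jacobi_eigen(S)
print(np.allclose(np.sort(vals), np.linalg.eigvalsh(S)))
```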
This paper presents methods for the identification of noncausal two-dimensional rational systems from impulse response or output covariance data. Consideration of the general class of noncausal systems is motivated by the need for noncausal models in two-dimensional power spectrum estimation. The identification methods studied here rely on a recently proposed notion of system state for noncausal systems.
Publisher Summary: Linear recursive processing is a practical solution to the main drawback of digital technology, which is its slowness for many real-time signal processing applications. It seems reasonable to try natural generalizations of the various equivalent characterizations of linear recursive time-invariant transformations: (1) the algebraic characterization as finite-rank linear operators and (2) the "behavioral" characterization as a transformation described by a finite-order autoregressive form relating the inputs and the outputs. The approach presented in this chapter is to generalize the algebraic characterization, which turns out to be very fertile and allows building a self-contained theory for statistical recursive processing for a large class of double-indexed sequences. The chapter introduces a model for double-indexed space-invariant linear transformations along with a class of double-indexed Gaussian sequences built as the "output" of such linear transformations "driven" by white noise. A realization theory is developed leading to an algebraic characterization of those transformations, an approximation property, and a minimal realization algorithm. After an approximation theorem proving the generality of this class of sequences, the study of their correlation function leads to stochastic identification algorithms and throws some light on spectral factorization properties of double-indexed sequences. The chapter yields a recursive solution to filtering and smoothing problems involving Gaussian sequences.
We consider the one-dimensional symmetric simple exclusion process with nearest neighbor jumps and we prove estimates on the decay of the correlation functions at long times.
eng_Latn
716
We consider the problem of computing the nearest matrix polynomial with a non-trivial Smith Normal Form. We show that computing the Smith form of a matrix polynomial is amenable to numeric computation as an optimization problem. Furthermore, we describe an effective optimization technique to find a nearby matrix polynomial with a non-trivial Smith form. The results are then generalized to include the computation of a matrix polynomial having a maximum specified number of ones in the Smith form (i.e., with a maximum specified McCoy rank). We discuss the geometry and existence of solutions and how our results can be used for an error analysis. We develop an optimization-based approach and demonstrate an iterative numerical method for computing a nearby matrix polynomial with the desired spectral properties. We also describe an implementation of our algorithms and demonstrate the robustness with examples in Maple.
We develop a general framework for perturbation analysis of matrix polynomials. More specifically, we show that the normed linear space L_m(C^{n×n}) of n-by-n matrix polynomials of degree at most m provides a natural framework for perturbation analysis of matrix polynomials in L_m(C^{n×n}). We present a family of natural norms on the space L_m(C^{n×n}) and show that the norms on the spaces C^{m+1} and C^{n×n} play a crucial role in the perturbation analysis of matrix polynomials. We define pseudospectra of matrix polynomials in the general framework of the normed space L_m(C^{n×n}) and show that the pseudospectra of matrix polynomials well known in the literature follow as special cases. We analyze various properties of pseudospectra in the unified framework of the normed space L_m(C^{n×n}). We analyze critical points of backward errors of approximate eigenvalues of matrix polynomials and show that each critical point is a multiple eigenvalue of an appropriately perturbed polynomial. We show that common boundary points of components of pseudospectra of matrix polynomials are critical points. As a consequence, we show that a solution of Wilkinson’s problem for matrix polynomials can be read off from the pseudospectra of matrix polynomials.
The oxidative polymorphism of debrisoquine (DBQ) has been determined in 89 patients with colo-rectal cancer and in 556 normal control subjects. Four patients and 34 controls, with a metabolic ratio >12.6, were classified as poor metabolisers of DBQ (n.s.).
eng_Latn
717
Due to the ill-posedness of inverse problems, it is important to make use of as much of the a priori information as possible while solving such a problem. This information is generally used as constraints to get the appropriate solution. In the usual cases, constraints are turned into penalization of some characteristics of the solution. A common constraint is the regularity of the solution, leading to regularization techniques for inverse problems. Regularization by penalization is affected by two principal problems: (i) as the cost function is composite, the convergence rate of minimization algorithms decreases; (ii) when adequate regularization functions are defined, one has to define weighting parameters between the regularization functions and the objective function to minimize. It is very difficult to obtain optimal weighting parameters since they are strongly dependent on the observed data and the true solution of the problem. There is a third problem that affects regularization based on the penalization of spatial variation. Although the penalization of spatial variation is known to give the best results (gradient penalization and second-order regularization), there is no underlying physical foundation. Penalization of spatial variations leads to a smooth solution that is an equilibrium between good and bad characteristics. Here, we introduce a new approach for regularization of ill-posed inverse problems. Penalization of spatial variations is weighted by an observation-based trust function. The result is a generalized diffusion operator that turns regularization into pseudo covariance operators. All the regularization information is then embedded into a preconditioning operator. On one hand, this method does not need any extra terms in the cost function, and of course is affected neither by the ill-convergence due to a composite cost function, nor by the choice of weighting parameters. On the other hand, the trust function introduced here allows one to take into account observation-based a priori knowledge about the problem. We suggest a simple definition of the trust function for inverse problems in image processing. Preliminary results show a promising method for regularization of inverse problems.
Many formulations of visual reconstruction problems (e.g. optic flow, shape from shading, biomedical motion estimation from CT data) involve the recovery of a vector field. Often the solution is characterized via a generalized spline or regularization formulation using a smoothness constraint. This paper introduces a decomposition of the smoothness constraint into two parts: one related to the divergence of the vector field and one related to the curl or vorticity. This allows one to "tune" the smoothness to the properties of the data. One can, for example, use a high weighting on the smoothness imposed upon the curl in order to preserve the divergent parts of the field. For a particular spline within the family introduced by this decomposition process, we derive an exact solution and demonstrate the approach on examples.
this failure of Europeanization in these countries is correlated with the preferences of the ruling elites in these countries.
eng_Latn
718
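The divergence/curl smoothness decomposition described in the second abstract of row 718 can be written as a single regularization functional; the data term, domain, and weights α, β below are assumed notation for illustration:

\[
\min_{\mathbf{u}} \; \int_\Omega \big\| \mathbf{u} - \mathbf{u}_{\mathrm{data}} \big\|^2 \, dx
\;+\; \alpha \int_\Omega \big( \nabla \cdot \mathbf{u} \big)^2 \, dx
\;+\; \beta \int_\Omega \big\| \nabla \times \mathbf{u} \big\|^2 \, dx .
\]

With a large weight β on the curl term, rotational components are smoothed strongly while divergent structures in the field are comparatively preserved, which is the tuning behaviour the abstract describes.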
Subspace identification methods (SIM) have gone through tremendous development over the last decade. The SIM algorithms are attractive not only because of their numerical simplicity and stability, but also for their state-space form that is very convenient for estimation, filtering, prediction, and control. A few drawbacks, however, have been experienced with SIMs: 1. The estimation accuracy is in general not as good as that of the prediction error methods (PEM), reflected in large variance. 2. The application of SIMs to closed-loop data is still a challenge, even though the data satisfy identifiability conditions for traditional methods such as PEMs. 3. The estimation of B and D is more problematic than that of A and C, which is reflected in the poor estimation of zeros and steady-state gains. In this paper, we are concerned with the reasons why subspace identification approaches exhibit these drawbacks and propose parsimonious SIMs for open-loop and closed-loop applications. First of all, we start with the analysis of the existing subspace formulations using the linear regression formulation. From this analysis we reveal that the typical SIM algorithms actually use a non-parsimonious model formulation, with extra terms in the model that appear to be non-causal. These terms, although conveniently included for performing subspace projection, are the causes of inflated variance in the estimates and are partially responsible for the loss of closed-loop identifiability. We propose two new subspace identification approaches that remove these terms by enforcing a triangular structure of the Toeplitz matrix H_f at every step of the SIM procedure. These approaches are referred to as parsimonious subspace identification methods (PARSIM) as they use a parsimonious model formulation. The first PARSIM method involves a bank of least squares problems in parallel, denoted as parallel PARSIM (PARSIM-P). The second method involves sequential estimation of the bank of least squares problems, denoted as sequential PARSIM (PARSIM-S). Numerical simulations are provided to support the analytical results.
In this paper we reveal that the typical subspace identification algorithms use non-parsimonious model formulations, with extra terms in the model that appear to be non-causal. These terms are the ...
In this paper we reveal that the typical subspace identification algorithms use non-parsimonious model formulations, with extra terms in the model that appear to be non-causal. These terms are the ...
eng_Latn
719
FETI-DP methods for the optimal control problems of linear elasticity problems are considered and numerical results are presented.
The Toolkit for Advanced Optimization (TAO) focuses on the design and implementation of component-based optimization software for the solution of large-scale optimization applications on high-performance architectures. Their approach is motivated by the scattered support for parallel computations and lack of reuse of linear algebra software in currently available optimization software. The TAO design allows the reuse of toolkits that provide lower-level support (parallel sparse matrix data structures, preconditioners, solvers), and thus they are able to build on top of these toolkits instead of having to redevelop code. The advantages in terms of efficiency and development time are significant. The TAO design philosophy uses object-oriented techniques of data and state encapsulation, abstract classes, and limited inheritance to create a flexible optimization toolkit. This chapter provides a short introduction to the design philosophy by describing the objectives in TAO and the importance of this design. Since a major concern in the TAO project is the performance and scalability of optimization algorithms on large problems, they also present some performance results.
Berzelius failed to make use of Faraday's electrochemical laws in his laborious determination of equivalent weights.
eng_Latn
720
This paper addresses the problem of approximate singular value decomposition of large dense matrices that arises naturally in many machine learning applications. We discuss two recently introduced sampling-based spectral decomposition techniques: the Nystrom and the Column-sampling methods. We present a theoretical comparison between the two methods and provide novel insights regarding their suitability for various applications. We then provide experimental results motivated by this theory. Finally, we propose an efficient adaptive sampling technique to select informative columns from the original matrix. This novel technique outperforms standard sampling methods on a variety of datasets.
This paper examines the efficacy of sampling-based low-rank approximation techniques when applied to large dense kernel matrices. We analyze two common approximate singular value decomposition techniques, namely the Nystrom and Column sampling methods. We present a theoretical comparison between these two methods, provide novel insights regarding their suitability for various tasks and present experimental results that support our theory. Our results illustrate the relative strengths of each method. We next examine the performance of these two techniques on the large-scale task of extracting low-dimensional manifold structure given millions of high-dimensional face images. We address the computational challenges of non-linear dimensionality reduction via Isomap and Laplacian Eigenmaps, using a graph containing about 18 million nodes and 65 million edges. We present extensive experiments on learning low-dimensional embeddings for two large face data sets: CMU-PIE (35 thousand faces) and a web data set (18 million faces). Our comparisons show that the Nystrom approximation is superior to the Column sampling method for this task. Furthermore, approximate Isomap tends to perform better than Laplacian Eigenmaps on both clustering and classification with the labeled CMU-PIE data set.
Berzelius failed to make use of Faraday's electrochemical laws in his laborious determination of equivalent weights.
eng_Latn
721
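A compact numpy sketch of the Nyström approximation compared in the two abstracts of row 721: sample a column subset C of the kernel matrix, take the intersection block W, and form K ≈ C W⁺ Cᵀ. The RBF kernel, sample size, and uniform sampling are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(6)
n, d, m = 1000, 20, 100            # data size, dimension, number of sampled columns
X = rng.normal(size=(n, d))

def rbf(A, B, gamma=0.05):
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

idx = rng.choice(n, size=m, replace=False)    # uniformly sampled columns
C = rbf(X, X[idx])                            # n x m block of the kernel matrix
W = C[idx, :]                                 # m x m intersection block

K_nys = C @ np.linalg.pinv(W) @ C.T           # Nystrom approximation of K

K = rbf(X, X)                                 # exact kernel matrix (for comparison only)
err = np.linalg.norm(K - K_nys) / np.linalg.norm(K)
print(f"relative Frobenius error of the Nystrom approximation: {err:.3f}")
```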
We propose optimal dimensionality reduction techniques for the solution of goal-oriented linear-Gaussian inverse problems, where the quantity of interest (QoI) is a function of the inversion parameters. These approximations are suitable for large-scale applications. In particular, we study the approximation of the posterior covariance of the QoI as a low-rank negative update of its prior covariance, and prove optimality of this update with respect to the natural geodesic distance on the manifold of symmetric positive definite matrices. Assuming exact knowledge of the posterior mean of the QoI, the optimality results extend to optimality in distribution with respect to the Kullback-Leibler divergence and the Hellinger distance between the associated distributions. We also propose approximation of the posterior mean of the QoI as a low-rank linear function of the data, and prove optimality of this approximation with respect to a weighted Bayes risk. Both of these optimal approximations avoid the explicit computation of the full posterior distribution of the parameters and instead focus on directions that are well informed by the data and relevant to the QoI. These directions stem from a balance among all the components of the goal-oriented inverse problem: prior information, forward model, measurement noise, and ultimate goals. We illustrate the theory using a high-dimensional inverse problem in heat transfer.
The standard formulations of the Kalman filter (KF) and extended Kalman filter (EKF) require the storage and multiplication of matrices of size n x n, where n is the size of the state space, and the inversion of matrices of size m x m, where m is the size of the observation space. Thus when both n and m are large, implementation issues arise. In this paper, we advocate the use of the limited memory BFGS method (LBFGS) to address these issues. A detailed description of how to use LBFGS within both the KF and EKF methods is given. The methodology is then tested on two examples: the first is large-scale and linear, and the second is small-scale and nonlinear. Our results indicate that the resulting methods, which we will denote LBFGS-KF and LBFGS-EKF, yield results that are comparable with those obtained using KF and EKF, respectively, and can be used on much larger scale problems.
We prove that groups acting geometrically on delta-quasiconvex spaces contain no essential Baumslag-Solitar quotients as subgroups. This implies that they are translation discrete, meaning that the translation numbers of their nontorsion elements are bounded away from zero.
eng_Latn
722
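For the linear-Gaussian setting of the first abstract in row 722, the posterior covariance is the prior covariance minus a (typically low-rank) downdate, and the goal-oriented covariance follows by mapping through the quantity-of-interest operator. The symbols below (forward operator G, QoI map A, covariances Γ) follow the usual convention and are assumptions here; the abstract's contribution is the provably optimal low-rank form of this update, which the identity only hints at:

\[
\Gamma_{\mathrm{pos}} = \Gamma_{\mathrm{pr}} - \Gamma_{\mathrm{pr}} G^{\mathsf T}\big(G\,\Gamma_{\mathrm{pr}} G^{\mathsf T} + \Gamma_{\mathrm{obs}}\big)^{-1} G\,\Gamma_{\mathrm{pr}},
\qquad
\Gamma_{\mathrm{pos},\,\mathrm{QoI}} = A\,\Gamma_{\mathrm{pos}}\,A^{\mathsf T}.
\]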
Multiplicative updates are widely used for nonnegative matrix factorization (NMF) as an efficient computational method. In this paper, we consider a class of constrained optimization problems in which a polynomial function of the product of two matrices is minimized subject to the nonnegativity constraints. These problems are closely related to NMF because the polynomial function covers many error functions used for NMF. We first derive a multiplicative update rule for those problems by using the unified method developed by Yang and Oja. We next prove that a modified version of the update rule has the global convergence property in the sense of Zangwill under certain conditions. This result can be applied to many existing multiplicative update rules for NMF to guarantee their global convergence.
For the purpose of selecting the best sites for installation of wind farms and for increasing the net yield of wind energy, the wind speed must be determined at different positions within a domain of interest. This helps to determine the natural variance/uncertainty in the wind speed, which is very useful for predicting the wind power potential in an area. For this, a huge amount of data (from wind speed sensor measurements) must be processed, and mathematical algorithms are required for rapid reconstruction of the wind field. Non-negative Matrix Factorization (NMF) and Principal Component Analysis (PCA) are presented, which can be applied for reconstructing the wind field around obstruction models using a CFD basis. The absolute reconstruction error tends to increase with an increase in the inlet velocity. The relative accuracy of NMF and PCA is subject to the sampling rate of the measurement, but is not influenced by the distribution of the wind speed sensors around the obstruction model (above a sampling rate of 0.05%). By application of these reconstruction models using WSR, it has been concluded that NMF and PCA can be adequately used to reconstruct the wind field around an obstruction model.
Non-negative matrix factorization (NMF) is a recently developed technique for finding parts-based, linear representations of non-negative data. Although it has successfully been applied in several applications, it does not always result in parts-based representations. In this paper, we show how explicitly incorporating the notion of `sparseness' improves the found decompositions. Additionally, we provide complete MATLAB code both for standard NMF and for our extension. Our hope is that this will further the application of these methods to solving novel data-analysis problems.
eng_Latn
723
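For concreteness, the classical Lee-Seung multiplicative updates for least-squares NMF, the kind of update rule whose convergence the first abstract of row 723 studies. The small epsilon in the denominators is one common way to keep the iterates well defined and is an assumption here, not the paper's modification.

```python
import numpy as np

def nmf_multiplicative(V, r, iters=500, eps=1e-10, seed=0):
    """Lee-Seung multiplicative updates minimizing ||V - W H||_F^2."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r))
    H = rng.random((r, m))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H

rng = np.random.default_rng(7)
V = rng.random((60, 40)) @ rng.random((40, 50))   # nonnegative data (illustrative)
W, H = nmf_multiplicative(V, r=10)
print("relative reconstruction error:",
      round(np.linalg.norm(V - W @ H) / np.linalg.norm(V), 4))
```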
Outlier identification is important in many applications of multivariate analysis, either because there is some specific interest in finding anomalous observations or as a pre-processing task before the application of some multivariate method, in order to preserve the results from the possible harmful effects of those observations. It is also of great interest in discriminant analysis if, when predicting group membership, one wants to have the possibility of labelling an observation as "does not belong to any of the available groups". The identification of outliers in multivariate data is usually based on Mahalanobis distance. The use of robust estimates of the mean and the covariance matrix is advised in order to avoid the masking effect (Rousseeuw and von Zomeren, 1990; Rocke and Woodruff, 1996; Becker and Gather, 1999). However, the performance of these rules is still highly dependent on multivariate normality of the bulk of the data. The aim of the method described here is to remove this dependency. The first version of this method appeared in Santos-Pereira and Pires (2002). In this talk we discuss some refinements and also the relation with a recently proposed similar method (Hardin and Rocke, 2004).
Outlier identification is important in many applications of multivariate analysis, either because there is some specific interest in finding anomalous observations or as a pre-processing task before the application of some multivariate method, in order to preserve the results from the possible harmful effects of those observations. It is also of great interest in supervised classification (or discriminant analysis) if, when predicting group membership, one wants to have the possibility of labelling an observation as "does not belong to any of the available groups". The identification of outliers in multivariate data is usually based on Mahalanobis distance. The use of robust estimates of the mean and the covariance matrix is advised in order to avoid the masking effect (Rousseeuw and Leroy, 1985; Rousseeuw and von Zomeren, 1990; Rocke and Woodruff, 1996; Becker and Gather, 1999). However, the performance of these rules is still highly dependent on multivariate normality of the bulk of the data. The aim of the method described here is to remove this dependence.
Outlier identification is important in many applications of multivariate analysis, either because there is some specific interest in finding anomalous observations or as a pre-processing task before the application of some multivariate method, in order to preserve the results from the possible harmful effects of those observations. It is also of great interest in supervised classification (or discriminant analysis) if, when predicting group membership, one wants to have the possibility of labelling an observation as "does not belong to any of the available groups". The identification of outliers in multivariate data is usually based on Mahalanobis distance. The use of robust estimates of the mean and the covariance matrix is advised in order to avoid the masking effect (Rousseeuw and Leroy, 1985; Rousseeuw and von Zomeren, 1990; Rocke and Woodruff, 1996; Becker and Gather, 1999). However, the performance of these rules is still highly dependent on multivariate normality of the bulk of the data. The aim of the method described here is to remove this dependence.
eng_Latn
724
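A scikit-learn sketch of robust Mahalanobis-distance outlier flagging in the spirit of the abstracts of row 724: estimate location and scatter robustly (minimum covariance determinant) and flag observations beyond a chi-square quantile. The planted contamination and the 0.975 cutoff are illustrative; the chi-square cutoff is also exactly the normality assumption those abstracts aim to relax.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(8)
n, p = 200, 4
X = rng.normal(size=(n, p))
X[:10] += 6.0                                  # plant a few outliers

robust = MinCovDet(random_state=0).fit(X)      # robust location and scatter (MCD)
d2 = robust.mahalanobis(X)                     # squared robust Mahalanobis distances
cutoff = chi2.ppf(0.975, df=p)                 # normality-based threshold

flagged = np.flatnonzero(d2 > cutoff)
print("flagged observations:", flagged[:15], "...")
```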
The left and right inverse eigenvalue problem, which mainly arises in the perturbation analysis of matrix eigenvalue problems and in recursive matters, has some practical applications in engineering and scientific computation. In this paper, we give the solvability conditions of and the general expressions to the left and right inverse eigenvalue problem for the (R,S)-symmetric and (R,S)-skew symmetric solutions. The corresponding best approximation problems for the left and right inverse eigenvalue problem are also solved. That is, given an arbitrary complex n-by-n matrix Ã, find an (R,S)-symmetric (or (R,S)-skew symmetric) matrix Â which is a solution to the left and right inverse eigenvalue problem such that the distance between Ã and Â is minimized in the Frobenius norm. We give an explicit solution to the best approximation problem in the (R,S)-symmetric and (R,S)-skew symmetric solution sets of the left and right inverse eigenvalue problem under the assumption that R = R^* and S = S^*. A numerical example is given to illustrate the effectiveness of our method.
Let P, Q ∈ C^{n×n} be two normal {k+1}-potent matrices, i.e., PP* = P*P, P^{k+1} = P, QQ* = Q*Q, Q^{k+1} = Q, k ∈ N. A matrix A ∈ C^{n×n} is referred to as generalized reflexive with respect to the two normal {k+1}-potent matrices P and Q if and only if A = PAQ. The set of all n × n generalized reflexive matrices which rely on the matrices P and Q is denoted by GR^{n×n}(P,Q). The left and right inverse eigenproblem for such matrices asks us to find a matrix A ∈ GR^{n×n}(P,Q) containing a given part of the left and right eigenvalues and the corresponding left and right eigenvectors. In this paper, first, necessary and sufficient conditions for the problem to be solvable are obtained. A general representation of the solution is presented. Then an expression of the solution for the optimal Frobenius norm approximation problem is exploited. A stability analysis of the optimal approximate solution, which has scarcely been considered in the existing literature, is also developed.
Berzelius failed to make use of Faraday's electrochemical laws in his laborious determination of equivalent weights.
eng_Latn
725
In this paper, we analyze an algorithm to compute a low-rank approximation of the similarity matrix S introduced by Blondel et al. in [1]. This problem can be reformulated as an optimization problem of a continuous function Φ(S) = tr(S^T M^2(S)), where S is constrained to have unit Frobenius norm, and M^2 is a non-negative linear map. We restrict the feasible set to the set of matrices of unit Frobenius norm with either k nonzero identical singular values or at most k nonzero (not necessarily identical) singular values. We first characterize the stationary points of the associated optimization problems and further consider iterative algorithms to find one of them. We analyze the convergence properties of our algorithm and prove that accumulation points are stationary points of Φ(S). We finally compare our method in terms of speed and accuracy to the full rank algorithm proposed in [1].
In this paper, we go over a number of optimization problems defined on a manifold in order to compare two matrices, possibly of different order. We consider several variants and show how these problems relate to various specific problems from the literature.
We prove that groups acting geometrically on delta-quasiconvex spaces contain no essential Baumslag-Solitar quotients as subgroups. This implies that they are translation discrete, meaning that the translation numbers of their nontorsion elements are bounded away from zero.
eng_Latn
726
We survey some computationally efficient formulas to estimate the number of integer or 0-1 points in polytopes. In many interesting cases, the formulas are asymptotically exact when the dimension of the polytopes grows. The polytopes are defined as the intersection of the non-negative orthant or the unit cube with an affine subspace, while the main ingredient of the formulas comes from solving a convex optimization problem on the polytope.
We present three different upper bounds for Kronecker coefficients $g(\lambda,\mu,\nu)$ in terms of Kostka numbers, contingency tables and Littlewood--Richardson coefficients. We then give various examples, asymptotic applications, and compare them with existing lower bounds.
The oxidative polymorphism of debrisoquine (DBQ) has been determined in 89 patients with colo-rectal cancer and in 556 normal control subjects. Four patients and 34 controls, with a metabolic ratio >12.6, were classified as poor metabolisers of DBQ (n.s.).
eng_Latn
727
This article investigates a new procedure to estimate the influence of each variable of a given function defined on a high-dimensional space. More precisely, we are concerned with describing a function of a large number $p$ of parameters that depends only on a small number $s$ of them. Our proposed method is an unconstrained $\ell_{1}$-minimization based on Sobol's method. We prove that, with only $\mathcal O(s\log p)$ evaluations of $f$, one can find which parameters are relevant.
We derive the l∞ convergence rate simultaneously for Lasso and Dantzig estimators in a high-dimensional linear regression model under a mutual coherence assumption on the Gram matrix of the design and two different assumptions on the noise: Gaussian noise and general noise with finite variance. Then we prove that simultaneously the thresholded Lasso and Dantzig estimators with a proper choice of the threshold enjoy a sign concentration property provided that the non-zero components of the target vector are not too small.
Berzelius failed to make use of Faraday's electrochemical laws in his laborious determination of equivalent weights.
eng_Latn
728
The error analysis for computing the QR decomposition by Givens transformations was given originally by Wilkinson for n × n square matrices, and later by Gentleman for n × p (p ⩽ n) tall thin matrices. The derivations were sufficiently messy that results were quoted by analogy to the derivation of a specific case. A certain lemma makes possible a much simpler derivation, which incidentally substantially tightens the bound. Moreover, it applies to variants of the method other than those originally considered, and suggests why observed errors are even less than this new bound.
Given an n x p matrix X with p < n, matrix triangularization, or triangularization for short, is to determine an n x n nonsingular matrix M such that MX = [R; 0], where R is p x p upper triangular, and furthermore to compute the entries in R. By triangularization, many matrix problems are reduced to the simpler problem of solving triangular linear systems (see, for example, Stewart). When X is a square matrix, triangularization is the major step in almost all direct methods for solving general linear systems. When M is restricted to be an orthogonal matrix Q, triangularization is also the key step in computing least squares solutions by the QR decomposition, and in computing eigenvalues by the QR algorithm. Triangularization is computationally expensive, however. Algorithms for performing it typically require n^3 operations on general n x n matrices. As a result, triangularization has become a bottleneck in some real-time applications. This paper sketches unified concepts of using systolic arrays to perform real-time triangularization for both general and band matrices. (Examples and general discussions of systolic architectures can be found in other papers.) Under the same framework, systolic triangularization arrays are derived for the solution of linear systems with pivoting and for least squares computations. More detailed descriptions of the suggested systolic arrays will appear in the final version of the paper.
The oxidative polymorphism of debrisoquine (DBQ) has been determined in 89 patients with colo-rectal cancer and in 556 normal control subjects. Four patients and 34 controls, with a metabolic ratio >12.6, were classified as poor metabolisers of DBQ (n.s.).
eng_Latn
729
Functional linear models are useful in longitudinal data analysis. They include many classical and recently proposed statistical models for longitudinal data and other functional data. Recently, smoothing spline and kernel methods have been proposed for estimating their coefficient functions nonparametrically but these methods are either intensive in computation or inefficient in performance. To overcome these drawbacks, in this paper, a simple and powerful two-step alternative is proposed. In particular, the implementation of the proposed approach via local polynomial smoothing is discussed. Methods for estimating standard deviations of estimated coefficient functions are also proposed. Some asymptotic results for the local polynomial estimators are established. Two longitudinal data sets, one of which involves time-dependent covariates, are used to demonstrate the approach proposed. Simulation studies show that our two-step approach improves the kernel method proposed by Hoover and co-workers in several aspects such as accuracy, computational time and visual appeal of the estimators.
The varying-coefficient model is flexible and powerful for modeling the dynamic changes of regression coefficients. It is important to identify significant covariates associated with response variables, especially for high-dimensional settings where the number of covariates can be larger than the sample size. We consider model selection in the high-dimensional setting and adopt difference convex programming to approximate the L0 penalty, and we investigate the global optimality properties of the varying-coefficient estimator. The challenge of the variable selection problem here is that the dimension of the nonparametric form for the varying-coefficient modeling could be infinite, in addition to dealing with the high-dimensional linear covariates. We show that the proposed varying-coefficient estimator is consistent, enjoys the oracle property and achieves an optimal convergence rate for the non-zero nonparametric components for high-dimensional data. Our simulations and numerical examples indicate that the difference convex algorithm is efficient using the coordinate decent algorithm, and is able to select the true model at a higher frequency than the least absolute shrinkage and selection operator (LASSO), the adaptive LASSO and the smoothly clipped absolute deviation (SCAD) approaches.
Berzelius failed to make use of Faraday's electrochemical laws in his laborious determination of equivalent weights.
eng_Latn
730
The Hidden Markov Model (HMM) has already been used to classify EEG signals in the field of Brain Computer Interfaces (BCIs). In many conventional methods, the Expectation-Maximization (EM) algorithm is used to estimate the HMM parameters for EEG classification. The EM algorithm is an iterative method for finding Maximum Likelihood (ML) or Maximum A Posteriori (MAP) estimates of parameters in statistical models. However, it can easily be trapped in a shallow local optimum. Recently, large margin HMMs have been used to obtain the HMM parameters based on the principle of maximizing the minimum margin, and they have been applied successfully in speech recognition. Inspired by this, we propose to use the large margin HMM method for the classification of motor imagery EEG signals by establishing HMMs for different types of signals. Experimental results demonstrate that HMM parameter estimation via the new method can significantly improve the accuracy of motor imagery classification.
Convex optimization methods are widely used in the design and analysis of communication systems and signal processing algorithms. This tutorial surveys some of recent progress in this area. The tutorial contains two parts. The first part gives a survey of basic concepts and main techniques in convex optimization. Special emphasis is placed on a class of conic optimization problems, including second-order cone programming and semidefinite programming. The second half of the survey gives several examples of the application of conic programming to communication problems. We give an interpretation of Lagrangian duality in a multiuser multi-antenna communication problem; we illustrate the role of semidefinite relaxation in multiuser detection problems; we review methods to formulate robust optimization problems via second-order cone programming techniques
Berzelius failed to make use of Faraday's electrochemical laws in his laborious determination of equivalent weights.
eng_Latn
731
The computation of penalized quantile regression estimates is often computationally intensive in high dimensions. In this paper we propose a coordinate descent algorithm for computing the penalized smooth quantile regression (cdaSQR) with convex and nonconvex penalties. The cdaSQR approach is based on the approximation of the objective check function, which is not differentiable at zero, by a modified check function which is differentiable at zero. Then, using the majorization-minimization trick of the gcdnet algorithm (Yang and Zou, J Comput Graph Stat 22(2):396–415, 2013), we update each coefficient simply and efficiently. In our implementation, we consider the convex penalties $\ell_1+\ell_2$ and the nonconvex penalties SCAD (or MCP) $+\,\ell_2$. We establish the convergence property of the cdaSQR with the $\ell_1+\ell_2$ penalty. The numerical results show that our implementation is an order of magnitude faster than its competitors. Using simulations we compare the speed of our algorithm to its competitors. Finally, the performance of our algorithm is illustrated on three real data sets from diabetes, leukemia and Bardet–Biedl syndrome gene expression studies.
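To show the flavour of such coordinate-wise updates, here is a sketch for the simpler case of an elastic-net ($\ell_1+\ell_2$) penalized least-squares objective rather than the smoothed check function of cdaSQR; the data, penalty levels, and sweep count are illustrative assumptions.

import numpy as np

def coord_descent_enet(X, y, lam1, lam2, n_sweeps=200):
    """Cyclic coordinate descent for (1/(2n))*||y - X b||^2 + lam1*||b||_1 + (lam2/2)*||b||^2."""
    n, p = X.shape
    b = np.zeros(p)
    r = y.astype(float).copy()                 # current residual y - X @ b
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * b[j]                # remove coordinate j from the fit
            rho = X[:, j] @ r / n              # partial-residual correlation
            b[j] = np.sign(rho) * max(abs(rho) - lam1, 0.0) / (col_sq[j] + lam2)   # soft-threshold update
            r -= X[:, j] * b[j]                # put the updated coordinate back
    return b

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 30))
beta_true = np.zeros(30)
beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true + 0.1 * rng.normal(size=100)
print(np.round(coord_descent_enet(X, y, lam1=0.1, lam2=0.01)[:5], 2))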
This paper considers robust modeling of the survival time for cancer patients. Accurate prediction can be helpful for developing therapeutic and care strategies. We propose a unified Expectation-Maximization approach combined with the L1-norm penalty to perform variable selection and obtain parameter estimation simultaneously for the accelerated failure time model with right-censored survival data. Our approach can be used with general loss functions, and reduces to the well-known Buckley-James method when the squared-error loss is used without regularization. To mitigate the effects of outliers and heavy-tailed noise in the real application, we advocate the use of robust loss functions under our proposed framework. Simulation studies are conducted to evaluate the performance of the proposed approach with different loss functions, and an application to an ovarian carcinoma study is provided. Meanwhile, we extend our approach by incorporating the group structure of covariates.
We prove that groups acting geometrically on delta-quasiconvex spaces contain no essential Baumslag-Solitar quotients as subgroups. This implies that they are translation discrete, meaning that the translation numbers of their nontorsion elements are bounded away from zero.
eng_Latn
732
Consider the task of recovering an unknown $n$-vector from phaseless linear measurements. This task is the phase retrieval problem. Through the technique of lifting, this nonconvex problem may be convexified into a semidefinite rank-one matrix recovery problem, known as PhaseLift. Under a linear number of exact Gaussian measurements, PhaseLift recovers the unknown vector exactly with high probability. Under noisy measurements, the solution to a variant of PhaseLift has error proportional to the $\ell_1$ norm of the noise. In the present paper, we study the robustness of this variant of PhaseLift to a case with noise and gross, arbitrary corruptions. We prove that PhaseLift can tolerate a small, fixed fraction of gross errors, even in the highly underdetermined regime where there are only $O(n)$ measurements. The lifted phase retrieval problem can be viewed as a rank-one robust Principal Component Analysis (PCA) problem under generic rank-one measurements. From this perspective, the proposed convex program is simpler than the semidefinite version of the sparse-plus-low-rank formulation standard in the robust PCA literature. Specifically, the rank penalization through a trace term is unnecessary, and the resulting optimization program has no parameters that need to be chosen. The present work also achieves the information-theoretically optimal scaling of $O(n)$ measurements without the additional logarithmic factors that appear in existing general robust PCA results.
Recent work has demonstrated the effectiveness of gradient descent for directly estimating high-dimensional signals via nonconvex optimization in a globally convergent manner using a proper initialization. However, the performance is highly sensitive in the presence of adversarial outliers that may take arbitrary values. In this chapter, we introduce the median-Truncated Gradient Descent (median-TGD) algorithm to improve the robustness of gradient descent against outliers, and apply it to two celebrated problems: low-rank matrix recovery and phase retrieval. Median-TGD truncates the contributions of samples that deviate significantly from the sample median in each iteration in order to stabilize the search direction. Encouragingly, when initialized in a neighborhood of the ground truth known as the basin of attraction, median-TGD converges to the ground truth at a linear rate under Gaussian designs with a near-optimal number of measurements, even when a constant fraction of the measurements are arbitrarily corrupted. In addition, we introduce a new median-truncated spectral method that ensures an initialization in the basin of attraction. The stability against additional dense bounded noise is also established. Numerical experiments are provided to validate the superior performance of median-TGD.
Holography has demonstrated potential to achieve a wide field of view, focus supporting, optical see-through augmented reality display in an eyeglasses form factor. Although phase modulating spatial light modulators are becoming available, the phase-only hologram generation algorithms are still imprecise resulting in severe artifacts in the reconstructed imagery. Since the holographic phase retrieval problem is non-linear and non-convex and computationally expensive with the solutions being non-unique, the existing methods make several assumptions to make the phase-only hologram computation tractable. In this work, we deviate from any such approximations and solve the holographic phase retrieval problem as a quadratic problem using complex Wirtinger gradients and standard first-order optimization methods. Our approach results in high-quality phase hologram generation with at least an order of magnitude improvement over existing state-of-the-art approaches.
eng_Latn
733
Searching for structural similarities of proteins has a central role in bioinformatics. Most tasks in bioinformatics depend on investigating a homologous protein's sequence or structure; these tasks vary from predicting the protein structure to determining sites in the protein structure where a drug can be attached. The protein structure comparison problem is extremely important in many tasks: it can be used for determining the function of a protein, for clustering a given set of proteins by their structure, and for assessment in protein fold prediction. Protein Structure Indexing using Suffix Array and Wavelet (PSISAW) is a hybrid approach that provides the ability to retrieve similarities of proteins based on their structures. Indexing the protein structure is one approach to searching for protein similarities. The suffix arrays are used to index the protein structure and the wavelet is used to compress the indexed database. Compressing the indexed database is intended to make searching faster and memory usage lower, but it affects the accuracy with an accepted rate of error. The experimental results, which are based on the Structural Classification of Proteins (SCOP) dataset, show that the proposed approach outperforms existing similar techniques in memory utilization and searching speed. The results show an enhancement in memory usage by a factor of 50%.
Two point sets {pi} and {p'i}; i = 1, 2,..., N are related by p'i = Rpi + T + Ni, where R is a rotation matrix, T a translation vector, and Ni a noise vector. Given {pi} and {p'i}, we present an algorithm for finding the least-squares solution of R and T, which is based on the singular value decomposition (SVD) of a 3 × 3 matrix. This new algorithm is compared to two earlier algorithms with respect to computer time requirements.
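A compact numpy sketch of this SVD-based least-squares fit follows; the determinant-sign correction guarding against reflections follows the usual refinement of the method, and the synthetic point sets are illustrative.

import numpy as np

def fit_rigid(P, P_prime):
    """Least-squares R, T with P_prime ≈ R @ P + T for 3 x N point sets (SVD method)."""
    cp, cq = P.mean(axis=1, keepdims=True), P_prime.mean(axis=1, keepdims=True)
    H = (P - cp) @ (P_prime - cq).T                             # 3 x 3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))]) # guard against reflections
    R = Vt.T @ D @ U.T
    T = cq - R @ cp
    return R, T

rng = np.random.default_rng(3)
P = rng.normal(size=(3, 20))
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))               # a random orthogonal matrix
R_true *= np.sign(np.linalg.det(R_true))                        # make it a proper rotation
P_prime = R_true @ P + np.array([[0.5], [-1.0], [2.0]]) + 0.01 * rng.normal(size=(3, 20))
R_est, T_est = fit_rigid(P, P_prime)
print(np.allclose(R_est, R_true, atol=0.05))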
Berzelius failed to make use of Faraday's electrochemical laws in his laborious determination of equivalent weights.
eng_Latn
734
For a nonnegative irreducible matrix A with spectral radius ϱ, this paper is concerned with the determination of the unique normalized Perron vector π which satisfies Aπ = ϱπ, π > 0, Σ_j π_j = 1. It is explained how to uncouple a large matrix A into two or more smaller matrices—say P_11, P_22, …, P_kk—such that this sequence of smaller matrices has the following properties: (1) Each P_ii is also nonnegative and irreducible, so that each P_ii has a unique Perron vector π^(i). (2) Each P_ii has the same spectral radius ϱ as A. (3) It is possible to determine the π^(i)'s completely independently of each other, so that one can execute the computation of the π^(i)'s in parallel. (4) It is easy to couple the smaller Perron vectors π^(i) back together in order to produce the Perron vector π for the original matrix A.
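The object being computed can be illustrated with a plain power iteration on a small nonnegative irreducible matrix (the uncoupling into the P_ii blocks is not reproduced here); the test matrix is an arbitrary choice.

import numpy as np

def perron_vector(A, n_iter=500):
    """Normalized Perron vector of a nonnegative irreducible matrix by power iteration."""
    pi = np.full(A.shape[0], 1.0 / A.shape[0])
    for _ in range(n_iter):
        pi = A @ pi
        pi /= pi.sum()                 # keep the l1 normalization sum_j pi_j = 1
    rho = (A @ pi)[0] / pi[0]          # quotient on any component gives the spectral radius
    return pi, rho

A = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [2.0, 1.0, 0.0]])
pi, rho = perron_vector(A)
print(np.round(pi, 4), round(rho, 4), np.allclose(A @ pi, rho * pi, atol=1e-8))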
For a nonnegative irreducible matrix A, this paper is concerned with the estimation and determination of the unique Perron root or spectral radius of A. We present a new method that utilizes the relation between Perron roots of the nonnegative matrix and its (generalized) Perron complement. Several numerical examples are given to show that our method is effective, at least, for some classes of nonnegative matrices.
For a nonnegative irreducible matrix A, this paper is concerned with the estimation and determination of the unique Perron root or spectral radius of A. We present a new method that utilizes the relation between Perron roots of the nonnegative matrix and its (generalized) Perron complement. Several numerical examples are given to show that our method is effective, at least, for some classes of nonnegative matrices.
eng_Latn
735
Gradient Descent for Gaussian Processes Variance Reduction
A key issue in Gaussian Process modeling is to decide on the locations where measurements are going to be taken. A good set of observations will provide a better model. The current state of the art selects such a set so as to minimize the posterior variance of the Gaussian Process by exploiting submodularity. We propose a Gradient Descent procedure to iteratively improve an initial set of observations so as to minimize the posterior variance directly. The performance of the technique is analyzed under different conditions by varying the number of measurement points, the dimensionality of the domain and the hyperparameters of the Gaussian Process. Results show the applicability of the technique and the clear improvements that can be obtained under different settings.
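A small numpy sketch in the same spirit, under illustrative assumptions: a 1-D domain, a squared-exponential kernel with fixed hyperparameters, and finite-difference gradients instead of analytic ones. It nudges an initially clumped set of measurement locations downhill on the total posterior variance over a grid; it is not the paper's procedure.

import numpy as np

def sq_exp(a, b, ell=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def total_posterior_variance(obs, grid, noise=1e-3):
    """Sum over the grid of the GP posterior variance given measurements at `obs`."""
    K = sq_exp(obs, obs) + noise * np.eye(len(obs))
    Ks = sq_exp(grid, obs)
    post_var = 1.0 - np.einsum('ij,ij->i', Ks @ np.linalg.inv(K), Ks)   # prior variance k(x,x) = 1
    return post_var.sum()

grid = np.linspace(0.0, 1.0, 200)
obs = np.array([0.10, 0.15, 0.20, 0.90])           # a deliberately clumped initial design
step, eps = 1e-4, 1e-5
for _ in range(400):                               # finite-difference gradient descent on locations
    grad = np.array([(total_posterior_variance(obs + eps * e, grid)
                      - total_posterior_variance(obs - eps * e, grid)) / (2 * eps)
                     for e in np.eye(len(obs))])
    obs = np.clip(obs - step * grad, 0.0, 1.0)
print(np.round(np.sort(obs), 3), round(total_posterior_variance(obs, grid), 2))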
Given $\lambda>0$ and $p\in(0,1)$, we consider the following problem: find $u$ such that $$-\Delta u + \lambda [u]_+^p = 0 \ \text{in } \Omega, \qquad u = 1 \ \text{on } \partial\Omega,$$ where $\Omega\subset\mathbb{R}^2$ is a smooth convex domain. We prove optimal $H^1$ and $L^\infty$ error bounds for the standard continuous piecewise linear Galerkin finite element approximation. In addition we analyse a more practical approximation using numerical integration on the nonlinear term. Finally we consider a modified nonlinear SOR algorithm, which is shown to be globally convergent, for solving the algebraic system derived from the more practical approximation.
eng_Latn
736
Initial Study of the Theory of 3G-GDP
With the continuous development of society,population,resources and environmental problems have become increasingly prominent,and people have paid more attention to their welfare level changes.The traditional GDP accounting theory,because of its inherent defects,can not fully reflect the environmental costs of development,quality of life and government management level.Therefore,this article proposed the concept of 3G-GDP national economic accounting,revealing the theoretical basis and content of 3G-GDP national economic accounting from the aspects of Green GDP,Gladness GDP and Government GDP,and analyzed the feasibility of 3G-GDP national economic accounting within the framework of the SNA account system.At last,this article explored the prospects for the application of 3G-GDP and concluded that 3G-GDP index can be used as an alternative or extensive index of GDP.
Surface reconstruction from point cloud is of great practical importance in computer graphics. Existing methods often realize reconstruction via a few phases with respective goals, whose integration may not give an optimal solution. In this paper, to avoid the inherent limitations of multi-phase processing in the prior art, we propose a unified framework that treats geometry and connectivity construction as one joint optimization problem. The framework is based on dictionary learning in which the dictionary consists of the vertices of the reconstructed triangular mesh and the sparse coding matrix encodes the connectivity of the mesh. The dictionary learning is formulated as a constrained $\ell_{2,q}$-optimization (0
kor_Hang
737
Ordinal Scaling for Clinical Scores with Inconsistent Intervals (900 Patients)
Clinical studies often have categories as outcome, like various levels of health or disease. Multinomial regression is suitable for analysis (see Chap. 28). However, if one or two outcome categories in a study are severely underpresented, multinomial regression is flawed, and ordinal regression including specific link functions may provide a better fit for the data.
This paper considers the problem of tuning natural frequencies of a linear system by a memoryless controller. Using algebro-geometric methods it is shown how it is possible to improve current sufficiency conditions. The main result is an exact combinatorial characterization of the nilpotency index of the $\bmod 2$ cohomology ring of the real Grassmannian. Using this characterization, new sufficiency results for generic pole assignment for the linear system with m-inputs, p-outputs, and McMillan degree n are given. Among other results it is shown that \[2.25 \cdot \max (m,p) + \min (m,p) - 3 \geq n\] is a sufficient condition for generic real pole placement, provided $\min (m,p) \geq 4$.
eng_Latn
738
Is verumontanum resection needed in transurethral resection of the prostate
Transurethral resection of the prostate is the mainstay for treatment of bladder outflow obstruction. It is a procedure that involves various complications and has a high success rate. In view of a recent publication presenting the effect of verumontanum resection on functional outcome and possible complications after TURP, the present manuscript presents the available evidence on the subject as well as the possible criticism about the technique suggested by the authors. The results available do not confirm that resecting the verumontanum yields a clinically significant improvement in functional outcome; they do, however, confirm that continence is not affected. The criticism probably lies in the fact that resecting such a small amount of tissue like the verumontanum (its size probably remains the same with few changes during lifetime) probably does not affect outcome, yet the resection of hyperplastic apical tissue around it may play a role in functional improvement.
In the present paper, a future cone in the Minkowski space defined in terms of the square-norm of the residual vector for an ill-posed linear system to be solved, is used to derive a nonlinear system of ordinary differential equations. Then the forward Euler scheme is used to generate an iterative algorithm. Two critical values in the critical descent tri-vector are derived, which lead to the largest convergence rate of the resultant iterative algorithm, namely the globally optimal tri-vector method (GOTVM). Some numerical examples are used to reveal the superior performance of the GOTVM than the famous methods of conjugate gradient (CGM) and generalized minimal residual (GMRES). Through the numerical tests we also set forth the rationale by assuming the tri-vector as being a better descent direction.
eng_Latn
739
Q_p spaces in strictly pseudoconvex domains
We extend the definition of Q_p spaces from the unit disk to a strictly pseudoconvex domain D in ℂ^n and show that several known properties are true also in the several-variable case. We also provide some proofs and examples that are new even when restricted to the one-dimensional case.
This paper presents solutions to the entropy-constrained scalar quantizer (ECSQ) design problem for two sources commonly encountered in image and speech compression applications: sources having exponential and Laplacian probability density functions. We obtain the optimal ECSQ either with or without an additional constraint on the number of levels in the quantizer. In contrast to prior methods, which require iterative solution of a large number of nonlinear equations, the new method needs only a single sequence of solutions to one-dimensional nonlinear equations (in some Laplacian cases, one additional two-dimensional solution is needed). As a result, the new method is orders of magnitude faster than prior ones. We also show that as the constraint on the number of levels in the quantizer is relaxed, the optimal ECSQ becomes a uniform threshold quantizer (UTQ) for exponential, but not for Laplacian sources.
eng_Latn
740
Anxiety Sensitivity Among Anxious Children
Employed the Diagnostic Interview Schedule for Children to show that children diagnosed with an anxiety disorder score significantly higher on the Childhood Anxiety Sensitivity Index (CASI) than nondiagnosed children. Interviews and self-report measures regarding the child were completed by 201 children and their parents from a metropolitan area military community who were participating in a National Institute of Mental Health epidemiological survey. An analysis of variance was used to compare CASI scoring across three groups: children receiving anxiety diagnoses, children receiving externalizing diagnoses but no anxiety diagnosis, and children receiving no diagnoses. Although scoring on the CASI differentiated anxious children from the no-diagnosis control group, it did not differentiate anxious children from those receiving externalizing diagnoses. Implications of the findings for the validity of the CASI, the issue of anxiety sensitivity as a component of some externalizing disorders, and suggestions f...
Matrix concentration inequalities have attracted much attention in diverse applications such as linear algebra, statistical estimation, combinatorial optimization, etc. In this paper, we present new Bernstein concentration inequalities depending only on the first moments of random matrices, whereas previous Bernstein inequalities are heavily relevant to the first and second moments. Based on those results, we analyze the empirical risk minimization in the presence of label noise. We find that many popular losses used in risk minimization can be decomposed into two parts, where the first part won't be affected and only the second part will be affected by noisy labels. We show that the influence of noisy labels on the second part can be reduced by our proposed LICS (Labeled Instance Centroid Smoothing) approach. The effectiveness of the LICS algorithm is justified both theoretically and empirically.
eng_Latn
741
On the Extension Value Calculation & Influence Factors of Prestressed Reinforcing Steel of Post-tensioning Method Construction
The post-tensioning method is analyzed briefly in this paper, focusing on the construction risk arising from different construction sequences of the prestressed reinforcing steel. Two methods for the theoretical extension value of the prestressed reinforcing steel are introduced according to the standard. Then, the practical extension calculation is presented in detail, involving the practical extension value and the deducted value. Finally, the influence of lifting-jack installation on the tensioning force is discussed, and the effect of tensioning at one end with supplementary tensioning at the other end is analyzed.
Automated understanding of spatio-temporal usage patterns of real-world applications are significant in urban planning. With the capability of smartphones collecting various information using inbuilt sensors, the smart city data is enriched with multiple contexts. Whilst tensor factorization has been successfully used to capture latent factors (patterns) exhibited by the real-world datasets, the multifaceted nature of smart city data needs an improved modeling to utilize multiple contexts in sparse condition. Thus, in our ongoing research, we aim to model this data with a novel Context-Aware Nonnegative Coupled Sparse Matrix Tensor (CAN-CSMT) framework which imposes sparsity constraint to learn the true factors in sparse data. We also aim to develop a fast and efficient factorization algorithm to deal with the scalability problem persistent in the state-of-the-art factorization algorithms.
eng_Latn
742
Deformed density matrix, Density of entropy and Information problem
Quantum Mechanics at the Planck scale is considered as a deformation of conventional Quantum Mechanics. Similar to the earlier works of the author, the main object of deformation is the density matrix. On this basis a notion of the entropy density is introduced, a matrix-valued quantity used for a detailed study of the Information Problem in the Universe and, in particular, of the Information Paradox Problem.
Many data analysis problems involve an investigation of relationships between attributes in heterogeneous databases, where different prediction models can be more appropriate for different regions. We propose a technique of integrating global and local random subspace ensemble. We performed a comparison with other well known combining methods on standard benchmark datasets and the proposed technique gave better accuracy.
eng_Latn
743
Effect of CNT and TiC hybrid reinforcement on the micro-mechano-tribo behaviour of aluminium matrix composites
Abstract In the present study Aluminium-3003 alloy metal matrix reinforced with single walled CNTs and TiC were fabricated with stir casting process. Aluminium matrix composites found many applications in aerospace and structural engineering. In this study wt.% of CNT is fixed as 0.5 wt.% and TiC content is varied from (0.5 wt.%–2 wt.%) at an interval of 0.5 wt.% Microstructural, Mechanical and Tribological behaviour of composites were investigated. It is found that with increase in reinforcement content uniform dispersion of particles is found. Density of composites is decreased due to volatile nature of reinforcement particles. Hardness of composites is increased with increase in reinforcement content. Wear rate of composites is lower at lower loads and also at higher reinforcement contents.
Automated understanding of spatio-temporal usage patterns of real-world applications are significant in urban planning. With the capability of smartphones collecting various information using inbuilt sensors, the smart city data is enriched with multiple contexts. Whilst tensor factorization has been successfully used to capture latent factors (patterns) exhibited by the real-world datasets, the multifaceted nature of smart city data needs an improved modeling to utilize multiple contexts in sparse condition. Thus, in our ongoing research, we aim to model this data with a novel Context-Aware Nonnegative Coupled Sparse Matrix Tensor (CAN-CSMT) framework which imposes sparsity constraint to learn the true factors in sparse data. We also aim to develop a fast and efficient factorization algorithm to deal with the scalability problem persistent in the state-of-the-art factorization algorithms.
eng_Latn
744
Controlling orthogonality constraints for better NMF clustering
In this paper we study a variation of Non-negative Matrix Factorization (NMF) called Orthogonal NMF (ONMF). This special type of NMF was proposed in order to increase the quality of clustering results of standard NMF by imposing orthogonality on the clustering indicator matrix and/or the matrix of basis vectors. We develop an extension of ONMF which we call Weighted ONMF and propose a novel approach for imposing orthogonality on the matrix of basis vectors obtained via NMF using the Gram-Schmidt process.
We prove existence of solutions for a class of systems of subelliptic PDEs arising from Mean Field Game systems with H\"ormander diffusion. These results are motivated by the feedback synthesis Mean Field Game solutions and the Nash equilibria of a large class of $N$-player differential games.
eng_Latn
745
Nullspace Approach to Determine the Elementary Modes of Chemical Reaction Systems
The analysis of a chemical reaction network by elementary flux modes is a very elegant method to deal with the stationary states of the system. Each steady state of the network can be represented as a convex combination of these modes. They are elements of the nullspace of the stoichiometry matrix due to the imposed steady-state condition. We propose an approach which first derives the basis vectors of the nullspace and then calculates the elementary modes by an apt linear combination of the basis vectors. The algorithm exploits the special representation of the nullspace matrix in the space of flows and the fact that elementary modes consist of a minimal set of flows. These two ingredients lead to construction rules, which diminish the combinatorial possibilities to design elementary modes and, hence, reduce the computational costs. Further, we show that the algorithm also accounts for reversible reactions. If a system includes reversible reactions, it can be transformed into a unidirectional network b...
Abstract Recently new optimal Krylov subspace methods have been discovered for normal matrices. In light of this, novel ways to quantify nonnormality are considered in connection with various families of matrices. We use as a criterion how, for a given matrix, these iterative methods introduced can be employed via, e.g., inexpensive matrix factorizations. The unitary orbit of the set of binormal matrices provides a natural extension of normal matrices. Its elements yield polynomially normal matrices of moderate degree. In this context several matrix nearness problems arise.
eng_Latn
746
A Diagonal-Augmented quasi-Newton method with application to factorization machines
We present a novel quasi-Newton method for convex optimization, in which the Hessian estimates are based not only on the gradients, but also on the diagonal part of the true Hessian matrix (which can often be obtained with reasonable complexity). The new algorithm is based on the well known Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm and has similar complexity. The proposed Diagonal-Augmented BFGS (DA-BFGS) method is shown to be stable and achieves a super-linear convergence rate in a local neighborhood of the optimal argument. Numerical experiments on logistic regression and factorization machines problems showcase that DA-BFGS consistently outperforms the baseline BFGS and Newton algorithms.
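For orientation, here is a sketch of the plain BFGS inverse-Hessian update that DA-BFGS builds on; the diagonal augmentation itself is not reproduced, and the backtracking line search and the quadratic test problem are illustrative choices.

import numpy as np

def bfgs(f, grad, x0, n_iter=50):
    """Plain BFGS with backtracking line search (sketch of the baseline method)."""
    x, H = x0.astype(float), np.eye(len(x0))          # H approximates the inverse Hessian
    g = grad(x)
    for _ in range(n_iter):
        d = -H @ g                                     # quasi-Newton search direction
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):   # Armijo backtracking
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if s @ y > 1e-12:                              # curvature condition; skip update otherwise
            rho = 1.0 / (s @ y)
            V = np.eye(len(x)) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)     # standard BFGS inverse-Hessian update
        x, g = x_new, g_new
    return x

A = np.diag([1.0, 4.0, 9.0, 25.0])                     # illustrative strongly convex quadratic
b = np.array([1.0, -2.0, 0.5, 3.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
x_hat = bfgs(f, lambda x: A @ x - b, np.zeros(4))
print(np.allclose(x_hat, np.linalg.solve(A, b), atol=1e-5))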
Abstract The paper is concerned with a reduced SIR model for migrant workers. By using differential inequality technique and a novel argument, we derive a set of conditions to ensure that the endemic equilibrium of the model is globally exponentially stable. The obtained results complement with some existing ones. We also use numerical simulations to demonstrate the theoretical results.
eng_Latn
747
Treatment of intractable hyperemesis gravidarum by ondansetron
Hyperemesis gravidarum is a disabling condition. It is not uncommon that patients request termination of pregnancy because of intolerable symptoms and psychological stress. We report a case in which termination of pregnancy was avoided by the use of ondansetron to treat the hyperemesis gravidarum.
In the present paper, a future cone in the Minkowski space defined in terms of the square-norm of the residual vector for an ill-posed linear system to be solved, is used to derive a nonlinear system of ordinary differential equations. Then the forward Euler scheme is used to generate an iterative algorithm. Two critical values in the critical descent tri-vector are derived, which lead to the largest convergence rate of the resultant iterative algorithm, namely the globally optimal tri-vector method (GOTVM). Some numerical examples are used to reveal the superior performance of the GOTVM than the famous methods of conjugate gradient (CGM) and generalized minimal residual (GMRES). Through the numerical tests we also set forth the rationale by assuming the tri-vector as being a better descent direction.
eng_Latn
748
Efficient Riemannian algorithms for optimization under unitary matrix constraint
In this paper we propose practical algorithms for optimization under unitary matrix constraint. This type of constrained optimization is needed in many signal processing applications. Steepest descent and conjugate gradient algorithms on the Lie group of unitary matrices are introduced. They exploit the Lie group properties in order to reduce the computational cost. Simulation examples on signal separation in MIMO systems demonstrate the fast convergence and the ability to satisfy the constraint with high fidelity.
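A minimal sketch of one such step: the Euclidean gradient is turned into a skew-Hermitian direction and the iterate is moved along the unitary group with a matrix exponential, so the constraint holds by construction. The cost function (Frobenius distance to a target unitary), the step size, and the crude step-size control are illustrative assumptions, not the paper's algorithm.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
n = 4
W_target, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))   # illustrative target

def cost(W):                                     # J(W) = ||W - W_target||_F^2, minimized over unitary W
    return np.linalg.norm(W - W_target) ** 2

W, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))           # unitary start
mu = 0.1
for _ in range(300):
    G = W - W_target                             # Euclidean (Wirtinger) gradient of J, up to a constant
    Omega = G @ W.conj().T - W @ G.conj().T      # skew-Hermitian Riemannian direction
    W_new = expm(-mu * Omega) @ W                # move along the unitary group; stays unitary
    if cost(W_new) > cost(W):
        mu *= 0.5                                # crude step-size control
    else:
        W = W_new
print(round(cost(W), 6), np.allclose(W.conj().T @ W, np.eye(n), atol=1e-8))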
We consider a system of uniform recurrence equations of dimension 1 and we show how its computation can be carried out using minimal memory size with several synchronous processors. This result is then applied to register minimization for digital circuits and parallel computation of task graphs.
eng_Latn
749
Spatial Filtering/Kernel Density Estimation
Kernel density estimation methods are described and their utility in applications in human geography is discussed. Unweighted and weighted kernel methods, and spatially adaptive methods, are described. Each is illustrated with an example of estimating infant mortality rates in a US city. These methods are briefly compared with alternative methods and some limitations in their use are discussed.
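The basic unweighted estimator described here fits in a few lines; the bandwidth, the mixture data, and the check that the estimate integrates to one are illustrative, and the weighted variant is obtained by replacing the uniform 1/n weights.

import numpy as np

def gaussian_kde(x_grid, data, h, weights=None):
    """Kernel density estimate f(x) = sum_i w_i * N(x; data_i, h^2); uniform w_i = 1/n by default."""
    if weights is None:
        weights = np.full(len(data), 1.0 / len(data))
    z = (x_grid[:, None] - data[None, :]) / h
    return (weights * np.exp(-0.5 * z ** 2) / (h * np.sqrt(2.0 * np.pi))).sum(axis=1)

rng = np.random.default_rng(5)
data = np.concatenate([rng.normal(-2.0, 0.5, 300), rng.normal(1.0, 1.0, 700)])
grid = np.linspace(-6.0, 6.0, 241)
dens = gaussian_kde(grid, data, h=0.3)
print(round(float(dens.sum() * (grid[1] - grid[0])), 3))   # ≈ 1: the estimate integrates to one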
In this paper, aimed at the neutron transport equations of eigenvalue problem under 2-D cylindrical geometry on unstructured grid, the discrete scheme of Sn discrete ordinate and discontinuous finite is built, and the parallel computation for the scheme is realized on MPI systems. Numerical experiments indicate that the designed parallel algorithm can reach perfect speedup, it has good practicality and scalability.
kor_Hang
750
Training-based SLM-realizable composite filter design
Majority-granted nonlinearity-based training for the synthesis of a composite filter is proposed. The motivation is the limited modulation capability of the spatial light modulator (SLM), which is incorporated as a design constraint in the form of a simple thresholding scheme. The technique is simple and less computationally intensive, nevertheless resulting in a sufficiently robust SLM-realizable distortion-invariant correlator. Simulation results are provided for the case of rotation-invariant character recognition, which show better correlation discrimination and robust recognition for a range of distorted characters.
A general purpose block LU preconditioner for saddle point problems is presented. A major difference between the approach presented here and that of other studies is that an explicit, accurate approximation of the Schur complement matrix is efficiently computed. This is used to obtain a preconditioner to the Schur complement matrix which in turn defines a preconditioner for the global system. A number of variants are developed and results are reported for a few linear systems arising from CFD applications.
eng_Latn
751
The reduction of complex dynamical systems using principal interaction patterns
Abstract A method of constructing low-dimensional nonlinear models capturing the main features of complex dynamical systems with many degrees of freedom is described. The system is projected onto a linear subspace spanned by only a few characteristic spatial structures called Principal Interaction Patterns (PIPs). The expansion coefficients are assumed to be governed by a nonlinear dynamical system. The optimal low-dimensional model is determined by identifying spatial modes and interaction coefficients describing their time evolution simultaneously according to a nonlinear variational principle. The algorithm is applied to a two-dimensional geophysical fluid system on the sphere. The models based on Principal Interaction Patterns are compared to models using Empirical Orthogonal Functions (EOFs) as basis functions. A PIP-model using 12 patterns is capable of capturing the long-term behaviour of the complete system monitored by second-order statistics, while in the case of EOFs 17 modes are necessary.
Surface reconstruction from point cloud is of great practical importance in computer graphics. Existing methods often realize reconstruction via a few phases with respective goals, whose integration may not give an optimal solution. In this paper, to avoid the inherent limitations of multi-phase processing in the prior art, we propose a unified framework that treats geometry and connectivity construction as one joint optimization problem. The framework is based on dictionary learning in which the dictionary consists of the vertices of the reconstructed triangular mesh and the sparse coding matrix encodes the connectivity of the mesh. The dictionary learning is formulated as a constrained $\ell_{2,q}$-optimization (0
eng_Latn
752
Some remarks on the functional relation between canonical correlation analysis and partial least squares
ABSTRACTThis paper deals with the functional relation between multivariate methods of canonical correlation analysis (CCA), partial least squares (PLS) and also their kernelized versions. Both methods are determined by the solution of the respective optimization problem, and result in algorithms using spectral or singular decomposition theories. The solution of the parameterized optimization problem, where the boundary points of a parameter give exactly the results of CCA (resp. PLS) method leads to the vector functions (paths) of eigenvalues and eigenvectors or singular values and singular vectors. Specifically, in this paper, the functional relation means the description of classes into which the given paths belong. It is shown that if input data are analytical (resp. smooth) functions of a parameter, then the vector functions are also analytical (resp. smooth). Those approaches are studied on three practical examples of European tourism data.
Automated understanding of spatio-temporal usage patterns of real-world applications are significant in urban planning. With the capability of smartphones collecting various information using inbuilt sensors, the smart city data is enriched with multiple contexts. Whilst tensor factorization has been successfully used to capture latent factors (patterns) exhibited by the real-world datasets, the multifaceted nature of smart city data needs an improved modeling to utilize multiple contexts in sparse condition. Thus, in our ongoing research, we aim to model this data with a novel Context-Aware Nonnegative Coupled Sparse Matrix Tensor (CAN-CSMT) framework which imposes sparsity constraint to learn the true factors in sparse data. We also aim to develop a fast and efficient factorization algorithm to deal with the scalability problem persistent in the state-of-the-art factorization algorithms.
eng_Latn
753
Bending of a light ray in a radially varying refractive index. If light passes near a massive object, the gravitational interaction causes a bending of the ray. This can be thought of as happening due to a change in the effective refractive index of the medium, given by $$n(r) = 1 + 2 \frac{GM}{rc^2}$$ where $r$ is the distance of the point of consideration from the centre of mass of the massive body, G is the universal gravitational constant, M the mass of the body and c the speed of light in vacuum. Considering a spherical object, find the deviation of the ray from the original path as it grazes the object.
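For reference, a standard way to estimate the requested deviation from the given index profile (a sketch of the usual small-angle calculation, treating the ray as nearly straight with impact parameter equal to the body's radius R):

$$\delta \;\approx\; \int_{-\infty}^{\infty}\left|\frac{\partial n}{\partial y}\right|_{y=R} dx \;=\; \int_{-\infty}^{\infty} \frac{2GMR}{c^{2}\,(x^{2}+R^{2})^{3/2}}\, dx \;=\; \frac{4GM}{Rc^{2}},$$

which recovers the familiar value 4GM/(Rc^2), twice the naive Newtonian estimate.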
How can gravity affect light? I understand that a black hole bends the fabric of spacetime to a point that no object can escape. I understand that light travels in a straight line along spacetime unless distorted by gravity. If spacetime is being curved by gravity, then light should follow that bend in spacetime. In Newton's Law of Universal Gravitation, the mass of both objects must be entered, but a photon has no mass, so why should a massless photon be affected by gravity in Newton's equations? What am I missing?
How exactly to compute the ridge regression penalty parameter given the constraint? The accepted answer in does a great job of showing that there is a one-to-one correspondence between $c$ and $\lambda$ in the two formulations of the ridge regression: $$ \underset{\beta}{min}(y-X\beta)^T(y-X\beta) + \lambda\beta^T\beta $$ and $$ \underset{\beta}{min}(y-X\beta)^T(y-X\beta) \text{ s.t. }\beta^T\beta\leq{c} $$ The linked answer shows this in the orthogonal case. In a general (non-orthogonal case), how can I compute $\lambda$ from $c$? Update Here is an answer for going from $\lambda$ to $c$: Assuming that the coefs are constrained by the penalty, $$ \beta^T\beta = c $$ and $$ \beta = (X^TX + \lambda I)^{-1}X^Ty\\ \beta^T\beta = c = \beta^T(X^TX + \lambda I)^{-1}X^Ty $$ Still working on going the other way
eng_Latn
754
How to use SVD for dimensionality reduction. After reading several "tutorials" on SVD I am still left wondering how to use it for dimensionality reduction. Here is my confusion in an applied setting. If I limit the SVD to only considering the first two singular values / vectors and "recreate" the matrix, the dimensionality is still the same (4 columns). What should be done here to instead only use 2 columns? data(iris) s<-svd(iris[,-5]) u<-as.matrix(s$u[,1:2]) v<-as.matrix(s$v[,1:2]) d<-as.matrix(diag(s$d)[1:2, 1:2]) s2<-u%*%d%*%t(v)
Relationship between SVD and PCA. How to use SVD to perform PCA? Principal component analysis (PCA) is usually explained via an eigen-decomposition of the covariance matrix. However, it can also be performed via singular value decomposition (SVD) of the data matrix $\mathbf X$. How does it work? What is the connection between these two approaches? What is the relationship between SVD and PCA? Or in other words, how to use SVD of the data matrix to perform dimensionality reduction?
Before running a ridge regression model, do I need to preform variable selection? I am currently constructing a model that uses last year's departmental information to predict employee churn for the current year. I have 55 features and 318 departments in my data set. A good portion of my independent variables are correlated, and because of this, I believe that performing a ridge regression on my data will lead to optimal predictions when I bring the model into production. I have studied ridge regression and understand that the lambda coefficient computed for a given predictor can minimize the effect that predictor has on the model to next to nothing. Does this mean that performing a ridge regression means I don't have to bother with variable selection? If I do need to perform a variable selection technique, would implementing a stepwise regression and then using those selected variables in my ridge regression be a valid approach at variable selection? I already posted this question on stack exchange but was informed that stack exchange was the better platform to ask statistical questions. I am sorry for the confusion.
eng_Latn
755
The Variational Gaussian Approximation Revisited
Gaussian Processes for Machine Learning
Beam selection for performance-complexity optimization in high-dimensional MIMO systems
eng_Latn
756
Observational transversal study to relate functional status and age with the doublet or triplet chemotherapy based on capecitabine in advanced gastric cancer patients.
59 Background: The available evidence suggests that selection of treatment for advanced gastric cancer (AGC) correlates with age and ECOG PS. This study was conducted to analyze whether previously mentioned variables are relevant for the choice of doublet or triplet regimens with capecitabine and determining prognosis. Methods: Multicenter, cross-sectional, observational study in patients with AGC who received at least 2-cycles of capecitabine-based doublet or triplet chemotherapy, with or without measurable disease. The age, as a continuous and categorical (> 64 vs ≤ 64) variable, and ECOG PS were analyzed by logistic regression. Results: A total of 175 patients were evaluated. Median age 65.5 (56-72) years, male: 68% ECOG 0/1/2: 32.7%/55.6%/11.1%. 33% underwent doublet and 67% triplet chemotherapy. Tumour histology: signet-ring cell carcinoma (29%), papillary (13%), mucinous (12%) and tubular (3.5%). Most common sites of metastases: lymph nodes (48%), peritoneum (41%), liver (38%) and lung (12%). Multiv...
In the present paper, a future cone in the Minkowski space defined in terms of the square-norm of the residual vector for an ill-posed linear system to be solved, is used to derive a nonlinear system of ordinary differential equations. Then the forward Euler scheme is used to generate an iterative algorithm. Two critical values in the critical descent tri-vector are derived, which lead to the largest convergence rate of the resultant iterative algorithm, namely the globally optimal tri-vector method (GOTVM). Some numerical examples are used to reveal the superior performance of the GOTVM than the famous methods of conjugate gradient (CGM) and generalized minimal residual (GMRES). Through the numerical tests we also set forth the rationale by assuming the tri-vector as being a better descent direction.
eng_Latn
757
Actually, i have 540x46 matrix, (540 observations and 46 features) and after using PCA by considering 95% variance, it is reduced to 540x12 matrix. So, is it possible to know which 12 features from 46 are they? and order of these 12 features according to dominance level? I hope now question is clear and please let me know if you need any other information.
I'm new to feature selection and I was wondering how you would use PCA to perform feature selection. Does PCA compute a relative score for each input variable that you can use to filter out noninformative input variables? Basically, I want to be able to order the original features in the data by variance or amount of information contained.
The new Top-Bar does not show reputation changes from Area 51.
eng_Latn
758
High electric fields in DC poled glasses examined with the LIPP technique
We have investigated the depletion layer generated in DC poled soda-lime glasses using the LIPP technique. The results are interpreted taking advantage of the chemical surface analysis already carried out.
Matrix concentration inequalities have attracted much attention in diverse applications such as linear algebra, statistical estimation, combinatorial optimization, etc. In this paper, we present new Bernstein concentration inequalities depending only on the first moments of random matrices, whereas previous Bernstein inequalities are heavily relevant to the first and second moments. Based on those results, we analyze the empirical risk minimization in the presence of label noise. We find that many popular losses used in risk minimization can be decomposed into two parts, where the first part won't be affected and only the second part will be affected by noisy labels. We show that the influence of noisy labels on the second part can be reduced by our proposed LICS (Labeled Instance Centroid Smoothing) approach. The effectiveness of the LICS algorithm is justified both theoretically and empirically.
eng_Latn
759
Cramér-Rao Bound for Line Constrained Trajectory Tracking
In this paper, target tracking constrained to short-term linear trajectories is explored. The problem is viewed as an extension of the matrix decomposition problem into low-rank and sparse components by incorporating an additional line constraint. The Cramer-Rao Bound (CRB) for the trajectory estimation is derived; numerical results show that an alternating algorithm which estimates the various components of the trajectory image is near optimal due to proximity to the computed CRB. In addition to the theoretical contribution of incorporating an additional constraint in the estimation problem, the alternating algorithm is applied to real video data and shown to be effective in estimating the trajectory despite it not being exactly linear.
The purpose of the paper is to give a complete characterization of the continuity of lower envelopes in the infinite dimensional spaces in terms of the notion of c-regularity. As an application we introduce a variational unconstrained vector optimization problem for smooth functions and characterize when the variational steepest descent directions are continuous in terms of the generating sets which are considered.
eng_Latn
760
Total variation projection with first order schemes
This paper proposes a new class of algorithms to compute the projection onto the set of images with a total variation bounded by a constant. The projection is computed on a dual formulation of the problem that is minimized using either a one-step gradient descent method or a multi-step Nesterov scheme. This yields iterative algorithms that compute soft thresholding of the dual vector fields. We show the convergence of the method with a convergence rate of O(1/k) for the one-step method and O(1/k^2) for the multi-step one, where k is the iteration number. The projection algorithm can be used as a building block in several applications, and we illustrate it by solving linear inverse problems under total variation constraint. Numerical results show that our algorithm competes favorably with state-of-the-art TV projection methods to solve denoising, inpainting and deblurring problems.
We prove that a discrete series representations of metaplectic group over a non-archimedean local eld has a generic theta lift on the split odd orthogonal tower if and only if it is generic. Also, we determine the rst occurrence indices of such representations and describe the structure of their theta lifts.
eng_Latn
761
The method of micro-displacement measurement to improve the space resolution of array detector.
This paper introduces the method of micro-displacement measurement to improve the space resolution of limited size array detector. This method could also be applied in various research areas, especially in image measurement of nuclear detection.
Abstract We show how stability of models can be guaranteed when using the class of identification algorithms which have become known as ‘subspace methods’. In many of these methods the ‘ A ’ matrix is obtained (or can be obtained) as the product of a shifted matrix with a pseudo-inverse. We show that whenever the shifted matrix is formed by introducing one block of zeros in the appropriate position, then a stable model results. The cost of this is some (possibly large) distortion of the results, but in some applications that is outweighed by the advantage of guaranteed stability.
eng_Latn
762
A facile stereoselective synthesis of α-glycosyl ureas
α-Glycosyl ureas can be synthesised directly from tetra-O-benzyl glycosyl azides and isocyanates, using a one-pot procedure that is simple and general in scope. The benzyl protecting groups are easily removed from the urea products by catalytic hydrogenation. The synthesised α-glycosyl ureas represent a new class of neo-glycoconjugates with the potential of being resistant towards carbohydrate processing enzymes.
In this paper we propose practical algorithms for optimization under unitary matrix constraint. This type of constrained optimization is needed in many signal processing applications. Steepest descent and conjugate gradient algorithms on the Lie group of unitary matrices are introduced. They exploit the Lie group properties in order to reduce the computational cost. Simulation examples on signal separation in MIMO systems demonstrate the fast convergence and the ability to satisfy the constraint with high fidelity.
eng_Latn
763
Robust hierarchical image representation using non-negative matrix factorisation with sparse code shrinkage preprocessing
When analysing patterns, our goals are (i) to find structure in the presence of noise, (ii) to decompose the observed structure into sub-components, and (iii) to use the components for pattern completion. Here, a novel loop architecture is introduced to perform these tasks in an unsupervised manner. The architecture combines sparse code shrinkage with non-negative matrix factorisation, and blends their favourable properties: sparse code shrinkage aims to remove Gaussian noise in a robust fashion; non-negative matrix factorisation extracts substructures from the noise filtered inputs. The loop architecture performs robust pattern completion when organised into a two-layered hierarchy. We demonstrate the power of the proposed architecture on the so-called `bar-problem' and on the FERET facial database.
Consider arbitrary collections A = a_1, a_2, ..., a_n of items and Q = q_1, q_2, ..., q_m (1 ≤ m ≤ n) of queries from a totally ordered universe. The multiple rank problem involves computing, for every query q_i, the number of items in A that have a lesser value. Our contribution is to show that the problem at hand can be solved time-optimally on meshes with multiple broadcasting. More specifically, if the collection A is stored in some order one item per processor and if Q is stored one query per processor in the leftmost m/√n columns of a mesh with multiple broadcasting of size √n x √n, the corresponding instance of the multiple rank problem can be solved in Θ(m^{1/3} n^{1/6}) time. As an application we present a time-optimal algorithm to compute the histogram of an m-level gray image of size √n x √n in Θ(m^{1/3} n^{1/6}) time.
eng_Latn
764
Dimension Reduction of Large-Scale Systems
In the past decades, model reduction has become an ubiquitous tool in analysis and simulation of dynamical systems, control design, circuit simulation, structural dynamics, CFD, and many other disciplines dealing with complex physical models. The aim of this book is to survey some of the most successful model reduction methods in tutorial style articles and to present benchmark problems from several application areas for testing and comparing existing and new algorithms. As the discussed methods have often been developed in parallel in disconnected application areas, the intention of the mini-workshop in Oberwolfach and its proceedings is to make these ideas available to researchers and practitioners from all these different disciplines.
Many data analysis problems involve an investigation of relationships between attributes in heterogeneous databases, where different prediction models can be more appropriate for different regions. We propose a technique of integrating global and local random subspace ensemble. We performed a comparison with other well known combining methods on standard benchmark datasets and the proposed technique gave better accuracy.
yue_Hant
765
Global minimization of the active contour model with TV-Inpainting and two-phase denoising
The active contour model [8,9,2] is one of the most well-known variational methods in image segmentation. In a recent paper by Bresson et al. [1], a link between the active contour model and the variational denoising model of Rudin-Osher-Fatemi (ROF) [10] was demonstrated. This relation provides a method to determine the global minimizer of the active contour model. In this paper, we propose a variation of this method to determine the global minimizer of the active contour model in the case when there are missing regions in the observed image. The idea is to turn off the L1-fidelity term in some subdomains, in particular the regions for image inpainting. Minimizing this energy provides a unified way to perform image denoising, segmentation and inpainting.
We present a 4-approximation algorithm for the problem of placing the fewest guards on a 1.5D terrain so that every point of the terrain is seen by at least one guard. This improves on the currently best approximation factor of 5 (J. King, 2006). Unlike most of the previous techniques, our method is based on rounding the linear programming relaxation of the corresponding covering problem. Besides the simplicity of the analysis, which mainly relies on decomposing the constraint matrix of the LP into totally balanced matrices, our algorithm, unlike previous work, generalizes to the weighted and partial versions of the basic problem.
eng_Latn
766
Distribution and Evolution Characteristics of China's Iodine-rich Brines: 3. Formation Conditions of the Brines and Iodine Exploration Orientation
Based on an extensive literature review, the paper deals with the distribution of China's iodine resources as well as the geological conditions for their formation. In combination with the evolution, storage types and distribution features of petroleum and natural gases, the author summarizes the distribution of iodine-rich brines and their formation conditions. Further, orientations for iodine exploration are proposed.
We present an incremental approach to 2-norm estimation for triangular matrices. Our investigation covers both dense and sparse matrices which can arise for example from a QR, a Cholesky or a LU factorization. If the explicit inverse of a triangular factor is available, as in the case of an implicit version of the LU factorization, we can relate our results to incremental condition estimation (ICE). Incremental norm estimation (INE) extends directly from the dense to the sparse case without needing the modifications that are necessary for the sparse version of ICE. INE can be applied to complement ICE, since the product of the two estimates gives an estimate for the matrix condition number. Furthermore, when applied to matrix inverses, INE can be used as the basis of a rank-revealing factorization.
eng_Latn
767
New nonlinear algorithms for finite element analysis of 2D and 3D magnetic fields
New nonlinear algorithms are presented that use the given material B–H curve directly, rather than converting it to a reluctivity ν = H/B vs. B² curve as is common. In addition to full Newton–Raphson iteration, also discussed are modified Newton–Raphson iteration, quasi-Newton iteration, and line search algorithms. In most typical small 3D and 2D magnetostatic problems, full Newton–Raphson iteration with the new direct B–H algorithm is shown to achieve convergence in a much smaller number of iterations than the ν vs. B² algorithm.
This paper proposes a novel technique for learning face features based on Bayesian regularized non-negative matrix factorization with Itakura-Saito (IS) divergence (B-NMF). We show that the proposed technique not only explicitly incorporates the notion of a 'Bayesian regularized prior' imposed on the feature learning, but also holds the scale-invariance property, which enables lower-energy components in the learning process to be treated with equal importance as the high-energy components. Real tests have been conducted and the obtained results are very encouraging.
eng_Latn
768
Risk minimization in the presence of label noise
Matrix concentration inequalities have attracted much attention in diverse applications such as linear algebra, statistical estimation, combinatorial optimization, etc. In this paper, we present new Bernstein concentration inequalities depending only on the first moments of random matrices, whereas previous Bernstein inequalities are heavily relevant to the first and second moments. Based on those results, we analyze the empirical risk minimization in the presence of label noise. We find that many popular losses used in risk minimization can be decomposed into two parts, where the first part won't be affected and only the second part will be affected by noisy labels. We show that the influence of noisy labels on the second part can be reduced by our proposed LICS (Labeled Instance Centroid Smoothing) approach. The effectiveness of the LICS algorithm is justified both theoretically and empirically.
The paper is concerned with the design of a robust guaranteed cost controller with H∞ γ-disturbance attenuation performance for linear systems with norm-bounded parameter uncertainties and disturbances.
eng_Latn
769
Algorithms for smoothing data on the sphere with tensor product splines
Algorithms are presented for fitting data on the sphere by using tensor product splines which satisfy certain boundary constraints. First we consider the least-squares problem when the knots are given. Then we discuss the construction of smoothing splines on the sphere. Here the knots are located automatically. A Fortran IV implementation of these two algorithms is described.
In this paper, we provide a theoretical analysis of a cubic regularization of Newton's method as applied to the unconstrained minimization problem. For this scheme, we prove general local convergence results. However, the main contribution of the paper is related to global worst-case complexity bounds for different problem classes, including some nonconvex cases. It is shown that the search direction can be computed by standard linear algebra techniques.
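To make the cubic regularization idea concrete, here is a one-dimensional sketch of a single step: the local model m(s) = f(x) + f'(x)s + (1/2)f''(x)s² + (σ/3)|s|³ has a closed-form minimizer in 1D. This is only my own illustration of the step rule with a fixed regularization parameter, not the paper's algorithm or its complexity analysis.

```python
import numpy as np

def cubic_newton_step(g, h, sigma):
    """One cubic-regularized Newton step in 1D.

    Minimizes the local model  phi(s) = g*s + 0.5*h*s**2 + (sigma/3)*|s|**3.
    For g != 0 the minimizer is s = -sign(g) * t with
    sigma*t**2 + h*t - |g| = 0, i.e. t = (-h + sqrt(h**2 + 4*sigma*|g|)) / (2*sigma).
    (For g == 0 we simply return 0; a full implementation would also handle
    the nonconvex g == 0, h < 0 case.)
    """
    if g == 0.0:
        return 0.0
    t = (-h + np.sqrt(h * h + 4.0 * sigma * abs(g))) / (2.0 * sigma)
    return -np.sign(g) * t

def minimize_1d(df, d2f, x0, sigma=1.0, n_iter=50):
    """Drive the step rule on a 1D function; sigma is kept fixed for simplicity
    (the paper adapts the regularization parameter)."""
    x = x0
    for _ in range(n_iter):
        x = x + cubic_newton_step(df(x), d2f(x), sigma)
    return x

# Toy usage on the nonconvex function f(x) = x**4 - 3*x**2 + x.
df = lambda x: 4 * x**3 - 6 * x + 1
d2f = lambda x: 12 * x**2 - 6
print(minimize_1d(df, d2f, x0=2.0))
```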
eng_Latn
770
A Local Level-Set Concept for Front Tracking on Arbitrary Grids
This paper proposes a general multi-dimensional front tracking concept for arbitrary physical problems. The tracking method is based on the level-set approach with a restricted dynamic definition range in the vicinity of the fronts. Special attention is drawn to the problems of the classical level-set method, i.e., accuracy issues and topological restrictions. In this regard, a less sensitive time integration is introduced and the problem of interacting discontinuities is addressed. The concept is integrated in the basic modular Finite-Volume solution package MOUSE [1] for systems of conservation laws on arbitrary grids.
Matrix concentration inequalities have attracted much attention in diverse applications such as linear algebra, statistical estimation, combinatorial optimization, etc. In this paper, we present new Bernstein concentration inequalities depending only on the first moments of random matrices, whereas previous Bernstein inequalities are heavily relevant to the first and second moments. Based on those results, we analyze the empirical risk minimization in the presence of label noise. We find that many popular losses used in risk minimization can be decomposed into two parts, where the first part won't be affected and only the second part will be affected by noisy labels. We show that the influence of noisy labels on the second part can be reduced by our proposed LICS (Labeled Instance Centroid Smoothing) approach. The effectiveness of the LICS algorithm is justified both theoretically and empirically.
eng_Latn
771
Comparison of Pressure/Flow Studies with Micturitional Urethral Pressure Profiles in the Diagnosis of Urinary Outflow Obstruction
Summary— Computer technology has made it possible significantly to improve the technique and interpretation of the micturitional urethral pressure profile (MUPP). Thirty-nine patients with lower urinary tract symptoms have been investigated by this technique and the results compared with those of standard pressure/flow studies. A good correlation was found between the two methods of diagnosing outflow obstruction, but micturitional urethral pressure profiles offered practical advantages in patients who were elderly, immobile or who had severe involuntary voiding, and diagnostic advantages in patients with absent or poor detrusor contractility and those with equivocal pressure/flow studies.
This paper on performance analysis of parameter estimation is motivated by a practical consideration that the data length is finite. In particular, for time-varying systems, we study the properties of the well-known forgetting factor least-squares (FFLS) algorithm in detail in the stochastic framework, and derive upperbounds and lowerbounds of the parameter estimation errors (PEE), using directly the finite input-output data. The analysis indicates that the mean square PEE upperbounds and lowerbounds of the FFLS algorithm approach two finite positive constants, respectively, as the data length increases, and that these PEE upperbounds can be minimized by choosing appropriate forgetting factors. We further show that for time-invariant systems, the PEE upperbounds and lowerbounds of the ordinary least-squares algorithm both tend to zero as the data length increases. Finally, we illustrate and verify the theoretical findings with several example systems, including an experimental water-level system.
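For reference, the forgetting factor least-squares (FFLS) recursion analyzed above can be sketched as follows; this is the standard textbook recursive form of the algorithm, with the variable names and the toy example being my own, and it does not reproduce the paper's parameter estimation error bounds.

```python
import numpy as np

def ffls(phis, ys, lam=0.98, delta=1e3):
    """Forgetting-factor least squares (recursive form).

    phis : sequence of regressor vectors phi_t (shape (d,))
    ys   : sequence of scalar outputs y_t
    lam  : forgetting factor in (0, 1]; lam = 1 gives ordinary recursive LS.
    Returns the parameter estimate after processing all data.
    """
    d = len(phis[0])
    theta = np.zeros(d)
    P = delta * np.eye(d)                         # large initial covariance
    for phi, y in zip(phis, ys):
        phi = np.asarray(phi, dtype=float)
        k = P @ phi / (lam + phi @ P @ phi)       # gain vector
        theta = theta + k * (y - phi @ theta)     # parameter update
        P = (P - np.outer(k, phi @ P)) / lam      # covariance update
    return theta

# Toy usage: identify a 2-parameter linear regression model from noisy data.
rng = np.random.default_rng(0)
phis = rng.normal(size=(500, 2))
true_theta = np.array([1.0, -0.5])
ys = phis @ true_theta + 0.01 * rng.normal(size=500)
print(ffls(phis, ys, lam=0.98))
```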
eng_Latn
772
Solution phase isoelectric fractionation in the multi-compartment electrolyser: A divide and conquer strategy for the analysis of complex proteomes
Sample complexity frequently interferes with the analysis of low-abundance proteins by two-dimensional gel electrophoresis (2DGE). Ideally, high abundance proteins should be removed, allowing low-abundance proteins to be applied at much higher concentrations than is possible with the unfractionated sample. One approach is to partition the sample in a manner that segregates the bulk of extraneous proteins from the protein(s) of interest. Solution phase isoelectric focusing in the multi-compartment electrolyser generates fractions of discrete isoelectric point (pI) intervals allowing isolated narrow segments of a proteome to be analysed individually by 2DGE. It is particularly useful for the isolation of low-abundance proteins of extremely basic or acidic pI.
This paper presents a comparison of three main algorithms for generating initial codebooks with reference to convergence accuracy and speed. With both elements considered, splitting methods are found to be more advantageous. Through a study of the common splitting method, an improved one, the orthogonal increment splitting algorithm, is proposed. The new method can better utilize the symmetrical property of speech or image sources, and endows the codewords of the initial codebook with a better distribution in the source space compared with the previous splitting method. The experimental results have confirmed the advantages of the new method, which can effectively eliminate the empty cells and non-typical codewords that may appear in the iteration process, thus achieving better performance in both convergence speed and accuracy.
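A minimal sketch of the common splitting method discussed above (the baseline, not the proposed orthogonal increment variant) might look like the following; the helper names, the perturbation factor and the fixed number of Lloyd iterations are my own simplifications.

```python
import numpy as np

def splitting_init(data, codebook_size, eps=0.01, n_lloyd=10):
    """Initial codebook by the common splitting (LBG-style) method.

    Start from the global centroid and repeatedly split every codeword
    into a perturbed pair c*(1+eps), c*(1-eps), refining with a few
    Lloyd iterations after each split.
    """
    codebook = [data.mean(axis=0)]
    while len(codebook) < codebook_size:
        codebook = [c * (1 + s) for c in codebook for s in (eps, -eps)]
        codebook = _lloyd(np.array(codebook), data, n_lloyd)
    return np.array(codebook[:codebook_size])

def _lloyd(codebook, data, n_iter):
    for _ in range(n_iter):
        # assign each training vector to its nearest codeword
        d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        for j in range(len(codebook)):
            members = data[labels == j]
            if len(members):              # keep the old codeword if the cell is empty
                codebook[j] = members.mean(axis=0)
    return list(codebook)

# Toy usage: 16-word initial codebook for 2-D training vectors.
data = np.random.default_rng(0).normal(size=(1000, 2))
print(splitting_init(data, codebook_size=16).shape)
```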
eng_Latn
773
Low frequency acoustic response of a periodic layer of spherical inclusions in an elastic solid to a normally incident plane longitudinal wave
The influence of particle mass density on the reflection and transmission spectra of a plane longitudinal wave normally incident on a periodic (square) array of identical spherical particles in a polyester matrix are measured at wavelengths which are comparable to the particle radius and the inter-particle distance. The spectra are characterized by several resonances whose frequencies are close to the cut-off frequencies for the shear wave modes, which are analogs of spectral orders in diffraction gratings. Arrays of heavy particles (lead and steel) exhibit a pronounced resonance anomaly which occurs when the lattice resonant frequency is close to the frequency of the rigid body translation (dipole) resonance of an isolated sphere in an unbounded matrix. An approximate low frequency theory is developed which takes into account the multiple scattering effect. The theory shows good comparison with the experimental data for arrays with particle area fractions as high as 32%.
Matrix concentration inequalities have attracted much attention in diverse applications such as linear algebra, statistical estimation, combinatorial optimization, etc. In this paper, we present new Bernstein concentration inequalities depending only on the first moments of random matrices, whereas previous Bernstein inequalities are heavily relevant to the first and second moments. Based on those results, we analyze the empirical risk minimization in the presence of label noise. We find that many popular losses used in risk minimization can be decomposed into two parts, where the first part won't be affected and only the second part will be affected by noisy labels. We show that the influence of noisy labels on the second part can be reduced by our proposed LICS (Labeled Instance Centroid Smoothing) approach. The effectiveness of the LICS algorithm is justified both theoretically and empirically.
eng_Latn
774
Combination of weighted ℓ2,1 minimization with unitary transformation for DOA estimation
Abstract Using the centro-symmetry property of the uniform linear array (ULA), we propose an algorithm that combines weighted ℓ2,1 minimization with the unitary transformation to improve the performance of DOA estimation. Exploiting the result of the unitary transformation, more credible weights can be obtained and the jointly sparse constraint can be further enhanced. Moreover, the unitary transformation incorporates forward–backward spatial smoothing, which improves the performance of the weighted ℓ2,1 minimization for correlated sources. Simulations demonstrate that the proposed method can achieve better performance in terms of resolution and estimation accuracy.
Abstract This article develops a solution methodology for project time compression problems in CPM/PERT type networks with convex or concave activity cost-duration functions. The proposed procedure approximates these relationships by piece-wise linear time-cost curves. The solution procedure is based on the Benders decomposition approach and seeks to minimize the total direct cost of a project subject to activity precedence relationships, as well as upper/lower bounds on activity durations. The computational efficiency of the proposed decomposition methodology is also discussed.
eng_Latn
775
Method of convex rigid frames and applications in studies of multipartite quNit pure states
In this letter, we suggest a method of convex rigid frames for the study of multipartite quNit pure states. We explain what convex rigid frames are and how the method works. As applications, we use this method to solve some basic problems and give some new results (three theorems): first, the problem of the partial separability of multipartite quNit pure states and its geometric explanation; second, the problem of the classification of multipartite quNit pure states, giving a clear explanation of the local unitary transformations; thirdly, we discuss the invariants of the classes and give a possible physical explanation.
Abstract. One of the goals, in the context of nonparametric regression by smoothing spline functions, is to choose the optimal value of the smoothing parameter. In this paper, we deal with the cross-validation (CV) method as a performance criterion for smoothing parameter selection. First, we implement a CV-based algorithm in the Matlab 6.5 environment and apply it to a test function in order to demonstrate the quality of the fit obtained by the CV smoothing spline. Then, we fit some real data with this kind of function.
eng_Latn
776
An FPGA-based design for real-time Super Resolution Reconstruction
For several decades, camera spatial resolution has been gradually increasing with the evolution of CMOS technology. Image sensors provide more and more pixels, generating new constraints on the suitable optics. As an alternative, promising solutions propose Super Resolution (SR) image reconstruction to extend the image size without modifying the sensor architecture. Convincing state-of-the-art studies demonstrate that these methods can even be implemented in real time. Nevertheless, artifacts can be observed in highly textured areas of the image. In this paper, we propose a Local Adaptive Spatial Super Resolution (LASSR) method to fix this limitation. A real-time texture analysis is included in a spatial super resolution scheme. The proposed local approach enables adjustment to the intrinsic heterogeneity of image texture. An FPGA-based implementation of the proposed method is then presented, which enables high-quality 4K super-resolution video to be produced at 16 fps from a standard 2K video stream.
In this paper, we investigate the problem of recovering a positive semi-definite (PSD) matrix from 1-bit sensing. The measurement matrix is rank-1 and constructed as the outer product of a pair of vectors whose entries are independent and identically distributed (i.i.d.) Gaussian variables. The recovery problem is solved in closed form through convex programming. Our analysis reveals that the solution is biased in general. However, in the case of error-free measurement, we find that for a rank-r PSD matrix with bounded condition number, the bias decreases with an order of O(1/r). Therefore, an approximate recovery is still possible. Numerical experiments are conducted to verify our analysis.
eng_Latn
777
Weighted combination for multiple fixes
Fusion of multiple fixes is an important step in determining signal source location. Various fusion algorithms for multiple fixes and for a fix with a bearing have been developed. However, the result from the commonly used fusion algorithm, which is normalized by the error covariance matrices, is largely affected by the size and orientation of the respective error ellipses. In some applications, this can produce an unreasonable estimate, as the fused point may be strongly biased away from the centroid of the fixes. This paper first presents a unified fusion algorithm with weighted combination for multiple fixes; then, by taking various typical types of weights, including matrix and scalar weights, we specify the formulations for different scenarios and purposes. Fusion of a fix with a bearing is also discussed, as it is an important case that shows the necessity of weighted fusion. Testing using a high-fidelity simulation system demonstrated that the algorithm performs well under different requirements.
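The commonly used fusion algorithm normalized by the error covariance matrices, referred to above, is the classical information-weighted combination. A minimal sketch is given below; it shows only this baseline (which the paper argues can bias the fused point away from the centroid of the fixes), not the paper's weighted variants, and the function name and toy numbers are illustrative.

```python
import numpy as np

def fuse_fixes(fixes, covariances):
    """Classical covariance-weighted fusion of position fixes.

    fixes       : list of position estimates x_i (e.g. shape (2,) for 2-D fixes)
    covariances : list of error covariance matrices P_i (shape (2, 2))
    Returns the fused fix and its covariance:
        P = (sum_i P_i^{-1})^{-1},   x = P @ sum_i P_i^{-1} x_i
    """
    info = sum(np.linalg.inv(P) for P in covariances)                 # information matrix
    info_state = sum(np.linalg.inv(P) @ np.asarray(x)
                     for P, x in zip(covariances, fixes))
    P_fused = np.linalg.inv(info)
    return P_fused @ info_state, P_fused

# Toy usage: two fixes with very different error ellipses.
x, P = fuse_fixes(
    fixes=[np.array([0.0, 0.0]), np.array([10.0, 0.0])],
    covariances=[np.diag([1.0, 100.0]), np.diag([100.0, 1.0])],
)
print(x)
```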
Abstract Necessary conditions and iterative computational algorithms are obtained for the problem of choosing feedback gains to optimize a linear control system with a quadratic cost function. The results are obtained in abstract terms and cover a wide range of practical problems. A design example is given
eng_Latn
778
Momentum Principal Skewness Analysis
Principal skewness analysis (PSA) has been introduced to the remote sensing community recently; it is equivalent to fast independent component analysis (FastICA) when skewness is considered as a non-Gaussian index. However, similar to FastICA, PSA also has a nonconvergence problem in searching for optimal projection directions. In this letter, we propose a new iteration strategy to alleviate PSA's nonconvergence problem, and we name this new version of PSA momentum PSA (MPSA). MPSA still adopts the same fixed-point algorithm as PSA does. Different from PSA, the $(k+1)$th result in the iteration process of MPSA not only depends on the $k$th iteration result but is also related to the $(k-1)$th iteration. Experiments conducted for both simulated data and a real-world hyperspectral image demonstrate that MPSA has an obvious advantage over PSA in convergence performance and computational speed.
We prove that on an atomless probability space, the worst-case mean squared error of the Monte-Carlo estimator is minimal if the random points are chosen independently.
pol_Latn
779
A Model for Structural Vulnerability Analysis of Shipboard Power System Based on Complex Network Theory
A structural vulnerability analysis method for shipboard power systems based on complex network theory is proposed in this paper. The shipboard power system is modeled as a complex network, and the topological characteristics of the shipboard power grid are analyzed. Two structural vulnerability criteria, the electrical betweenness of a node and of a cabin respectively, are defined to evaluate the importance of network elements and of element groups within a cabin, and to analyze structural vulnerability. The results of attack tests on a typical shipboard power system show that the structural vulnerability of the shipboard power system is closely related to its topological structure and that the cabin betweenness can effectively identify the structural vulnerability of the shipboard power system.
Surface reconstruction from point cloud is of great practical importance in computer graphics. Existing methods often realize reconstruction via a few phases with respective goals, whose integration may not give an optimal solution. In this paper, to avoid the inherent limitations of multi-phase processing in the prior art, we propose a unified framework that treats geometry and connectivity construction as one joint optimization problem. The framework is based on dictionary learning in which the dictionary consists of the vertices of the reconstructed triangular mesh and the sparse coding matrix encodes the connectivity of the mesh. The dictionary learning is formulated as a constrained ℓ2,q-optimization (0
kor_Hang
780
Electroreductive Deposition of Au Clusters Modified with an Anthraquinone Derivative
Anthraquinone derivative-modified Au clusters prepared by a substitution reaction of octyl thiolate-covered Au clusters with 1-(1,8-dithiaoctyl)anthracene-9,10-dione undergo a two-step one-electron reduction in aprotic solvents, resulting in the formation of an electroactive thin Au cluster film. Composite film formation is achieved by a combination of oxidative and reductive electrodeposition of biferrocene derivative-modified and anthraquinone derivative-modified Au clusters, respectively.
Jointly Gaussian memoryless sources are observed at N distinct terminals. The goal is to efficiently encode the observations in a distributed fashion so as to enable reconstruction of any one of the observations, say the first one, at the decoder subject to a quadratic fidelity criterion. Our main result is a precise characterization of the rate-distortion region when the covariance matrix of the sources satisfies a 'tree-structure' condition. In this situation, a natural analog-digital separation scheme optimally trades off the distributed quantization rate tuples and the distortion in the reconstruction: each encoder consists of a point-to-point Gaussian vector quantizer followed by a Slepian-Wolf binning encoder. We also provide a partial converse that suggests that the tree-structure condition is fundamental.
eng_Latn
781
Naphthoquinone derivatives as tau aggregation inhibitors for the treatment of Alzheimer's and related neurodegenerative disorders
Naphthoquinone-type compounds are provided that can be used to modulate the aggregation of proteins (e.g., tau) associated with neurodegenerative diseases (e.g., Alzheimer's disease). Structure-function evaluations of oxidized and reduced forms of naphthoquinone-type compounds, such as menadione and related compounds, are disclosed. The invention further provides methods for the treatment and prevention of neurodegenerative diseases and/or clinical dementia based on the compounds of the invention.
For very large data sets, when the classification problem is dimensionally large, it is known that present neural network weight training algorithms take a very long time. It thus becomes of paramount importance to address the issue of weight training in multilayer continuous feed-forward networks using back-propagation. A novel pseudo-inverse based methodology for the weight training of the neural net is proposed in this paper and implemented. It is also found that the steepness of the thresholding function for the neurons in the net is a particular contributing factor to the convergence and stability of the net. In order to study this effect, a Q-learning scheme for the "lambda training" is proposed and implemented in parallel with the net. The algorithm is tested quite extensively on a variety of problems, such as the XOR problem and the encoder/decoder problem, and on examination of the results it is found that the algorithm does quite well in comparison to most standard algorithms.
eng_Latn
782
Polynomial-Time Random Generation of Uniform Real Matrices in the Spectral Norm Ball
Abstract This paper follows the line of research aimed to develop randomized algorithms for probabilistic analysis and design of control systems. In particular, a result for the generation of real matrix samples uniformly distributed in the spectral norm ball is presented. To this end, the distribution of the singular values of random uniform matrices is first studied. The sample generation is then based on the conditional densities method. A key result of this paper is the computation in closed form of the marginal densities of the singular values, leading to a polynomial-time method for the real matrix sample generation problem.
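For contrast with the polynomial-time conditional-densities generator described above, a naive but exact baseline is rejection sampling from the entrywise box, which works because every real matrix with spectral norm at most r has all entries in [-r, r]. The sketch below is my own illustration; its acceptance rate collapses as the dimension grows, which is precisely the limitation the paper's method avoids.

```python
import numpy as np

def uniform_spectral_ball_rejection(n, m=None, radius=1.0, seed=0, max_tries=100000):
    """Naive rejection sampler for a real matrix uniform in the spectral norm ball.

    The spectral-norm ball of radius r is contained in the entrywise box
    [-r, r]^{n x m}, so sampling uniformly from the box and rejecting whenever
    ||A||_2 > r yields the exact uniform distribution on the ball.
    Only practical for small n*m.
    """
    m = n if m is None else m
    rng = np.random.default_rng(seed)
    for _ in range(max_tries):
        A = rng.uniform(-radius, radius, size=(n, m))
        if np.linalg.norm(A, 2) <= radius:       # largest singular value
            return A
    raise RuntimeError("acceptance rate too low; use a smarter sampler")

# Toy usage: draw a 2x2 sample and check its spectral norm.
print(np.linalg.norm(uniform_spectral_ball_rejection(2), 2))
```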
This work investigates the global mosaic pattern and spatial entropy for one-dimensional cellular neural network (CNN). A novel method is developed to partition the parameter space into finitely many regions. The CNNs, with parameters in each region, have the same global pattern. An algorithm is also presented to evaluate the spatial entropy.
eng_Latn
783
The iodine number and the unsaturation number of fats
In order to determine the degree of unsaturation of fats and to study the kinetics of their hydrogenation, it is proposed to use instead of the iodine number an index which is called the unsaturation number and shows the total number of double bonds in 100 molecules of fatty acids.
We present an incremental approach to 2-norm estimation for triangular matrices. Our investigation covers both dense and sparse matrices which can arise for example from a QR, a Cholesky or a LU factorization. If the explicit inverse of a triangular factor is available, as in the case of an implicit version of the LU factorization, we can relate our results to incremental condition estimation (ICE). Incremental norm estimation (INE) extends directly from the dense to the sparse case without needing the modifications that are necessary for the sparse version of ICE. INE can be applied to complement ICE, since the product of the two estimates gives an estimate for the matrix condition number. Furthermore, when applied to matrix inverses, INE can be used as the basis of a rank-revealing factorization.
eng_Latn
784
Modelling and analysis of a tricopter
Unmanned aerial vehicles (UAVs) can generally be defined as a "device used or intended to be used for flight in the air that has no on-board pilot". A tricopter is one such UAV. Here we present new results for the kinematic and dynamic analysis of a tricopter mini-rotorcraft. The orientation and control of the tricopter according to the parametric equations are also shown. The transformation of all parameters from one coordinate frame to another, including the body frame of reference and the earth frame of reference, is carried out; hence the mathematical modelling of the tricopter is completed. The dependence of position on the angular speed of the motors is shown by plotting three-dimensional position-versus-time graphs in MATLAB for different cases. A well-dimensioned and rendered 3D CAD model of the intended tricopter is designed in CATIA and further rendered in Autodesk Showcase.
Surface reconstruction from point cloud is of great practical importance in computer graphics. Existing methods often realize reconstruction via a few phases with respective goals, whose integration may not give an optimal solution. In this paper, to avoid the inherent limitations of multi-phase processing in the prior art, we propose a unified framework that treats geometry and connectivity construction as one joint optimization problem. The framework is based on dictionary learning in which the dictionary consists of the vertices of the reconstructed triangular mesh and the sparse coding matrix encodes the connectivity of the mesh. The dictionary learning is formulated as a constrained ℓ2,q-optimization (0
eng_Latn
785
On eigenstructure assignment by gain output feedback
In this paper, the eigenstructure assignment of linear multivariable control systems is studied from a geometric point of view. For the class of systems in which the number of outputs plus the number of inputs exceeds the number of states, genericity properties relative to this problem are derived. It is shown, without any assumption on the genericity of the system, that the pole assignment can be carried out by choosing some closed-loop eigenvectors almost freely. The crucial point is that all the expected degrees of freedom in pole assignment are so described without redundancy, which fully justifies the practical interest of such techniques. Despite the technicality required for the derivation of intermediate results, the main result, which is an eigenstructure assignment algorithm, is very easy to implement because it is only based on the computation of certain sums and intersections of characteristic subspaces. Furthermore, it is shown how the basic tools developed here can be used to tackle the prob...
Matrix concentration inequalities have attracted much attention in diverse applications such as linear algebra, statistical estimation, combinatorial optimization, etc. In this paper, we present new Bernstein concentration inequalities depending only on the first moments of random matrices, whereas previous Bernstein inequalities are heavily relevant to the first and second moments. Based on those results, we analyze the empirical risk minimization in the presence of label noise. We find that many popular losses used in risk minimization can be decomposed into two parts, where the first part won't be affected and only the second part will be affected by noisy labels. We show that the influence of noisy labels on the second part can be reduced by our proposed LICS (Labeled Instance Centroid Smoothing) approach. The effectiveness of the LICS algorithm is justified both theoretically and empirically.
eng_Latn
786
A novel numerical method to determine the algebraic multiplicity of nonlinear eigenvalues
We generalize the algebraic multiplicity of the eigenvalues of nonlinear eigenvalue problems (NEPs) to the rational form and give the extension of the argument principle. In addition, we propose a novel numerical method to determine the algebraic multiplicity of the eigenvalues of the NEPs in a given region by the contour integral method. Finally, some numerical experiments are reported to illustrate the effectiveness of our method.
Abstract This paper is the first of a two-part series that reviews and critiques several identification algorithms for fuzzy relational matrices. Part 1 reviews and evaluates algorithms that do not optimize or minimize a specified performance criterion [3,9,20,24]. It complements and extends a recent comparative identification analysis by Postlethwaite [17]. Part 2 [1] evaluates algorithms that optimize or minimize a specified performance criterion [6,8,23,26]. The relational matrix, learned by each algorithm from the Box–Jenkins gas furnace data [2], is compared for effectiveness of the prediction based on minimum distance from the actual values. A new, non-optimized identification algorithm with an on-line formulation that guarantees the completeness of the relational matrix, if sufficient learning has taken place, is also presented. Results show that the proposed new algorithm ranks as the best among the non-optimized algorithms, with prediction results very close to those of the optimization methods of Part 2.
eng_Latn
787
FindMinimum, NMinimize, etc. with external process
What are the most common pitfalls awaiting new users?
Matrix doesn't shrink when put in fraction.
eng_Latn
788
Global Maximizer for log-likelihood
Negative logLikelihood Kalman filter
Global auth is dead! Long live universal login
kor_Hang
789
Fully Parallel Stochastic LDPC Decoders
Factor Graphs and the Sum-Product Algorithm
Foundation review : The future of antibodies as cancer drugs
eng_Latn
790
Greedy Algorithms for Cone Constrained Optimization with Convergence Guarantees
Algorithms for Non-negative Matrix Factorization
Preservation and collection of biological evidence.
eng_Latn
791
Greed is good: algorithmic results for sparse approximation
A generalized uncertainty principle and sparse representation in pairs of bases
Entropy-Based Algorithms for Best Basis Selection
eng_Latn
792
Fast Eigenspace Approximation using Random Signals
Visualizing Large-scale and High-dimensional Data
Long-term potentiation and memory.
eng_Latn
793
Robust PCA via Nonconvex Rank Approximation
Robust principal component analysis?
The essence of three-phase AC/AC converter systems
kor_Hang
794
Global Convergence of ADMM in Nonconvex Nonsmooth Optimization
A dual algorithm for the solution of nonlinear variational problems via finite element approximation
Model-Based Regression Testing: Process, Challenges and Approaches
eng_Latn
795
A Unified Alternating Direction Method of Multipliers by Majorization Minimization
A dual algorithm for the solution of nonlinear variational problems via finite element approximation
Axial Plane Optical Microscopy
eng_Latn
796
Online alternating direction method.
Model selection and estimation in regression with grouped variables
A new alternating minimization algorithm for total variation image reconstruction
eng_Latn
797
A Blind Source Separation Technique Using Second-Order Statistics
Matrix analysis
Leveraging the Exact Likelihood of Deep Latent Variable Models
kor_Hang
798
Enhanced Low-Rank Matrix Approximation
Proximal splitting methods in signal processing.
Stiff person syndrome, startle and other immune-mediated movement disorders – new insights
eng_Latn
799