Dataset schema (column: type, observed range):
query: string, lengths 1 to 13.4k characters
pos: string, lengths 1 to 61k characters
neg: string, lengths 1 to 63.9k characters
query_lang: string, 147 distinct values
__index_level_0__: int64, values 0 to 3.11M
Feature Selection by Joint Graph Sparse Coding
Online Learning for Matrix Factorization and Sparse Coding
A Randomized Clinical Trial of Eye Movement Desensitization and Reprocessing (EMDR), Fluoxetine, and Pill Placebo in the Treatment of Posttraumatic Stress Disorder: Treatment Effects and Long-Term Maintenance
eng_Latn
1,000
Robust semi-supervised nonnegative matrix factorization
Locality Preserving Projections
Targeting adenosine for cancer immunotherapy
eng_Latn
1,001
A Comparative Framework for Preconditioned Lasso Algorithms
The Adaptive Lasso and Its Oracle Properties
Tracking curved regularized optimization solution paths
eng_Latn
1,002
Nonlinear Extensions of Reconstruction ICA
On optimization methods for deep learning
Pegasos: primal estimated sub-gradient solver for SVM
eng_Latn
1,003
Manifold Regularized Discriminative Nonnegative Matrix Factorization With Fast Gradient Descent
Parameterisation of a stochastic model for human face identification
Functional foods against metabolic syndrome (obesity, diabetes, hypertension and dyslipidemia) and cardiovascular disease
eng_Latn
1,004
When is a Convolutional Filter Easy To Learn?
How to Escape Saddle Points Efficiently
Global Optimality of Local Search for Low Rank Matrix Recovery
eng_Latn
1,005
Phase Transitions for High Dimensional Clustering and Related Problems
A direct formulation for sparse PCA using semidefinite programming
Expokit: a software package for computing matrix exponentials
eng_Latn
1,006
Understanding Alternating Minimization for Matrix Completion
A Simple Algorithm for Nuclear Norm Regularized Problems
Expertise modeling for matching papers with reviewers
eng_Latn
1,007
Stable and Efficient Representation Learning with Nonnegativity Constraints
Non-negative matrix factorization with sparseness constraints
Unifying nearest neighbors collaborative filtering
eng_Latn
1,008
An Asynchronous Parallel Stochastic Coordinate Descent Algorithm
Regression Shrinkage and Selection Via the Lasso
Adaptable Game Experience Based on Player's Performance and EEG
eng_Latn
1,009
Large-Scale Online Feature Selection for Ultra-High Dimensional Sparse Data
On Similarity Preserving Feature Selection
Teaching Creativity and Inventive Problem Solving in Science
yue_Hant
1,010
Iterative thresholding based image segmentation using 2D improved Otsu algorithm
Otsu Method and K-means
Matrix Tri-Factorization with Manifold Regularizations for Zero-Shot Learning
eng_Latn
1,011
Interpreting Latent Variables in Factor Models via Convex Optimization
High-dimensional covariance estimation by minimizing $\ell_1$-penalized log-determinant divergence
Art Therapy and Mindfulness With Survivors of Political Violence: A Qualitative Study
eng_Latn
1,012
Robust PCA via Nonconvex Rank Approximation
Bayesian Robust Principal Component Analysis
Data Driven Analysis on the Effect of Online Judge System
kor_Hang
1,013
Algorithms and applications for approximate nonnegative matrix factorization
Algorithms, Initializations, and Convergence for the Nonnegative Matrix Factorization
Synaptic integrative mechanisms for spatial cognition
eng_Latn
1,014
Sublinear Time Low-Rank Approximation of Positive Semidefinite Matrices
SPSD Matrix Approximation via Column Selection: Theories, Algorithms, and Extensions
Scene Classification With Recurrent Attention of VHR Remote Sensing Images
eng_Latn
1,015
Optimizing the performance of sparse matrix-vector multiplication
Concept Decompositions for Large Sparse Text Data Using Clustering
An Interactive Artificial Ant Approach to Non-photorealistic Rendering
eng_Latn
1,016
F-SVM: Combination of Feature Transformation and SVM Learning via Convex Relaxation
Discriminative decorrelation for clustering and classification
Statistical learning theory
eng_Latn
1,017
Optimal Statistical and Computational Rates for One Bit Matrix Completion.
Factorization meets the neighborhood: a multifaceted collaborative filtering model
Code3: A System for End-to-End Programming of Mobile Manipulator Robots for Novices and Experts
eng_Latn
1,018
Sparse nonnegative matrix approximation: new formulations and algorithms
Regression Shrinkage and Selection Via the Lasso
Nonnegative Matrix Factorization for Spectral Data Analysis
eng_Latn
1,019
Kernel Multivariate Analysis Framework for Supervised Subspace Learning: A Tutorial on Linear and Kernel Multivariate Methods
Nonlinear Component Analysis as a Kernel Eigenvalue Problem
Estimation of primary quantization matrix in double compressed jpeg images
eng_Latn
1,020
Mutual information for symmetric rank-one matrix estimation: A proof of the replica formula
A direct formulation for sparse PCA using semidefinite programming
The role of positive and negative emotions in life-satisfaction judgment across nations
eng_Latn
1,021
An Introduction to Dimensionality Reduction Using Matlab
A Global Geometric Framework for Nonlinear Dimensionality Reduction
Organizational Ambidexterity: Past, Present and Future
yue_Hant
1,022
Depth with nonlinearity creates no bad local minima in ResNets
Identifying and attacking the saddle point problem in high-dimensional non-convex optimization
Authorship analysis: Identifying the author of a program
eng_Latn
1,023
A pilot study of the Video Observations Aarts and Aarts (VOAA): a new software program to measure motor behaviour in children with cerebral palsy
A new computer software program to score video observations, Video Observations Aarts and Aarts (VOAA), was developed to evaluate paediatric occupational therapy interventions. The VOAA is an observation tool that assesses the frequency, duration and quality of arm/hand use in children, in particular those with cerebral palsy. Reliability studies show that the first module, designed to evaluate a forced-use programme, has an excellent content validity index (0.93) and good intra- and inter-observer reliability (Cohen's kappas ranging from 0.62 to 0.85 for the three activities tested). With the built-in statistical package, paediatric occupational therapy departments can conduct therapeutic evaluations with children with impairments in the upper extremities. Further research is recommended to apply the VOAA in clinical studies in paediatric occupational therapy. Copyright © 2007 John Wiley & Sons, Ltd.
ABSTRACT: This paper deals with the functional relation between multivariate methods of canonical correlation analysis (CCA), partial least squares (PLS) and also their kernelized versions. Both methods are determined by the solution of the respective optimization problem, and result in algorithms using spectral or singular decomposition theories. The solution of the parameterized optimization problem, where the boundary points of a parameter give exactly the results of the CCA (resp. PLS) method, leads to the vector functions (paths) of eigenvalues and eigenvectors or singular values and singular vectors. Specifically, in this paper, the functional relation means the description of classes into which the given paths belong. It is shown that if input data are analytical (resp. smooth) functions of a parameter, then the vector functions are also analytical (resp. smooth). Those approaches are studied on three practical examples of European tourism data.
eng_Latn
1,024
How do I know the best model from lasso regression fitting/plot?
How can I choose the best model from LARS and LASSO regression?
Why is statistics calculated from raw data more accurate than statistic calculated from a frequency table?
eng_Latn
1,025
I am working on a simple example of how to numerically solve the time-independent Schrodinger Equation for the infinite square well. I've used the Euler Method to find values of the wave function, $\psi (x)$, but now I've just realized something - I have no clue how to determine the energies from this! I know that analytically this comes down to applying the Hamiltonian to the wavefunction, but how do I find the energies numerically?? I can provide more info if needed.
This is my first question on here. I'm trying to numerically solve the Schrödinger equation for the and find the energy eigenvalues and eigenfunctions but I am confused about how exactly this should be done. I've solved some initial value problems in the past using iterative methods such as Runge–Kutta. I've read that is the way to solve Schrödinger's equation but Wikipedia also describes it as an iterative method for initial value problems. How do I use it to solve an eigenvalue problem? This confuses me for the following reasons: Wouldn't iteratively solving the DE require knowledge of the energy eigenvalues to use as input to the calculation? I don't know the eigenvalues yet; they're precisely what I'm trying to calculate. If I did that, wouldn't I simply get a unique solution, instead of a family of eigenfunctions and eigenvalues? I've seen some mention of "tridiagonal matrices" being generated somehow, but am not sure what the elements of that matrix would be or how that applies to the problem. Leandro M. mentioned that "the discretization defines a finite dimensional (matrix) eigenvalue problem". This seems like the correct road I should be going down, but I haven't been able to find anything that explicitly explains this process or how the matrix is constructed. If this is the correct procedure, how is such a matrix constructed?
Please use UK pre-uni methods only (at least at first). Thank you.
eng_Latn
1,026
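For the infinite square well discussed above, one common route is to discretize the second derivative with central differences, which turns the time-independent Schrödinger equation into a symmetric tridiagonal matrix eigenvalue problem. The following is a rough numpy sketch under illustrative assumptions (hbar = m = 1, well width L = 1, grid size chosen arbitrarily), not the questioner's own code:

    import numpy as np

    # Infinite square well on [0, L] with psi(0) = psi(L) = 0; units with hbar = m = 1 (assumed).
    L = 1.0
    N = 500                                # number of interior grid points (illustrative)
    dx = L / (N + 1)

    # -1/2 psi'' = E psi; central differences give a symmetric tridiagonal Hamiltonian.
    main = np.full(N, 1.0 / dx**2)         # diagonal entries of -1/2 d^2/dx^2
    off = np.full(N - 1, -0.5 / dx**2)     # off-diagonal entries
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

    E, psi = np.linalg.eigh(H)             # eigenvalues = energies, columns of psi = eigenfunctions
    print(E[:3])                           # compare with n^2 * pi^2 / 2 for n = 1, 2, 3

The lowest eigenvalues should approach the analytic values $n^2\pi^2/2$ as the grid is refined; with a potential added to the diagonal, the same construction handles other wells.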
Let $M$ be an $m \times n$ matrix; then the SVD of $M$ is $U \Sigma W^*$. How does this generalize the eigendecomposition, since eigendecomposition is possible only for $n \times n$ matrices, including those that are non-symmetric?
Principal component analysis (PCA) is usually explained via an eigen-decomposition of the covariance matrix. However, it can also be performed via singular value decomposition (SVD) of the data matrix $\mathbf X$. How does it work? What is the connection between these two approaches? What is the relationship between SVD and PCA? Or in other words, how to use SVD of the data matrix to perform dimensionality reduction?
This question is about an efficient way to compute principal components. Many texts on linear PCA advocate using singular-value decomposition of the casewise data. That is, if we have data $\bf X$ and want to replace the variables (its columns) by principal components, we do SVD: $\bf X=USV'$, singular values (sq. roots of the eigenvalues) occupying the main diagonal of $\bf S$, right eigenvectors $\bf V$ are the orthogonal rotation matrix of axes-variables into axes-components, left eigenvectors $\bf U$ are like $\bf V$, only for cases. We can then compute component values as $ \bf C=XV=US$. Another way to do PCA of variables is via decomposition of $\bf R=X'X$ square matrix (i.e. $\bf R$ correlations or covariances etc., between the variables ). The decomposition may be eigen-decomposition or singular-value decomposition: with square symmetric positive semidefinite matrix, they will give the same result $\bf R=VLV'$ with eigenvalues as the diagonal of $\bf L$, and $\bf V$ as described earlier. Component values will be $\bf C=XV$. Now, my question: if data $\bf X$ is a big matrix, and number of cases is (which is often a case) much greater than the number of variables, then way (1) is expected to be much slower than way (2), because way (1) applies a quite expensive algorithm (such as SVD) to a big matrix; it computes and stores huge matrix $\bf U$ which we really doesn't need in our case (the PCA of variables). If so, then why so many texbooks seem to advocate or just mention only way (1)? Maybe it is efficient and I'm missing something?
eng_Latn
1,027
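As a numerical illustration of the two routes discussed above (SVD of the centered data versus eigendecomposition of $\bf X'X$), here is a rough numpy sketch on synthetic data; both yield the same principal component scores up to column signs:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))
    X = X - X.mean(axis=0)                  # center the columns

    # Way (1): SVD of the data matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    scores_svd = U * s                      # identical to X @ Vt.T

    # Way (2): eigendecomposition of X'X (proportional to the covariance matrix)
    evals, V = np.linalg.eigh(X.T @ X)
    order = np.argsort(evals)[::-1]         # eigh returns eigenvalues in ascending order
    V = V[:, order]
    scores_eig = X @ V

    # The two sets of component scores agree up to the sign of each column
    print(np.allclose(np.abs(scores_svd), np.abs(scores_eig)))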
I'm doing a data analysis on data with more than 100 dimensions. After that, different ML algorithms like NN are applied to it. When I do a PCA in the first place to reduce dimensionality to something like 3-10, I persistently get better results (as in fewer mispredictions) than without it. My thought was that PCA should just speed up NN, etc., but not make them better? Is this improvement realistic or did I make a mistake with my PCA? This is how I'm doing it concretely: Data; % training input Test_Data; % test input pca_size = 3; % pca size %Scaling and Centering of Data Scaled = (Data - mean(Data))./std(Data); coeff = pca(Scaled); Data_Reduced = Data * coeff(:, 1:pca_size); Test_Data_Reduced = Test_Data * coeff(:, 1:pca_size);
Suppose I am running a regression $Y \sim X$. Why by selecting top $k$ principle components of $X$, does the model retain its predictive power on $Y$? I understand that from dimensionality-reduction/feature-selection point of view, if $v_1, v_2, ... v_k$ are the eigenvectors of covariance matrix of $X$ with top $k$ eigenvalues, then $Xv_1, Xv_2 ... Xv_k$ are top $k$ principal components with maximum variances. We can thereby reduce the number of features to $k$ and retain most of the predictive power, as I understand it. But why do top $k$ components retain the predictive power on $Y$? If we talk about a general OLS $Y \sim Z$, there is no reason to suggest that if feature $Z_i$ has maximum variance, then $Z_i$ has the most predictive power on $Y$. Update after seeing comments: I guess I have seen tons of examples of using PCA for dimensionality reduction. I have been assuming that means the dimensions we are left with have the most predictive power. Otherwise what's the point of dimensionality reduction?
I'm trying to install in VirtualBox but the installation get stuck at Do you want to install boot loader GRUB Some background: The VM was created from the Ubuntu 32-bit (x86) VirtualBox template The VM has 1 core + 3096 MB RAM Video memory: 32 MB PAE/NX enabled Hardware virtualization: both VTx and nested pagination enabled Graphics acceleration: disabled (both 2D and 3D) Storage: 64 GB attached to IDE virtual controller (was SATA before: makes no difference) I've already tried all the "standard" VirtualBox procedure (PAE yes/no; SATA/IDE; no soundcard; no USB; graphics accel. yes/no). I also tried run the Live CD in VESA mode: like this it starts, but I'd still prefer to install it.
eng_Latn
1,028
I have a dataset with approximately 4000 rows and 150 columns. I want to predict the values of a single column (= target). The data is on cities (demography, social, economic, ... indicators). A lot of these are highly correlated, so I want to do a PCA - Principal Component Analysis. The problem is, that ~40% of the values are missing. My current approach is: Remove target indicator and do PCA with mean/median imputation of missing values. Select x principal components (PC). Append target indicator to these PC. Use PC as predictors for the target variable and try common regression techniques, e.g. knn, linear regression, random forest etc. With this approach, I'm getting quite good results. My metric is RMSE% - root mean squared relative prediction error. I tried this for all columns in the dataset, the RMSE% is between 0.5% and 8% (depending on the column). These errors are for values I actually know, NOT imputed values. So, here's my problem: I'm not sure how much my data is distorted by replacing the missing values with the column mean/median. Is there any other way of imputing the missing values with minimal effect on the PCA results?
I used the prcomp() function to perform a PCA (principal component analysis) in R. However, there's a bug in that function such that the na.action parameter does not work. ; two users there offered two different ways of dealing with NA values. However, the problem with both solutions is that when there is an NA value, that row is dropped and not considered in the PCA analysis. My real data set is a matrix of 100 x 100 and I do not want to lose a whole row just because it contains a single NA value. The following example shows that the prcomp() function does not return any principal components for row 5 as it contains a NA value. d <- data.frame(V1 = sample(1:100, 10), V2 = sample(1:100, 10), V3 = sample(1:100, 10)) result <- prcomp(d, center = TRUE, scale = TRUE, na.action = na.omit) result$x # $ d$V1[5] <- NA # $ result <- prcomp(~V1+V2, data=d, center = TRUE, scale = TRUE, na.action = na.omit) result$x I was wondering if I can set the NA values to a specific numerical value when center and scale are set to TRUE so that the prcomp() function works and does not remove rows containing NA's, but also does not influence the outcome of the PCA analysis. I thought about replacing NA values with the median value across a single column, or with a value very close to 0. However, I am not sure how that influences the PCA analysis. Can anybody think of a good way of solving that problem?
Anyone on the Android Jelly Bean OS and not able to buy Apps/Books through wallet? I have just bought the Samsung GalaxyNote 800 and am facing this issue. The error is (RPC:S-7:AEC-0). What is the solution?
eng_Latn
1,029
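For the missing-data setting above, a pipeline that couples imputation, scaling and PCA keeps the steps consistent. Below is a rough scikit-learn sketch (Python rather than the R used in the question, with synthetic data and an arbitrary missingness pattern); IterativeImputer appears only as a possibly less distorting alternative to plain mean/median imputation, not as the method the questioner used:

    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import SimpleImputer, IterativeImputer
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 30))
    X[rng.random(X.shape) < 0.4] = np.nan          # roughly 40% missing, as in the setting above

    # Baseline: median imputation -> standardize -> PCA
    baseline = make_pipeline(SimpleImputer(strategy="median"),
                             StandardScaler(), PCA(n_components=10))
    Z_baseline = baseline.fit_transform(X)

    # Model-based imputation tends to distort the covariance structure less than a constant fill
    iterative = make_pipeline(IterativeImputer(max_iter=10, random_state=0),
                              StandardScaler(), PCA(n_components=10))
    Z_iterative = iterative.fit_transform(X)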
First of all, I would like to note that I have read similar topics in CrossValidated but I am not fully satisfied. I have a dataset which consists of an $N\times M$ binary matrix. 1 means that an action is performed and 0 that it is not. I apply PCA to the dataset and surprisingly get very good results, especially when I reduce it to only two dimensions. I am looking for the intuition behind performing PCA on such a dataset (i.e. where each attribute contains categorical data; you can give whatever example you think is more understandable) and whether a more appropriate technique can be applied. I am working with MATLAB and I need the data in a clustering friendly form.
In today's pattern recognition class my professor talked about PCA, eigenvectors and eigenvalues. I understood the mathematics of it. If I'm asked to find eigenvalues etc. I'll do it correctly like a machine. But I didn't understand it. I didn't get the purpose of it. I didn't get the feel of it. I strongly believe in the following quote: You do not really understand something unless you can explain it to your grandmother. -- Albert Einstein Well, I can't explain these concepts to a layman or grandma. Why PCA, eigenvectors & eigenvalues? What was the need for these concepts? How would you explain these to a layman?
I have a dataset that has both continuous and categorical data. I am analyzing by using PCA and am wondering if it is fine to include the categorical variables as a part of the analysis. My understanding is that PCA can only be applied to continuous variables. Is that correct? If it cannot be used for categorical data, what alternatives exist for their analysis?
eng_Latn
1,030
I have a data matrix $X$ with shape $p\times n$. It might not matter but I interpret $X$ is $n$ vectors each containing $p$ features. Then I compute $Q = X X^{T} / n$. This implies that $Q$ is positive definite. I interpret $Q$ as covariance matrix of data which are columns of $X$. (Normally mean should be subtracted from $X$ before computing covariance this way. However the mean of $X$ is 0 in my case. That is why the formula for covariance is simpler in my case.) Then I compute $Q^{-1}$, the inverse of $Q$. Which should be also positive definite. I want to compute Mahalanobis distance for each column $x$ of $X$ as follows: $\sqrt{x^T Q^{-1} x}$ However for some columns I get that $x^T Q^{-1} x$ is negative. This should not be possible in theory. So I suspect that the error is caused by numerical errors caused by software. I use the Math.Net numerics library in C#. How do I avoid this problem? I tried replacing $Q$ and $Q^{-1}$ by their transpose. The result should be the same in theory but slightly differs in practice. But the difference wasn't big enough I still get negative number for the same column. What else I can do?
I have an expression for a covariance matrix $C$ in terms of the indices $i$ and $j$. In this way I can analytically calculate the elements of my covariance matrix; however, when I try to invert $C$, matlab gives a warning about the matrix being close to singular. The inversion therefore doesn't work, by which I mean that multiplying $C$ by the returned inverse does not give the identity. I have tried calculating the pseudo inverse but this also does not work. I have also tried adding a small constant along the diagonal but again the results do not work. In general I'm working with matrices of dimension 1200, but I will give a low-dimensional example matrix that has the same properties, i.e. the matrices are symmetric about the diagonal and the anti-diagonal:
19.9939 19.9954 19.9958 19.9951 19.9933 19.9905 19.9865
19.9954 19.9973 19.9981 19.9978 19.9965 19.9940 19.9905
19.9958 19.9981 19.9993 19.9995 19.9985 19.9965 19.9933
19.9951 19.9978 19.9995 20.0000 19.9995 19.9978 19.9951
19.9933 19.9965 19.9985 19.9995 19.9993 19.9981 19.9958
19.9905 19.9940 19.9965 19.9978 19.9981 19.9973 19.9954
19.9865 19.9905 19.9933 19.9951 19.9958 19.9954 19.9939
As mentioned in the title, the matrix isn't positive definite; however, the negative eigenvalues are very small, suggesting that the matrix fails to be positive definite only due to machine precision. The negative eigenvalues are $-1.4048e-14$ and $-2.4571e-15$. How can I go about inverting these matrices?
The entire site is blank right now. The header and footer are shown, but no questions.
eng_Latn
1,031
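One standard way to keep the quadratic form $x^T Q^{-1} x$ non-negative in floating point is to avoid forming $Q^{-1}$ explicitly and instead solve with a Cholesky factor, optionally adding a tiny ridge to the diagonal. A rough numpy sketch (synthetic data; the ridge value is an arbitrary assumption):

    import numpy as np

    def mahalanobis_sq(X, Q, ridge=1e-10):
        """Squared Mahalanobis distance of each column of X under covariance Q.

        Solving with a Cholesky factor instead of an explicit inverse keeps the
        quadratic form non-negative; the tiny ridge guards against near-singular Q.
        """
        L = np.linalg.cholesky(Q + ridge * np.eye(Q.shape[0]))
        Z = np.linalg.solve(L, X)           # L Z = X, so Z = L^{-1} X
        return np.sum(Z * Z, axis=0)        # x^T Q^{-1} x = ||L^{-1} x||^2

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 200))           # p = 5 features, n = 200 columns
    Q = X @ X.T / X.shape[1]
    d2 = mahalanobis_sq(X, Q)
    print(d2.min() >= 0)                    # non-negative by construction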
I'm learning about statistical learning, and in the section comparing Lasso and Ridge Regression it shows that the main difference between these two problems is the way the constraint/penalty is formulated. In Lasso, the penalty is the $\ell_1$ norm: $\lambda \sum |\beta_j|$, while in ridge regression the penalty is $\ell_2$: $\lambda \sum \beta_j^2$. Geometrically, this means that the lasso will have a constraint in the form of a diamond (in 2 dimensions), and in higher dimensions it will have vertices and edges. For ridge regression, in 2D, it is a circle, and a hypersphere in higher dimensions. My question is: the author claims that you get SPARSITY in the lasso. I do not understand why, even with the geometric picture above. And what is the clear advantage of Lasso over ridge regression? Your insights would be very valuable. I would appreciate it if your answer contained some mathematics, but more importantly, intuition. Thanks
I've been reading , and I would like to know why the Lasso provides variable selection and ridge regression doesn't. Both methods minimize the residual sum of squares and have a constraint on the possible values of the parameters $\beta$. For the Lasso, the constraint is $||\beta||_1 \le t$, whereas for ridge it is $||\beta||_2 \le t$, for some $t$. I've seen the diamond vs ellipse picture in the book and I have some intuition as for why the Lasso can hit the corners of the constrained region, which implies that one of the coefficients is set to zero. However, my intuition is rather weak, and I'm not convinced. It should be easy to see, but I don't know why this is true. So I guess I'm looking for a mathematical justification, or an intuitive explanation of why the contours of the residual sum of squares are likely to hit the corners of the $||\beta||_1$ constrained region (whereas this situation is unlikely if the constraint is $||\beta||_2$).
I recently ran across elliptic curve crypto-systems: (Brown University) (Wikipedia) (IEEE) (RSA.com) It seemed to me to be great alternative to RSA as the de-facto cryptosystems to be used in banking and financial systems and in the public key infrastructure for certificates, but is not used! If someone can explain why this is not done, it would be very helpful. A comparison between traditional RSA and an elliptic curve cryptology would be helpful. To begin with: Advantage of RSA: Well established. Advantages of elliptic curve: Shorter keys are as strong as long key for RSA (see the IEEE paper) Low on CPU consumption. Low on memory usage.
eng_Latn
1,032
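The sparsity difference discussed above is easy to see numerically. A small scikit-learn sketch on synthetic data (the penalty strengths are arbitrary choices):

    import numpy as np
    from sklearn.linear_model import Lasso, Ridge

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))
    beta_true = np.zeros(50)
    beta_true[:5] = 3.0                               # only 5 truly relevant features
    y = X @ beta_true + rng.normal(size=200)

    lasso = Lasso(alpha=0.5).fit(X, y)
    ridge = Ridge(alpha=0.5).fit(X, y)

    print("exact zeros, lasso:", np.sum(lasso.coef_ == 0.0))   # many
    print("exact zeros, ridge:", np.sum(ridge.coef_ == 0.0))   # typically none

The $\ell_1$ penalty pushes many coefficients exactly onto zero (the corners of the diamond), while the $\ell_2$ penalty only shrinks them toward zero.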
I see that in PCA the first principal component maximizes the variances amongst all the points within the data set. What exactly does this mean, what does it show and what does every other principal component thereafter tell me? I read these really nice simple explanation on PCA that helped me understand PCA as a whole: To decide how many eigenvalues/eigenvectors to keep, you should consider your reason for doing PCA in the first place. Are you doing it for reducing storage requirements, to reduce dimensionality for a classification algorithm, or for some other reason? If you don't have any strict constraints, I recommend plotting the cumulative sum of eigenvalues (assuming they are in descending order). If you divide each value by the total sum of eigenvalues prior to plotting, then your plot will show the fraction of total variance retained vs. number of eigenvalues. The plot will then provide a good indication of when you hit the point of diminishing returns (i.e., little variance is gained by retaining additional eigenvalues). However, it's still unclear to me as to really what each PC means and what PC2 now represents and PC3, etc.
In today's pattern recognition class my professor talked about PCA, eigenvectors and eigenvalues. I understood the mathematics of it. If I'm asked to find eigenvalues etc. I'll do it correctly like a machine. But I didn't understand it. I didn't get the purpose of it. I didn't get the feel of it. I strongly believe in the following quote: You do not really understand something unless you can explain it to your grandmother. -- Albert Einstein Well, I can't explain these concepts to a layman or grandma. Why PCA, eigenvectors & eigenvalues? What was the need for these concepts? How would you explain these to a layman?
The new Top-Bar does not show reputation changes from Area 51.
eng_Latn
1,033
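The cumulative-variance plot described above can be produced directly from the eigenvalues of the covariance matrix. A rough numpy/matplotlib sketch on synthetic correlated data:

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 20)) @ rng.normal(size=(20, 20))   # correlated toy data
    Xc = X - X.mean(axis=0)

    evals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]   # eigenvalues, descending
    frac = np.cumsum(evals) / evals.sum()                        # cumulative variance explained

    plt.plot(np.arange(1, len(frac) + 1), frac, marker="o")
    plt.xlabel("number of principal components")
    plt.ylabel("fraction of total variance retained")
    plt.show()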
I want to have two matrices $A$ and $B$ of a certain size, multiply them together, $C = AB$, do some operations on $C$ to make $C_2$, and then decompose the matrix into two matrices of the same size as $A$ and $B$ so that $A_2 B_2 = C_2$. Which decomposition function can I use to achieve this? EDIT: This is not a duplicate because the thread linked in the comments using LU decomposition says nothing about L and U being the same dimensions as the original matrix.
Wikipedia says "In the mathematical discipline of linear algebra, a matrix decomposition or matrix factorization is a factorization of a matrix into a product of matrices. There are many different matrix decompositions; each finds use among a particular class of problems." But in my opinion decomposition term should be used to represent breaking a matrix in different sub-matrices or some new matrices created after some operation on original matrix which if used together and passed through some algorithm(not necessarily product), shall reproduce the original matrix. Is there some different terminology to represent what I am expecting to say?
Please use UK pre-uni methods only (at least at first). Thank you.
eng_Latn
1,034
From a very general point of view, when you have a dataset $X$ and want to predict a label $y$, what is the purpose of beginning with a PCA (principal component analysis) first, and then doing the prediction itself (with logistic regression, or random forest or whatever) from both intuitive and theoretical reason? In which case can this improve the quality of prediction?
Suppose I am running a regression $Y \sim X$. Why by selecting top $k$ principle components of $X$, does the model retain its predictive power on $Y$? I understand that from dimensionality-reduction/feature-selection point of view, if $v_1, v_2, ... v_k$ are the eigenvectors of covariance matrix of $X$ with top $k$ eigenvalues, then $Xv_1, Xv_2 ... Xv_k$ are top $k$ principal components with maximum variances. We can thereby reduce the number of features to $k$ and retain most of the predictive power, as I understand it. But why do top $k$ components retain the predictive power on $Y$? If we talk about a general OLS $Y \sim Z$, there is no reason to suggest that if feature $Z_i$ has maximum variance, then $Z_i$ has the most predictive power on $Y$. Update after seeing comments: I guess I have seen tons of examples of using PCA for dimensionality reduction. I have been assuming that means the dimensions we are left with have the most predictive power. Otherwise what's the point of dimensionality reduction?
I have a dataset for which I have multiple sets of binary labels. For each set of labels, I train a classifier, evaluating it by cross-validation. I want to reduce dimensionality using principal component analysis (PCA). My question is: Is it possible to do the PCA once for the whole dataset and then use the new dataset of lower dimensionality for cross-validation as described above? Or do I need to do a separate PCA for every training set (which would mean doing a separate PCA for every classifier and for every cross-validation fold)? On one hand, the PCA does not make any use of the labels. On the other hand, it does use the test data to do the transformation, so I am afraid it could bias the results. I should mention that in addition to saving me some work, doing the PCA once on the whole dataset would allow me to visualize the dataset for all label sets at once. If I have a different PCA for each label set, I would need to visualize each label set separately.
eng_Latn
1,035
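On the question of doing PCA once versus per training set: placing PCA inside a cross-validation pipeline refits it on each training fold, so the held-out fold never influences the projection. A rough scikit-learn sketch with synthetic data and arbitrary model choices:

    from sklearn.datasets import make_classification
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=500, n_features=100, random_state=0)

    # PCA inside the pipeline is refit on each training fold, so the held-out
    # fold never influences the projection (no leakage into the evaluation).
    model = make_pipeline(StandardScaler(), PCA(n_components=10),
                          LogisticRegression(max_iter=1000))
    print(cross_val_score(model, X, y, cv=5).mean())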
I'm working on a ranking problem where I want to measure the distance between a collection of query points (as a group) and each target point in my database. Each query point is part of the set of target points. I started with the Euclidean distance and cosine similarity by using the mean vector of the query points. However, the results were not satisfactory possibly because both of these measures don't take into account the variance and covariance of the query points. I stumbled across the which seems to be exactly what I want to try: Measuring how many standard deviations a point is away from the mean of the subsample. The problem is that I work in a 256-dimensional space while the number of query points is usually much lower than that. Therefore, I cannot calculate the Mahalanobis distance which relies on the covariance matrix. In order to calculate it, I need more observations n than dimensions p. 1) Is this assumption correct? The past days I've been thinking about how I could still do "better" than the Euclidean distance without having enough data to calculate the Mahalanobis distance. I thought about switching out the covariance matrix for the diagonal matrix of variances. As long as none of the variances is zero an inverse can be calculated. 2) Does this idea have a flaw? Since the 256-dimensional vectors are the result of PCA over all target points, I could use n points to create a covariance matrix over the first n-1 dimensions and then fill the rest of the needed 256-dim covariance matrix with the diagonal of variances from before. 3) Would that make any sense? 4) Any other ideas that might help me to measure a more meaningful distance between a point and a subsample of points? Please note: A very similar question has been asked but I believe my problem is still different. There, OP wanted to measure pairwise distances between points given all points. I want to measure the distance between the distribution of a subsample to all other points. I believe, the reasoning of the accepted answer does not apply to my problem since the distribution of the subsample is likely to be different.
I have an issue which I could not solve, although I tried and I got some help on R forum. I am trying to calculate Mahalanobis distances on a data.frame, where I have several hundreds of groups and several hundreds of variables. Whatever I do, I get the system is computationally singular: reciprocal condition number error. It is clear that it is singular, but is there any way to get rid of it and run Mahalanobis? Should I forget solve this using another approach? If yes, then what else to use? I have uploaded the data file to my : It is a tab delimited txt file with no headers. I was working with the R StatMatch Mahalanobis (also tried stats Mahalanobis) function. I have a deadline for this project (not a homework!), and I could always use this function, so I thought I will be able to keep the calculations short, but now I am lost. Migrated .
The new Top-Bar does not show reputation changes from Area 51.
eng_Latn
1,036
To better understand SVD, I'm trying to recreate the values for U, S, and V using straight numpy, but I can't get the same results. According to numpy's documentation for its implementation of SVD, it returns values for U, S, and V such that you can recreate your original dataset in the following way: (U*S) @ V = original_data My understanding is that the values for U are derived from the eigenvectors of $XX^T$, V from the eigenvectors of $X^TX$, and they share the same eigenvalues, and their square roots comprise the values for S. So, to recreate these on my own I'm doing the following: import numpy as np from sklearn.datasets import load_boston boston = load_boston() X = boston.data u_vals, u_vecs = np.linalg.eig(X.dot(X.T)) v_vals, v_vecs = np.linalg.eig(X.T.dot(X)) My thinking is that u_vecs[:, :13] @ np.diag(v_vals**.5) @ v_vecs.T ought to do the trick. However, this gives very different results. Even more, I've checked further, and the only difference between the values I have from numpy's eigensolver and the ones returned from its svd method are the signs involved in the eigenvectors used. However, my (perhaps naive) understanding is that this shouldn't make a difference. However, I'm clearly mistaken in something. Is it simply a case of using a different eigensolver, or is there a deeper misunderstanding that I'm not seeing?
I am trying to do SVD by hand: m<-matrix(c(1,0,1,2,1,1,1,0,0),byrow=TRUE,nrow=3) U=eigen(m%*%t(m))$vector V=eigen(t(m)%*%m)$vector D=sqrt(diag(eigen(m%*%t(m))$values)) U1=svd(m)$u V1=svd(m)$v D1=diag(svd(m)$d) U1%*%D1%*%t(V1) U%*%D%*%t(V) But the last line does not return m back. Why? It seems to has something to do with signs of these eigenvectors... Or did I misunderstand the procedure?
Please use UK pre-uni methods only (at least at first). Thank you.
eng_Latn
1,037
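On the sign question above: the columns of $U$ and $V$ are each determined only up to sign, so taking them from two independent eigendecompositions generally breaks the reconstruction. Deriving one factor from the other fixes the signs consistently. A rough numpy sketch on a generic random matrix (not the Boston data used in the question):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 13))

    # Right singular vectors and singular values from the eigendecomposition of X^T X
    evals, V = np.linalg.eigh(X.T @ X)
    order = np.argsort(evals)[::-1]
    V = V[:, order]
    s = np.sqrt(evals[order])

    # Deriving U from V as U = X V / s keeps the column signs of U and V consistent;
    # taking U independently from eig(X X^T) generally does not, and the product breaks.
    U = X @ V / s

    print(np.allclose((U * s) @ V.T, X))    # reconstruction succeeds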
Say I have 300 samples from a population containing two groups, A and B, and data for several variables. I have 150 from Group A and 150 from Group B. However, I know that Group A makes up roughly 20% of the population and group B makes up 80% and the two groups differ on the variables in question. Is there a way to weight the PCA by cases to make it more representative of the population? Would it be enough to just to do a weighted standardization?
After some searching, I find very little on the incorporation of observation weights/measurement errors into principal components analysis. What I do find tends to rely on iterative approaches to include weightings (e.g., ). My question is why is this approach necessary? Why can't we use the eigenvectors of the weighted covariance matrix?
The new Top-Bar does not show reputation changes from Area 51.
eng_Latn
1,038
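A case-weighted PCA along the lines suggested above can be built from the eigenvectors of a weighted covariance matrix. A rough numpy sketch (the 20%/80% reweighting and the data are illustrative assumptions):

    import numpy as np

    def weighted_pca(X, w, k=2):
        """PCA from the eigenvectors of a case-weighted covariance matrix (sketch)."""
        w = np.asarray(w, dtype=float)
        w = w / w.sum()
        mu = w @ X                                   # weighted mean
        Xc = X - mu
        C = (Xc * w[:, None]).T @ Xc                 # weighted covariance matrix
        evals, vecs = np.linalg.eigh(C)
        top = np.argsort(evals)[::-1][:k]
        return Xc @ vecs[:, top], vecs[:, top]       # scores, loadings

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 6))
    # Reweight the first 150 rows (group A) to 20% of the total and the rest to 80%
    w = np.where(np.arange(300) < 150, 0.2 / 150, 0.8 / 150)
    scores, loadings = weighted_pca(X, w, k=2)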
I have already posted my problem on stackoverflow; I am not sure if this might be problematic, and I am not sure if the post is shown in both communities. If so I will delete. I am trying to apply principal component analysis to reduce the dimensions of my data: 200x146, i.e. 200 observations (samples) with 146 features (dimensions), where each observation can belong to one of three classes. What I am trying to do is to visualize the data, to see how the class centroids move after adding new samples to my data. Since it's impossible to plot such high-dimensional data, I am looking for a dimension that would represent my data in almost separate class clusters. I know that PCA calculates the eigenvalues and eigenvectors, where the eigenvalues represent the variance. The higher the variance, the more the data is spread out and the better it is to visualize. The eigenvector with the highest eigenvalue is the principal component; an axis orthogonal to this component is then found by the PCA. (Did I understand the basic idea of PCA correctly?) However, I don't understand what information I get when I use the matlab function pca(). I get the coefficients, but what do they tell me and how do I proceed afterwards?
In today's pattern recognition class my professor talked about PCA, eigenvectors and eigenvalues. I understood the mathematics of it. If I'm asked to find eigenvalues etc. I'll do it correctly like a machine. But I didn't understand it. I didn't get the purpose of it. I didn't get the feel of it. I strongly believe in the following quote: You do not really understand something unless you can explain it to your grandmother. -- Albert Einstein Well, I can't explain these concepts to a layman or grandma. Why PCA, eigenvectors & eigenvalues? What was the need for these concepts? How would you explain these to a layman?
I've read a lot about PCA, including various tutorials and questions (such as , , , and ). The geometric problem that PCA is trying to optimize is clear to me: PCA tries to find the first principal component by minimizing the reconstruction (projection) error, which simultaneously maximizes the variance of the projected data. When I first read that, I immediately thought of something like linear regression; maybe you can solve it using gradient descent if needed. However, then my mind was blown when I read that the optimization problem is solved by using linear algebra and finding eigenvectors and eigenvalues. I simply do not understand how this use of linear algebra comes into play. So my question is: How can PCA turn from a geometric optimization problem to a linear algebra problem? Can someone provide an intuitive explanation? I am not looking for an answer like that says "When you solve the mathematical problem of PCA, it ends up being equivalent to finding the eigenvalues and eigenvectors of the covariance matrix." Please explain why eigenvectors come out to be the principal components and why the eigenvalues come out to be variance of the data projected onto them I am a software engineer and not a mathematician, by the way. Note: the figure above was taken and modified from .
eng_Latn
1,039
Am I right that "spectral decomposition" is for symmetric matrices and "singular value decomposition" is for non-square matrices? Any clarification would be appreciated.
I have understood how ridge regression shrinks coefficients towards zero geometrically. Moreover I know how to prove that in the special "Orthonormal Case," but I am confused how that works in the general case via "Spectral decomposition."
This question is about an efficient way to compute principal components. Many texts on linear PCA advocate using singular-value decomposition of the casewise data. That is, if we have data $\bf X$ and want to replace the variables (its columns) by principal components, we do SVD: $\bf X=USV'$, singular values (sq. roots of the eigenvalues) occupying the main diagonal of $\bf S$, right eigenvectors $\bf V$ are the orthogonal rotation matrix of axes-variables into axes-components, left eigenvectors $\bf U$ are like $\bf V$, only for cases. We can then compute component values as $ \bf C=XV=US$. Another way to do PCA of variables is via decomposition of $\bf R=X'X$ square matrix (i.e. $\bf R$ correlations or covariances etc., between the variables ). The decomposition may be eigen-decomposition or singular-value decomposition: with square symmetric positive semidefinite matrix, they will give the same result $\bf R=VLV'$ with eigenvalues as the diagonal of $\bf L$, and $\bf V$ as described earlier. Component values will be $\bf C=XV$. Now, my question: if data $\bf X$ is a big matrix, and number of cases is (which is often a case) much greater than the number of variables, then way (1) is expected to be much slower than way (2), because way (1) applies a quite expensive algorithm (such as SVD) to a big matrix; it computes and stores huge matrix $\bf U$ which we really doesn't need in our case (the PCA of variables). If so, then why so many texbooks seem to advocate or just mention only way (1)? Maybe it is efficient and I'm missing something?
eng_Latn
1,040
I know that Principal Component Analysis (PCA) is the eigenvector of the covariance matrix. It is used as a tool for dimensional reduction. What I am confused about is whether the PCA give weights to original features in order to find out which features explain the data the most or does it come up with new set of abstract features that explain the greatest variance in the data set.
In today's pattern recognition class my professor talked about PCA, eigenvectors and eigenvalues. I understood the mathematics of it. If I'm asked to find eigenvalues etc. I'll do it correctly like a machine. But I didn't understand it. I didn't get the purpose of it. I didn't get the feel of it. I strongly believe in the following quote: You do not really understand something unless you can explain it to your grandmother. -- Albert Einstein Well, I can't explain these concepts to a layman or grandma. Why PCA, eigenvectors & eigenvalues? What was the need for these concepts? How would you explain these to a layman?
What are the main differences between performing principal component analysis (PCA) on the correlation matrix and on the covariance matrix? Do they give the same results?
eng_Latn
1,041
I believe I have a problem understanding PCA: I would like to use this technique to reduce the number of features of my problem. I originally have 10,000 features and 500 samples. However, the use of PCA will limit my number of principal components to the smallest between the number of samples (columns of my data matrix) and the number of features (rows of this matrix). 100% of variance could therefore be explained by 500 components. But 500 components is far smaller than 10,000 features... How can all the variance be explained by less than the number of samples (which has nothing to do with the number of features)?
In PCA, when the number of dimensions $d$ is greater than (or even equal to) the number of samples $N$, why is it that you will have at most $N-1$ non-zero eigenvectors? In other words, the rank of the covariance matrix amongst the $d\ge N$ dimensions is $N-1$. Example: Your samples are vectorized images, which are of dimension $d = 640\times480 = 307\,200$, but you only have $N=10$ images.
I want to make it so I always have 50 entities in an area. So when 1 dies/despawns/leaves the area I want it to detect that and summon in a new entity, and yes I do want them to die/despawn/leave the area so preventing those things is not an option.
eng_Latn
1,042
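A quick numerical check of the $N-1$ bound discussed above (numpy, arbitrary sizes):

    import numpy as np

    rng = np.random.default_rng(0)
    N, d = 10, 300                         # far fewer samples than features
    X = rng.normal(size=(N, d))
    Xc = X - X.mean(axis=0)                # centering removes one more degree of freedom

    C = Xc.T @ Xc / (N - 1)                # d x d covariance matrix
    print(np.linalg.matrix_rank(C))        # prints 9, i.e. N - 1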
I would like to execute multidimensional scaling () based on a matrix of Pearson correlation coefficients. The function takes a dissimilarity matrix as an input and I therefore need to convert my similarity matrix into a dissimilarity/distance matrix. What is the correct way of doing so? According to answer, a similarity matrix (S) can be converted to an euclidean distance matrix (d) via: d = sqrt(2(1-S)) It also notes that on some occasions, the factor 2 can be omitted: d = sqrt(1-S) According to paper (eq. 3.4) and paper (eq. 3), the conversion should be done as follows: d = 1-S Which of those answers is correct, given that my similarity matrix has values between [-1,1] and the diagonal elements are 1 (it is symmetric)?
In Random forest algorithm, Breiman (author) constructs similarity matrix as follows: Send all learning examples down each tree in the forest If two examples land in the same leaf increment corresponding element in similarity matrix by 1 Normalize the matrix with number of trees He says: The proximities between cases n and k form a matrix {prox(n,k)}. From their definition, it is easy to show that this matrix is symmetric, positive definite and bounded above by 1, with the diagonal elements equal to 1. It follows that the values 1-prox(n,k) are squared distances in a Euclidean space of dimension not greater than the number of cases. In his implementation, he uses sqrt(1-prox), where prox is a similarity matrix, to convert it to distance matrix. I guess it has something to do with the "sqaured distances in a Euclidean space"-quoted above. Can somebody shine a little light on why it follows that 1-prox are squared distances in a Euclidean space and why he uses squared root to get distance matrix?
$M$ = mass of the Sun $m$ = mass of the Earth $r$ = distance between the Earth and the Sun The sun is converting mass into energy by nuclear fusion. $$F = \frac{GMm}{r^2} = \frac{mv^2}{r} \rightarrow r = \frac{GM}{v^2}$$ $$\Delta E = \Delta M c^2 = (M_{t} - M_{t+\Delta t}) c^2 \rightarrow \Delta M = \Delta E / c^2$$ $$\rightarrow \frac{\Delta r}{\Delta t} = \frac{G}{v^2 c^2}.\frac{\Delta E}{\Delta t}$$ Sun radiates $3.9 × 10^{26}~\mathrm W = \Delta E/\Delta t$ Velocity of the earth $v = 29.8 \mathrm{km/s}$ There is nothing that is stopping the earth from moving with the same velocity so for centripetal force to balance gravitational force $r$ must change. Is $r$ increasing? ($\Delta r/ \Delta t = 3.26070717 × 10^{-10} \mathrm{m/s} $)
eng_Latn
1,043
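A rough Python sketch of the $d = \sqrt{2(1-S)}$ conversion followed by metric MDS (scikit-learn rather than the R function referred to above; the data are synthetic):

    import numpy as np
    from sklearn.manifold import MDS

    rng = np.random.default_rng(0)
    data = rng.normal(size=(100, 12))
    S = np.corrcoef(data, rowvar=False)                 # 12 x 12 correlation (similarity) matrix

    D = np.sqrt(np.clip(2.0 * (1.0 - S), 0.0, None))    # clip guards against tiny negative values
    np.fill_diagonal(D, 0.0)

    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(D)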
Prove that if the orthogonal complement of the row space of $A$ is $\{0\}$, then $Ax = 0$ has only the trivial solution.
I am trying to understand how to find the orthogonal complement of a subspace $M$ of a vector space $V$. From my understanding, $M^\perp$ is also a subspace of $V$ where all its vectors are perpendicular (orthogonal) to the columns of $M$, which would mean that the dot product of those vectors with each column of $M$ is $0$. However, how do I find the $M^\perp$ subspace, when $M = \{ \vec{x} \in \mathbb{R}: 2x_2 + 3x_1 = 0\}$? In general, if there's a general approach, how does one attempt to find $M^\perp$ of an arbitrary $M$? From my understanding, in this case, $M$ is the set of vectors in a plane that satisfy that equation, where $x_1$ and $x_2$ should respectively be the first and second components of the vector $\vec{x}$.
Let random vector $x = (x_1,...,x_n)$ follow multivariate normal distribution with mean $m$ and covariance matrix $S$. If $S$ is symmetric and positive definite (which is the usual case) then one can generate random samples from $x$ by first sampling indepently $r_1,...,r_n$ from standard normal and then using formula $m + Lr$, where $L$ is the Cholesky lower factor so that $S=LL^T$ and $r = (r_1,...,r_n)^T$. What about if one wants samples from singular Gaussian i.e. $S$ is still symmetric but not more positive definite (only positive semi-definite). We can assume also that the variances (diagonal elements of $S$) are strictly positive. Then some elements of $x$ must have linear relationship and the distribution actually lies in lower dimensional space with dimension $<n$, right? It is obvious that if e.g. $n=2, m = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, S = \begin{bmatrix} 1 & 1 \\ 1 & 1\end{bmatrix}$ then one can generate $x_1 \sim N(0,1)$ and set $x_2=x_1$ since they are fully correlated. However, is there any good methods for generating samples for general case $n>2$? I guess one needs first to be able identify the lower dimensional subspace, then move to that space where one will have valid covariance matrix, then sample from it and finally deduce the values for the linearly dependent variables from this lower-dimensional sample. But what is the best way to that in practice? Can someone point me to books or articles that deal with the topic; I could not find one.
eng_Latn
1,044
I want to know, in a PCA analysis or FAMD, which parameter (coefficient estimates, cos2, contribution, ...) the lengths of the arrows in the correlation circle plot (which can be plotted by the code below) are equal to, while their coordinates represent their loadings? fviz_pca_var(res.pca) res.pca is the result of fitting a PCA analysis on our data. Any help would be greatly appreciated.
I am looking to implement a biplot for principal component analysis (PCA) in JavaScript. My question is, how do I determine the coordinates of the arrows from the $U,V,D$ output of the singular value decomposition (SVD) of the data matrix? Here is an example biplot produced by R: biplot(prcomp(iris[,1:4])) I tried looking it up in the but it's not very useful. Or correct. Not sure which.
Anyone on the Android Jelly Bean OS and not able to buy Apps/Books through wallet? I have just bought the Samsung GalaxyNote 800 and am facing this issue. The error is (RPC:S-7:AEC-0). What is the solution?
eng_Latn
1,045
I will confine my question to the simple case of constructing a linear fit of one independent variable $X$ and one dependent variable $Y$, with no intercept term. The sample predictions are $\hat y_i = \beta x_i$. Define the $p$-norm as $$L^p = \left(\sum_{i=1}^n \vert y_i - \hat y_i \vert^p \right)^\frac{1}{p}$$ And define the $L^p$-regression of a sample $\lbrace x_i, y_i\rbrace_{i=1}^n$ as $\beta$ such that $L^p$ is minimized. I know how to do a least-squares ($L^2$) regression by solving the normal equations. I have also seen a linear programming procedure that does $L^1$ regression (minimizes the mean absolute error). And in the general case, you can define a reasonable search space and use numeric methods to find $\beta$ for arbitrary $p$. I am wondering about the $p=\infty$ case. It can be shown that $$L^\infty = \max \lbrace \vert y_i - \hat y_i \vert \rbrace$$ so an $L^\infty$ regression would amount to finding the value of $\beta$ that minimizes the maximum error. Can this be solved explicitly, or do I need to use numeric methods?
Is there any software package to solve linear regression with the objective of minimizing the L-infinity norm?
I was doing the problem $$ A+B=AB\implies AB=BA. $$ $AB=BA$ means they're invertible, but I can't figure out how to show that $A+B=AB$ implies invertibility.
eng_Latn
1,046
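The $L^\infty$ (minimax) fit can indeed be solved exactly as a linear program: minimize $t$ subject to $|y_i - \beta x_i| \le t$. A rough scipy sketch for the one-slope, no-intercept case described above (synthetic data):

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    x = rng.uniform(1, 10, size=50)
    y = 2.5 * x + rng.normal(scale=0.5, size=50)

    # Decision variables [beta, t]; minimize t subject to
    #   y_i - beta*x_i <= t   and   beta*x_i - y_i <= t.
    c = np.array([0.0, 1.0])
    A_ub = np.vstack([np.column_stack([-x, -np.ones_like(x)]),   # y_i - beta*x_i <= t
                      np.column_stack([x, -np.ones_like(x)])])   # beta*x_i - y_i <= t
    b_ub = np.concatenate([-y, y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None), (0, None)])
    print(res.x[0], res.x[1])               # slope and the minimized maximum absolute error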
Mathematics behind Activation functions: In machine learning we use activation functions to give non-linearity to the output of a neuron. But what is the exact non-linearity in this context? How does it differ between different activation functions (e.g. sigmoid, relu, ...)?
Comprehensive list of activation functions in neural networks with pros/cons Are there any reference document(s) that give a comprehensive list of activation functions in neural networks along with their pros/cons (and ideally some pointers to publications where they were successful or not so successful)?
What's the relationship between initial eigenvalues and sums of squared loadings in factor analysis? On the one hand I read in a comment that: You can't speak of "eigenvalues" after rotation, even orthogonal rotation. Perhaps you mean sum of squared loadings for a principal component, after rotation. When rotation is oblique, this sum of squares tells nothing about the amount of variance explained, because components aren't orthogonal anymore. So, you shouldn't report any percentage of variance explained. On the other hand, I sometimes read in books people saying things like: The eigenvalues associated with each factor represent the variance explained by that particular factor; SPSS also displays the eigenvalue in terms of the percentage of variance explained (so factor 1 explains 31.696% of total variance). The first few factors explain relatively large amounts of variance (especially factor 1), whereas subsequent factors explain only small amounts of variance. SPSS then extracts all factors with eigenvalues greater than 1, which leaves us with four factors. The eigenvalues associated with these factors are again displayed (and the percentage of variance explained) in the columns labelled Extraction Sums of Squared Loadings. That text is from Field (2013) Discovering statistics using IBM SPSS, and this diagram accompanies it. I'm wondering Who is correct about whether it's possible to speak of eigenvalues after rotation? Would it matter if it was an oblique or orthogonal rotation? Why are the "initial eigenvalues" different from the "extraction sums of squared loadings"? Which is a better measure of total variation explained by the factors (or principal components or whatever method is used)? Should I say that the first four factors explain 50.317% of variation, or 40.477%?
eng_Latn
1,047
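A small numpy sketch of the point raised above: common activation functions are simply non-affine maps, and without one, stacked layers collapse to a single affine map (names and shapes are illustrative):

    import numpy as np

    def sigmoid(z):                   # smooth squashing to (0, 1)
        return 1.0 / (1.0 + np.exp(-z))

    def relu(z):                      # piecewise linear; the kink at 0 is the non-linearity
        return np.maximum(0.0, z)

    # Without an activation, stacked layers collapse into a single affine map:
    rng = np.random.default_rng(0)
    W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))
    b1, b2 = rng.normal(size=4), rng.normal(size=2)
    x = rng.normal(size=3)
    print(np.allclose(W2 @ (W1 @ x + b1) + b2, (W2 @ W1) @ x + (W2 @ b1 + b2)))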
PCA loading with weights on samples: I want to run PCA on a set of data, but I'd like to weigh each row of the input matrix (i.e. each data point) based on how recent it is. In other words, in my calculations of the PCs, I'd like more recent data points to be more important. How can I achieve this?
Weighted principal components analysis After some searching, I find very little on the incorporation of observation weights/measurement errors into principal components analysis. What I do find tends to rely on iterative approaches to include weightings (e.g., ). My question is why is this approach necessary? Why can't we use the eigenvectors of the weighted covariance matrix?
One-hot vs dummy encoding in Scikit-learn There are two different ways to encoding categorical variables. Say, one categorical variable has n values. converts it into n variables, while converts it into n-1 variables. If we have k categorical variables, each of which has n values. One hot encoding ends up with kn variables, while dummy encoding ends up with kn-k variables. I hear that for one-hot encoding, intercept can lead to collinearity problem, which makes the model not sound. Someone call it "". My questions: Scikit-learn's linear regression model allows users to disable intercept. So for one-hot encoding, should I always set fit_intercept=False? For dummy encoding, fit_intercept should always be set to True? I do not see any "warning" on the website. Since one-hot encoding generates more variables, does it have more degree of freedom than dummy encoding?
eng_Latn
1,048
What is the meaning of higher order derivatives like d²y/dx²? I know velocity and acceleration are higher order derivatives of the position vector. Any other examples? What is the physical significance of higher order derivatives?
Names of higher-order derivatives: Specific derivatives have specific names. First order is often called tangency/velocity, second order is curvature/acceleration. I've also come across words like Jerk, Yank, Jounce, Jolt, Surge and Lurch for 3rd and 4th order derivatives. Is there a widely agreed list of names for these? How many orders have specific names? In this case, I'm dealing with NURBs curves, so the "tangency" and "curvature" related words are to be preferred over "velocity" and "acceleration" words.
How can top principal components retain the predictive power on a dependent variable (or even lead to better predictions)? Suppose I am running a regression $Y \sim X$. Why by selecting top $k$ principle components of $X$, does the model retain its predictive power on $Y$? I understand that from dimensionality-reduction/feature-selection point of view, if $v_1, v_2, ... v_k$ are the eigenvectors of covariance matrix of $X$ with top $k$ eigenvalues, then $Xv_1, Xv_2 ... Xv_k$ are top $k$ principal components with maximum variances. We can thereby reduce the number of features to $k$ and retain most of the predictive power, as I understand it. But why do top $k$ components retain the predictive power on $Y$? If we talk about a general OLS $Y \sim Z$, there is no reason to suggest that if feature $Z_i$ has maximum variance, then $Z_i$ has the most predictive power on $Y$. Update after seeing comments: I guess I have seen tons of examples of using PCA for dimensionality reduction. I have been assuming that means the dimensions we are left with have the most predictive power. Otherwise what's the point of dimensionality reduction?
eng_Latn
1,049
Usage Scenarios for Auto Encoders as alternatives to PCA: Is there ever a practical situation where one would use a linear autoencoder over PCA?
What're the differences between PCA and autoencoder? Both PCA and autoencoder can do demension reduction, so what are the difference between them? In what situation I should use one over another?
Does a correlation matrix of two variables always have the same eigenvectors? I perform Principal Component Analysis using two variables that are standardized. This is done by applying a SVD on the correlation matrix of the concerned variates. However, the SVD gives me the same eigenvector (weights) irrespective of what the two variables are. It's always [.70710678, .70710678]. I find this strange. Of course, the eigenvalues differ. My question is: How to interpret this? PS. I wanted to conduct a total least squares regression on two variables. My statistical programme does not provide TLS, but TLS luckily equals Principal Component Analysis, as far as I know. Hence my question. The question is not about TLS directly, but why I get the same eigenvectors irrespective of which variables I use (as long as they are exactly 2).
eng_Latn
1,050
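A short check of the eigenvector observation in the preceding record, using only elementary linear algebra: with exactly two standardized variables, the correlation matrix always has the same two eigendirections, whatever the correlation $r$ is. $$C=\begin{pmatrix}1 & r\\ r & 1\end{pmatrix},\qquad C\begin{pmatrix}1\\1\end{pmatrix}=(1+r)\begin{pmatrix}1\\1\end{pmatrix},\qquad C\begin{pmatrix}1\\-1\end{pmatrix}=(1-r)\begin{pmatrix}1\\-1\end{pmatrix},$$ so the normalized eigenvectors are always $(1,1)/\sqrt{2}\approx(0.70710678,\,0.70710678)$ and $(1,-1)/\sqrt{2}$; only the eigenvalues $1\pm r$ change with the data.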
Reversing SVD back to the original variables I have a data matrix $M$ that has $n$ samples (rows) described by $m$ variables (columns) $X_1,X_2,\ldots X_m$. I do a SVD to reduce the $m$ dimensions to just 3 dimensions. I understand that the $x,y,z$ coordinates (i.e., the SVD values) are calculated from the eigenvectors of $MM^T$. My question is, if I pick an arbitrary point in the SVD space (i.e. a value for SVD1, SVD2, SVD3, not necessarily in the data), is there a convenient way to translate that back to a set of the original variables (i.e., $X_1, X_2, \ldots X_m$)?
How to reverse PCA and reconstruct original variables from several principal components? Principal component analysis (PCA) can be used for dimensionality reduction. After such dimensionality reduction is performed, how can one approximately reconstruct the original variables/features from a small number of principal components? Alternatively, how can one remove or discard several principal components from the data? In other words, how to reverse PCA? Given that PCA is closely related to singular value decomposition (SVD), the same question can be asked as follows: how to reverse SVD?
How to find a 4D vector perpendicular to 3 other 4D vectors? In 3 dimensions it is possible to find a vector c (one of infinitely many) perpendicular to two vectors a and b using the cross product. Is there any way of extending this to 4 dimensions, i.e. given three vectors a, b, and c finding a vector d perpendicular to all of them? I realize that this can be done by solving the equation system: dot(a, d) = 0 dot(b, d) = 0 dot(c, d) = 0 and imposing an additional constraint, for instance setting the length of d to 1 or one of its components to 1, but is there a way of doing this that does not involve solving an equation system?
eng_Latn
1,051
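A minimal sketch of the reconstruction asked about in the preceding record, assuming numpy; the sizes, seed, and the choice of 3 retained components are illustrative. An arbitrary point in the reduced space is mapped back through the top right singular vectors, which is the standard linear (approximate) reconstruction discussed in the paired question above.

```python
# Map a point from a 3-dimensional SVD/PCA space back to the original m variables.
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(100, 8))        # n samples x m variables (toy data)
mean = M.mean(axis=0)
U, s, Vt = np.linalg.svd(M - mean, full_matrices=False)

k = 3
scores = (M - mean) @ Vt[:k].T       # coordinates of the data in the k-dim space

point = np.array([1.5, -0.2, 0.7])   # arbitrary point (SVD1, SVD2, SVD3)
x_original = mean + point @ Vt[:k]   # back in terms of the m original variables
print(x_original.shape)              # (8,)
```

Applying the same mapping to `scores` row by row gives the rank-k approximation of the original data matrix.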
Why is the magnitude of the gradient equal to the maximum rate of change at that point? I understand the concept of the gradient being a vector of the partials of f with respect to each variable, so essentially the gradient gives you a direction in the input field to travel in order to get the maximum increase in the function f. What I don't understand is why the magnitude of that gradient is the actual maximum rate of change of f - it feels right that it should be but I can't quite join the dots and see a proper reason for it.
Gradient and Swiftest Ascent I want to understand intuitively why it is that the gradient gives the direction of steepest ascent. (I will consider the case of $f:\mathbb{R}^2\to\mathbb{R}$) The standard proof is to note that the directional derivative is $$D_vf=v\cdot \nabla f=|\nabla f|\,\cos\theta$$ which is maximized at $\theta=0$. This is a good verification, but it doesn't really help me understand the result.
How can top principal components retain the predictive power on a dependent variable (or even lead to better predictions)? Suppose I am running a regression $Y \sim X$. Why, by selecting the top $k$ principal components of $X$, does the model retain its predictive power on $Y$? I understand that from a dimensionality-reduction/feature-selection point of view, if $v_1, v_2, ... v_k$ are the eigenvectors of the covariance matrix of $X$ with the top $k$ eigenvalues, then $Xv_1, Xv_2 ... Xv_k$ are the top $k$ principal components with maximum variances. We can thereby reduce the number of features to $k$ and retain most of the predictive power, as I understand it. But why do the top $k$ components retain the predictive power on $Y$? If we talk about a general OLS $Y \sim Z$, there is no reason to suggest that if feature $Z_i$ has maximum variance, then $Z_i$ has the most predictive power on $Y$. Update after seeing comments: I guess I have seen tons of examples of using PCA for dimensionality reduction. I have been assuming that means the dimensions we are left with have the most predictive power. Otherwise what's the point of dimensionality reduction?
eng_Latn
1,052
Which independent variables are most important in predicting the response variable? I'm a biologist, and I have a large dataset that I'm trying to analyze. Here are the variables I'm working with: levels of 211 different metabolites in 16 different blood samples (predictor variables) how well each of the 16 blood samples performed in a specific test (response variable) I am trying to figure out which variable(s) are the most important in predicting the performance of a blood sample in the test. I would like to make these data more manageable by doing a PCA, but I'm new to this sort of analysis. I understand that a PCA will create groups of principal components (it will group metabolites that covary with each other and label each of these groups a principal component), but I'm not sure how to take into account the response variable in this analysis. Any help would be much appreciated!!! Thank you. > summary(metabolites_princomp) Importance of components: Comp.1 Comp.2 Comp.3 Comp.4 Standard deviation 3.9608225 0.40128486 0.259868774 0.215004349 Proportion of Variance 0.9805072 0.01006435 0.004220736 0.002889179 Cumulative Proportion 0.9805072 0.99057153 0.994792267 0.997681446 [...]
Detecting significant predictors out of many independent variables In a dataset of two non-overlapping populations (patients & healthy, total $n=60$) I would like to find (out of $300$ independent variables) significant predictors for a continuous dependent variable. Correlation between predictors is present. I am interested in finding out if any of the predictors are related to the dependent variable "in reality" (rather than predicting the dependent variable as exactly as possible). As I got overwhelmed with the numerous possible approaches, I would like to ask which approach is most recommended. From my understanding, stepwise inclusion or exclusion of predictors is discouraged. E.g. run a linear regression separately for every predictor and correct p-values for multiple comparisons using FDR (probably very conservative?) Principal-component regression: difficult to interpret as I won't be able to tell about the predictive power of individual predictors but only about the components. Any other suggestions?
How to reverse PCA and reconstruct original variables from several principal components? Principal component analysis (PCA) can be used for dimensionality reduction. After such dimensionality reduction is performed, how can one approximately reconstruct the original variables/features from a small number of principal components? Alternatively, how can one remove or discard several principal components from the data? In other words, how to reverse PCA? Given that PCA is closely related to singular value decomposition (SVD), the same question can be asked as follows: how to reverse SVD?
eng_Latn
1,053
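One standard way to bring the response into the analysis sketched in the preceding record is principal component regression, i.e. regressing the test performance on a few PC scores. A hedged sketch with scikit-learn follows; the arrays are placeholders for the metabolite data, the choice of 5 components is arbitrary, and with only 16 samples the leave-one-out loop is doing the real work of guarding against overfitting.

```python
# Principal component regression: compress 211 metabolites into a few PCs,
# then regress the blood-test performance on those PC scores.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score, LeaveOneOut

rng = np.random.default_rng(1)
X = rng.normal(size=(16, 211))   # 16 blood samples x 211 metabolites (placeholder)
y = rng.normal(size=16)          # test performance (placeholder)

pcr = make_pipeline(StandardScaler(), PCA(n_components=5), LinearRegression())
mse = -cross_val_score(pcr, X, y, cv=LeaveOneOut(),
                       scoring="neg_mean_squared_error")
print(mse.mean())
```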
When using regularization wouldn't it make all parameters very small? In regularization, we add the squares of the thetas multiplied by lambda (excluding theta_0). The value of lambda is high because values of theta should be close to zero to neglect the value of the associated feature. Now my question is: when we apply gradient descent to set the values of theta that will result in the best fit to the data, wouldn't it also reduce the values of all the thetas (which we don't want to reduce)? Because we are adding every value of theta at the end of the cost function, this would result in a very low-slope linear fit, causing underfitting.
Why does shrinkage work? In order to solve problems of model selection, a number of methods (LASSO, ridge regression, etc.) will shrink the coefficients of predictor variables towards zero. I am looking for an intuitive explanation of why this improves predictive ability. If the true effect of the variable was actually very large, why doesn't shrinking the parameter result in a worse prediction?
What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? I've read a lot about PCA, including various tutorials and questions (such as , , , and ). The geometric problem that PCA is trying to optimize is clear to me: PCA tries to find the first principal component by minimizing the reconstruction (projection) error, which simultaneously maximizes the variance of the projected data. When I first read that, I immediately thought of something like linear regression; maybe you can solve it using gradient descent if needed. However, then my mind was blown when I read that the optimization problem is solved by using linear algebra and finding eigenvectors and eigenvalues. I simply do not understand how this use of linear algebra comes into play. So my question is: How can PCA turn from a geometric optimization problem to a linear algebra problem? Can someone provide an intuitive explanation? I am not looking for an answer like that says "When you solve the mathematical problem of PCA, it ends up being equivalent to finding the eigenvalues and eigenvectors of the covariance matrix." Please explain why eigenvectors come out to be the principal components and why the eigenvalues come out to be variance of the data projected onto them I am a software engineer and not a mathematician, by the way. Note: the figure above was taken and modified from .
eng_Latn
1,054
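For the gradient-descent worry in the record above, writing the regularized update explicitly helps (a standard derivation, shown for the common cost $J(\theta)=\frac{1}{2m}\big[\sum_i (h_\theta(x^{(i)})-y^{(i)})^2 + \lambda\sum_{j\ge1}\theta_j^2\big]$, which the record seems to assume): every penalized parameter is multiplied by a factor slightly below one at each step, but the data-fit gradient still pushes genuinely useful parameters away from zero, so they settle where the two forces balance rather than at zero. $$\theta_j \leftarrow \theta_j\Big(1-\alpha\frac{\lambda}{m}\Big) - \alpha\,\frac{1}{m}\sum_{i=1}^m\big(h_\theta(x^{(i)})-y^{(i)}\big)x_j^{(i)} \quad (j\ge 1), \qquad \theta_0 \leftarrow \theta_0 - \alpha\,\frac{1}{m}\sum_{i=1}^m\big(h_\theta(x^{(i)})-y^{(i)}\big).$$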
Unstable projection in LDA space in $n&lt;p$ situation I'm trying to classify (with LDA) a few samples (n=12) in a high dimensional feature space (p=24) into 3 classes. First I reduced the dimension of my initial dataset with a PCA, keeping only the first two eigenvectors. Update: turns out, I was actually using all 11 PCs for the LDA. Then I had a look at the projection of my n x 11 dataset in the LDA space (1st vs 2nd eigenvector) and I obtained the following: I was quite happy because the LDA found a strong separation between the 3 classes. So I tried a leave-one-out cross validation to evaluate the LDA. I trained the classifier with 11 samples and tested it with the last one, and looped around. The problem is the classifier performs at chance level (30% success rate). I noticed that the LDA space changes drastically between each iteration, depending on the 11 samples used to compute it. Moreover, when I project the tested sample into the corresponding LDA space, it falls quite far away from what should be its group, explaining the poor success rate. My questions are: is it normal that such a (visually) nice separation between classes leads to such a poor classification? Is it due to the small number of samples? Is there anything I can do to improve the situation?
Does it make sense to combine PCA and LDA? Assume I have a dataset for a supervised statistical classification task, e.g., via a Bayes' classifier. This dataset consists of 20 features and I want to boil it down to 2 features via dimensionality reduction techniques such as Principal Component Analysis (PCA) and/or Linear Discriminant Analysis (LDA). Both techniques are projecting the data onto a smaller feature subspace: with PCA, I would find the directions (components) that maximize the variance in the dataset (without considering the class labels), and with LDA I would have the components that maximize the between-class separation. Now, I am wondering if, how, and why these techniques can be combined and if it makes sense. For example: (1) transforming the dataset via PCA and projecting it onto a new 2D subspace; (2) transforming the (already PCA-transformed) dataset via LDA for max. in-class separation; or (3) skipping the PCA step and using the top 2 components from an LDA; or any other combination that makes sense.
Fit mixture of distributions to your time-series data in R I have time-series data containing 1440 observations and the plot of the data is I want to fit a Gaussian Mixture Model (GMM) to the above plot, and for this I am using the Mclust function of the mclust package. Finally, I want a fit somewhat like this: On using the Mclust function, I get the following statistics: mclus_data <- Mclust(givendataseries) > summary(mclus_data) ---------------------------------------------------- Gaussian finite mixture model fitted by EM algorithm ---------------------------------------------------- Mclust E (univariate, equal variance) model with 8 components: log.likelihood n df BIC ICL 9525.438 1440 16 18934.52 18183.67 Clustering table: 1 2 3 4 5 6 7 8 1262 0 0 0 0 13 114 51 In the above statistics, I cannot understand the following: The significance of log.likelihood, BIC and ICL. I can understand what each of them is, but what do their magnitudes/values refer to? It shows there are 8 clusters, but why do clusters no. 2, 3, 4, 5 have 0 values? What does this mean? From the plot it is clear that there must be two Gaussians, but why does the Mclust function show there are 8 Gaussians? Update: Actually, I want to do model-based clustering of time series data. But currently I want to fit the distribution to my raw data, as shown in Figure 1 on page no. 3 of the paper. For your quick reference, the mentioned figure in said paper is
eng_Latn
1,055
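A sketch of how the cross-validation described in the preceding record is usually wired up so that both the PCA and the LDA are refit inside every leave-one-out split; scikit-learn is assumed, the arrays are placeholders for the real n = 12, p = 24 data, and the number of retained components is illustrative (it should itself stay small relative to n).

```python
# Leave-one-out CV with PCA followed by LDA fit on each training fold only,
# so the reported accuracy is not inflated by projections learned on all data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score, LeaveOneOut

rng = np.random.default_rng(2)
X = rng.normal(size=(12, 24))    # 12 samples x 24 features (placeholder)
y = np.repeat([0, 1, 2], 4)      # 3 classes

clf = make_pipeline(StandardScaler(), PCA(n_components=2),
                    LinearDiscriminantAnalysis())
acc = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(acc.mean())
```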
Can you sell part of a pack to buy another pack in AL? Adventurer's League player guide: Selling Equipment. You can sell any mundane equipment that your character possesses using the normal rules in the PHB. Purchasing Equipment. You can purchase any equipment found in the PHB with your starting gold. Starting character is a wizard, who takes scholar's pack. Scholar's pack contains a "book of lore" (worth 25 gold), which I assume serves no real use in AL? Book sells for 12.5, buy an explorer's pack for 10, sell the extra backpack for 1. You now have an explorer's pack plus some free ink and paper, plus 3.5 gp. I honestly can't see any reason to take a scholar's pack and not also buy an explorer's pack, you're going to at the very least need rations, so can you just do it this way if your background doesn't give you 10 gold?
Can I sell starting gear in Adventurers League play? I'm starting a new AL game (as a player) and I have a question that's not addressed in the AL material. According to Adventurers League Player's Guide (page 4) When you create your D&D Adventurers League character for the current season, take starting equipment as determined by your class and background. You cannot roll for your starting wealth. Meaning I can't start with whatever I want. However, I am playing a ranged-focused fighter, so the second gear option is of no use to me: (a) a martial weapon and a shield or (b) two martial weapons Can I choose any two martial weapons and sell them for half price to use for purchasing other gear before play? And is there a restriction on how much gold I can get? For example, two hand crossbows (martial ranged weapons) can net me 75 gp when sold, or two greatswords nets me 50 gp when sold.
What's the relationship between initial eigenvalues and sums of squared loadings in factor analysis? On the one hand I read in a comment that: You can't speak of "eigenvalues" after rotation, even orthogonal rotation. Perhaps you mean sum of squared loadings for a principal component, after rotation. When rotation is oblique, this sum of squares tells nothing about the amount of variance explained, because components aren't orthogonal anymore. So, you shouldn't report any percentage of variance explained. On the other hand, I sometimes read in books people saying things like: The eigenvalues associated with each factor represent the variance explained by that particular factor; SPSS also displays the eigenvalue in terms of the percentage of variance explained (so factor 1 explains 31.696% of total variance). The first few factors explain relatively large amounts of variance (especially factor 1), whereas subsequent factors explain only small amounts of variance. SPSS then extracts all factors with eigenvalues greater than 1, which leaves us with four factors. The eigenvalues associated with these factors are again displayed (and the percentage of variance explained) in the columns labelled Extraction Sums of Squared Loadings. That text is from Field (2013) Discovering statistics using IBM SPSS, and this diagram accompanies it. I'm wondering Who is correct about whether it's possible to speak of eigenvalues after rotation? Would it matter if it was an oblique or orthogonal rotation? Why are the "initial eigenvalues" different from the "extraction sums of squared loadings"? Which is a better measure of total variation explained by the factors (or principal components or whatever method is used)? Should I say that the first four factors explain 50.317% of variation, or 40.477%?
eng_Latn
1,056
How is linear momentum conserved after collision while part of linear kinetic energy contributes to angular kinetic energy Referring to the famous example of a horizontally moving sticky ball that collides (and sticks) at the tip of a vertically floating rod, then the combination moves along the ball's incident course while rotating. For the above case, we assume that the final linear momentum is exactly the same as the ball's linear momentum, and even if the ball hits the rod at its center of mass we still apply the same conservation of linear momentum principle. However, if we look at things from linear KE's perspective, then the first case implies a split of the ball's initial KE over the linear and rotational KEs of the final combination, which means less final linear KE than the initial ball's KE, and the second case implies that the combination will have the same linear KE as the ball initially had (since no rotation will occur). My question is: How would the final combination in both cases have different linear KEs while having the same linear momentum (due to linear momentum conservation)?
How can momentum but not energy be conserved in an inelastic collision? In inelastic collisions, kinetic energy changes, so the velocities of the objects also change. So how is momentum conserved in inelastic collisions?
Principal component analysis "backwards": how much variance of the data is explained by a given linear combination of the variables? I have carried out a principal components analysis of six variables $A$, $B$, $C$, $D$, $E$ and $F$. If I understand correctly, unrotated PC1 tells me what linear combination of these variables describes/explains the most variance in the data and PC2 tells me what linear combination of these variables describes the next most variance in the data and so on. I'm just curious -- is there any way of doing this "backwards"? Let's say I choose some linear combination of these variables -- e.g. $A+2B+5C$, could I work out how much variance in the data this describes?
eng_Latn
1,057
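A worked version of the sticky-ball comparison in the record above (standard rigid-body mechanics, with ball mass $m$, speed $v$, rod mass $M$): linear momentum fixes the centre-of-mass velocity, and hence the translational KE, identically in both impact cases; the extra rotational KE in the tip-hit case is paid for by a smaller inelastic loss, not by a smaller translational KE. $$v_{\rm cm}=\frac{mv}{m+M}\ \text{(both cases)},\qquad KE_{\rm lin}=\tfrac12(m+M)v_{\rm cm}^2=\frac{m^2v^2}{2(m+M)}\ \text{(both cases)},$$ $$\tfrac12 mv^2 = KE_{\rm lin}+KE_{\rm rot}+Q,\qquad KE_{\rm rot}=0\ \text{for the centre hit, so}\ Q_{\rm centre}>Q_{\rm tip},$$ where $Q$ is the energy dissipated in the perfectly inelastic impact.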
Why is "condensed nearest neighbour" Parametric? Definition of "condensed nearest neighbour", at training time it chooses the c "best" training examples (where c is a hyper-parameter), and at test time uses the usual KNN prediction but based only on these c training examples. So far, I looked-up many references and websites and researched on how to determine if a method is between parametric or non-parametric. I came up with below definitions, A parametric algorithm has a fixed number of parameters. In contrast, a non-parametric algorithm uses a flexible number of parameters, and the number of parameters often grows as it learns from more data. From . Moreover, I found, A parametric model, we have a finite number of parameters, and in nonparametric models, the number of parameters is (potentially) infinite. From . My question is, why do we count it as a Parametric model?
Parametric vs non-parametric machine learning methods I looked up many references and websites and researched how to determine whether a method is parametric or non-parametric. I came up with the definitions below: A parametric algorithm has a fixed number of parameters. In contrast, a non-parametric algorithm uses a flexible number of parameters, and the number of parameters often grows as it learns from more data. From . Moreover, I found: In a parametric model, we have a finite number of parameters, and in nonparametric models, the number of parameters is (potentially) infinite. From . And many other methods of determination, though the problem is none of them can help one determine if a certain hypothetical method is parametric or not. (For instance, why is k-means' number of parameters constant but KNN's variable, or basically what do we call a parameter and what do we not?)
How exactly to compute the ridge regression penalty parameter given the constraint? The accepted answer in does a great job of showing that there is a one-to-one correspondence between $c$ and $\lambda$ in the two formulations of the ridge regression: $$ \underset{\beta}{min}(y-X\beta)^T(y-X\beta) + \lambda\beta^T\beta $$ and $$ \underset{\beta}{min}(y-X\beta)^T(y-X\beta) \text{ s.t. }\beta^T\beta\leq{c} $$ The linked answer shows this in the orthogonal case. In a general (non-orthogonal case), how can I compute $\lambda$ from $c$? Update Here is an answer for going from $\lambda$ to $c$: Assuming that the coefs are constrained by the penalty, $$ \beta^T\beta = c $$ and $$ \beta = (X^TX + \lambda I)^{-1}X^Ty\\ \beta^T\beta = c = \beta^T(X^TX + \lambda I)^{-1}X^Ty $$ Still working on going the other way
eng_Latn
1,058
Is PostBQP experimentally relevant? Far from my expertise, but sheer curiosity. I've read that ("a complexity class consisting of all of the computational problems solvable in polynomial time on a quantum Turing machine with postselection and bounded error") is very powerful. Still, I don't understand the practical sense of assuming you can decide the value an output qubit takes. My question: Have post-selection quantum computing experiments been implemented (or is it possible that they will be implemented)? (And, if the answer is yes: how does post-selection take place in a way that practically enhances your computing power?)
What is postselection in quantum computing? A quantum computer can efficiently solve problems lying in the complexity class . I have seen a claim the one can (potentially, because we don't know whether BQP is a proper subset or equal to PP) increase the efficiency of a quantum computer by applying postselection and that the class of efficiently solvable problems becomes now postBQP = . What does postselection mean here?
How can top principal components retain the predictive power on a dependent variable (or even lead to better predictions)? Suppose I am running a regression $Y \sim X$. Why, by selecting the top $k$ principal components of $X$, does the model retain its predictive power on $Y$? I understand that from a dimensionality-reduction/feature-selection point of view, if $v_1, v_2, ... v_k$ are the eigenvectors of the covariance matrix of $X$ with the top $k$ eigenvalues, then $Xv_1, Xv_2 ... Xv_k$ are the top $k$ principal components with maximum variances. We can thereby reduce the number of features to $k$ and retain most of the predictive power, as I understand it. But why do the top $k$ components retain the predictive power on $Y$? If we talk about a general OLS $Y \sim Z$, there is no reason to suggest that if feature $Z_i$ has maximum variance, then $Z_i$ has the most predictive power on $Y$. Update after seeing comments: I guess I have seen tons of examples of using PCA for dimensionality reduction. I have been assuming that means the dimensions we are left with have the most predictive power. Otherwise what's the point of dimensionality reduction?
eng_Latn
1,059
Prove spectral radius of a primitive matrix is 1 Let $P \in M (n \times n, \mathbb{R})$ be a primitive matrix. $1$ is an eigenvalue of $P$ and $(1,\dots,1)$ is the associated right eigenvector. How can I show that the spectral radius $\rho(P):=\max\{|\lambda| : \lambda \text{ is an eigenvalue of } P\}$ of $P$ is 1? Hint: Use Gelfand's formula.
Proof that the largest eigenvalue of a stochastic matrix is $1$ The largest eigenvalue of a stochastic matrix (i.e. a matrix whose entries are positive and whose rows add up to $1$) is $1$. Wikipedia marks this as a special case of the Perron–Frobenius theorem, but I wonder if there is a simpler (more direct) way to demonstrate this result.
What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? I've read a lot about PCA, including various tutorials and questions (such as , , , and ). The geometric problem that PCA is trying to optimize is clear to me: PCA tries to find the first principal component by minimizing the reconstruction (projection) error, which simultaneously maximizes the variance of the projected data. When I first read that, I immediately thought of something like linear regression; maybe you can solve it using gradient descent if needed. However, then my mind was blown when I read that the optimization problem is solved by using linear algebra and finding eigenvectors and eigenvalues. I simply do not understand how this use of linear algebra comes into play. So my question is: How can PCA turn from a geometric optimization problem to a linear algebra problem? Can someone provide an intuitive explanation? I am not looking for an answer like that says "When you solve the mathematical problem of PCA, it ends up being equivalent to finding the eigenvalues and eigenvectors of the covariance matrix." Please explain why eigenvectors come out to be the principal components and why the eigenvalues come out to be variance of the data projected onto them I am a software engineer and not a mathematician, by the way. Note: the figure above was taken and modified from .
eng_Latn
1,060
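A direct argument for the two records above that avoids both Gelfand's formula and the full Perron–Frobenius machinery, assuming (as the stated hypotheses imply) that $P$ has non-negative entries and unit row sums: pick any eigenvalue and look at the largest component of its eigenvector. If $Pv=\lambda v$ and $i$ is an index with $|v_i|=\max_j|v_j|>0$, then $$|\lambda|\,|v_i|=\Big|\sum_j p_{ij}v_j\Big|\le\sum_j p_{ij}|v_j|\le |v_i|\sum_j p_{ij}=|v_i|,$$ so $|\lambda|\le 1$ for every eigenvalue; since $P(1,\dots,1)^T=(1,\dots,1)^T$, the value $1$ is attained and $\rho(P)=1$.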
Geometric Proof for the shortest path cube sides problem A spider in one edge of a cube (length of all sides = $l$) wants to get to an insect on the other edge of the cube. Obviously the spider cannot fly and must walk on the sides of the cube to get to the insect. Find the shortest path possible. (See the image below.) I know the differential approach to solve this question (optimization problem) and the path is drawn on the above picture. But I want a geometric proof without using derivatives. Just using geometric theorems (like Pythagoras) and simple algebra.
How to find the shortest path between opposite vertices of a cube, traveling on its surface? I am stuck with the following problem that says: Let $A,B$ be the ends of the longest diagonal of the unit cube. The length of the shortest path from $A$ to $B$ along the surface is: 1.$\,\,\sqrt{3}\,\,$ 2.$\,\,1+\sqrt{2}\,\,$ 3.$\,\,\sqrt{5}\,\,$ 4.$\,\,3$ My Try: So, the length of the longest diagonal $AB=\sqrt{3}$. If I reach from $A$ to $B$ along the surface line $AC+CD+BD$, then it gives $3$ units. But the answer is given to be option 3. Can someone explain? Thanks in advance for your time.
What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? I've read a lot about PCA, including various tutorials and questions (such as , , , and ). The geometric problem that PCA is trying to optimize is clear to me: PCA tries to find the first principal component by minimizing the reconstruction (projection) error, which simultaneously maximizes the variance of the projected data. When I first read that, I immediately thought of something like linear regression; maybe you can solve it using gradient descent if needed. However, then my mind was blown when I read that the optimization problem is solved by using linear algebra and finding eigenvectors and eigenvalues. I simply do not understand how this use of linear algebra comes into play. So my question is: How can PCA turn from a geometric optimization problem to a linear algebra problem? Can someone provide an intuitive explanation? I am not looking for an answer like that says "When you solve the mathematical problem of PCA, it ends up being equivalent to finding the eigenvalues and eigenvectors of the covariance matrix." Please explain why eigenvectors come out to be the principal components and why the eigenvalues come out to be variance of the data projected onto them I am a software engineer and not a mathematician, by the way. Note: the figure above was taken and modified from .
eng_Latn
1,061
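A derivative-free answer to both cube records above: unfold the two faces the path crosses into a single plane, so the candidate path becomes a straight segment and only the Pythagorean theorem is needed. For side length $l$, the two unfolded faces form an $l \times 2l$ rectangle with the spider and the insect at opposite corners, so the shortest surface path has length $$\sqrt{l^2+(2l)^2}=\sqrt{5}\,l\approx 2.236\,l,$$ which beats the face-diagonal-plus-edge route $(1+\sqrt{2})\,l\approx 2.414\,l$ and the along-the-edges route $3l$; for the unit cube this is the value $\sqrt{5}$ in option 3.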
How to use v.kernel? I am studying the use of GRASS GIS within QGIS and I need a little help with understanding the v.kernel module. I didn't find any human help within the manual (very cryptic). I have a point data vector and I need to study the density of the point distribution in my mapset using a bandwidth of 1000 meters. How do I set the parameter <stddeviation>, or in other words ... what is this parameter related to? What is the correct use of this parameter in relation to the setting of the bandwidth? Can I simply set the parameter to a value of 1000?
How do you use GRASS's v.kernel? I am flummoxed on how to use GRASS's v.kernel. I have a vector layer of around 2.5 million points. I want to make a heat map using v.kernel to show concentrations, since I have variable instances with overlapping points, sometimes huge overlaps. I've already gotten this vector layer in GRASS, and it displays just fine. I've tried using GRASS's v.kernel command based on what I've seen here and on other forums, and I can't get it to do anything besides output a raster that's just a pink square. Here's the command I'm using: v.kernel --verbose input=master_grass7 output=master_grass7a_heatmap stddeviation=.0001 I've varied the stddeviation to all sorts of values from 1000000 to .000001, and it had no effect. I've read the documentation repeatedly and don't really understand what it's getting at. At least, the instructions are on esoteric concepts, nothing practical. I've also checked the source code, and I'm not really understanding it, either. Yes, I can read C. The problem is it depends on a lot of stuff defined elsewhere in GRASS GIS. I've also done a lot of Google searching, and I can't find a comprehensive guide. All that I'm getting are scattered copies of the v.kernel doc/man page or people who apparently got it to work without a fuss. I've also checked up on the concept of kernel density estimation (KDE), and even then I don't see how to use the v.kernel command. That command appears to be a specific interpretation of KDE; its switches don't appear to correspond well to generic KDE concepts. So back to the main question here: how can someone who is not intimate with GRASS product development use the v.kernel command? Is there a plain language translation available?
How exactly to compute the ridge regression penalty parameter given the constraint? The accepted answer in does a great job of showing that there is a one-to-one correspondence between $c$ and $\lambda$ in the two formulations of the ridge regression: $$ \underset{\beta}{min}(y-X\beta)^T(y-X\beta) + \lambda\beta^T\beta $$ and $$ \underset{\beta}{min}(y-X\beta)^T(y-X\beta) \text{ s.t. }\beta^T\beta\leq{c} $$ The linked answer shows this in the orthogonal case. In a general (non-orthogonal case), how can I compute $\lambda$ from $c$? Update Here is an answer for going from $\lambda$ to $c$: Assuming that the coefs are constrained by the penalty, $$ \beta^T\beta = c $$ and $$ \beta = (X^TX + \lambda I)^{-1}X^Ty\\ \beta^T\beta = c = \beta^T(X^TX + \lambda I)^{-1}X^Ty $$ Still working on going the other way
eng_Latn
1,062
Principal Component Analysis PCA Terms and relationships: eigenvalues, eigenvectors, loadings, score matrix, and SVD I've read many websites, blogs, and pdfs on this topic but struggle to put the picture together in simple math terms that explain how some of the terms relate to each other / are computed. Let's assume that we use Singular Value Decomposition (SVD) to solve the PCA problem with input matrix $X$. Assuming $X$ has zero mean and unit variance, the SVD of $X$ is $X = USV^{T}$. Now, $S$ is the vector of eigenvalues, sorted in decreasing order. Can someone please explain, in math terms, how the following terms are defined? e.g. Explained variance ratios = $\frac{S}{sum(S)}$ PCA Loadings == eigenvector of $X^{T}X$ == $V$? Score matrix $T$, $T == U * Diag(S) == XV$? Actual PCA components = eigenvectors of $X$? Eigenvectors, can you get them from the SVD results?
Relationship between SVD and PCA. How to use SVD to perform PCA? Principal component analysis (PCA) is usually explained via an eigen-decomposition of the covariance matrix. However, it can also be performed via singular value decomposition (SVD) of the data matrix $\mathbf X$. How does it work? What is the connection between these two approaches? What is the relationship between SVD and PCA? Or in other words, how to use SVD of the data matrix to perform dimensionality reduction?
Question on the correlation between two dependent variables I'm working on this question and it's stumping me. Let $S_n = X_1 + \ldots + X_n$ (with $n \ge 1$) be a random walk where $X_1, \ldots, X_n$ are iid RV's. $$ E(X_k)=\mu,\,{\rm Var}(X_k)=\sigma^2. $$ Find the covariance of $S_n$ and $S_m$. Can anyone help out? I am trying to use the equation: $${\rm Cov}[S_n, S_m] = E[S_nS_m] - E[S_n]E[S_m]$$ but can't quite figure it out.
eng_Latn
1,063
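A numeric sanity check of the relationships listed in the preceding record, using numpy only; it assumes the columns of $X$ have been centered (and here also scaled, to match the correlation-matrix convention), and the comparison of loadings is done up to sign because eigenvector signs are arbitrary.

```python
# Verify: V from the SVD of the centered data equals the eigenvectors of X^T X,
# the scores are T = U * diag(S) = X V, and explained-variance ratios come from S^2.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 4))
X = (X - X.mean(axis=0)) / X.std(axis=0)        # centered and scaled data

U, S, Vt = np.linalg.svd(X, full_matrices=False)

evals, evecs = np.linalg.eigh(X.T @ X)          # ascending order
evals, evecs = evals[::-1], evecs[:, ::-1]      # flip to descending

print(np.allclose(S**2, evals))                  # singular values^2 == eigenvalues of X^T X
print(np.allclose(np.abs(Vt.T), np.abs(evecs)))  # loadings V match up to sign
print(np.allclose(U * S, X @ Vt.T))              # scores: U diag(S) == X V
print((S**2) / (S**2).sum())                     # explained variance ratios
```

Note that the $S$ returned by the SVD holds singular values rather than eigenvalues; squaring them (and dividing by $n-1$ if one wants sample variances) is what links the two.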
Using the 'U' Matrix of SVD as Feature Reduction This is a follow-up to the question asked regarding SVD and dimensionality reduction (). In that question I asked how to use SVD for dimensionality reduction. Although not stated, the ultimate goal here is to use the reduced feature set and input them into a classification or regression algorithm. I have learned that SVD is a technique used by prcomp in R, as the "v" matrix from a run of svd on a centered and scaled matrix is the same as the loadings (eigen vectors) from a PCA using the traditional eigen decomposition on a correlation matrix: data(iris) #these two match eigen(cor(iris[,-5])) #eigen vectors svd(scale(iris[,-5]))$v This has helped with my understanding of the connection between SVD and PCA. However, I have two additional questions: 1) Why do the following differ in signs for the first PC? Is this OK? svd(cor(iris[,-5]))$u svd(scale(iris[,-5]))$v 2) To match the output of prcomp, one can multiply the scaled/centered original data by the 'v' matrix from SVD: PCSCORE1&lt;-scale(iris[,-5]) %*% SVD2$v[,1:2] #PC scores from SVD PCSCORE1[1:10,] #PC scores from first 2 PC #matches this PCA&lt;-prcomp(iris[,-5], center = TRUE, scale =TRUE) PCA$x[1:10,1:2] but I have seen in multiple locations (e.g. ) and the rapidminer package (a machine learning tool written in JAVA) that just the 'u' matrix that results from running svd on the center/scaled input matrix X is used as the PC scores. What is the connection of u to Xv and if u can be used, why does prcomp compute Xv? Mechanically u is Xvdiag(1/d) so the eigen vectors related to the largest eigen values are scaled down - why is this used?
Relationship between SVD and PCA. How to use SVD to perform PCA? Principal component analysis (PCA) is usually explained via an eigen-decomposition of the covariance matrix. However, it can also be performed via singular value decomposition (SVD) of the data matrix $\mathbf X$. How does it work? What is the connection between these two approaches? What is the relationship between SVD and PCA? Or in other words, how to use SVD of the data matrix to perform dimensionality reduction?
The proof of shrinking coefficients using ridge regression through "spectral decomposition" I have understood how ridge regression shrinks coefficients towards zero geometrically. Moreover I know how to prove that in the special "Orthonormal Case," but I am confused how that works in the general case via "Spectral decomposition."
eng_Latn
1,064
PCA Why covariance matrix? In PCA, why do we find the eigenvalues of the covariance matrix and not the eigenvalues of the matrix $A\times A^T$, where $A$ is the data matrix and $A^T$ its transpose? I saw a professor on YouTube who explained PCA, but he said that the solution is the eigenvalues of $A\times A^T$.
Relationship between SVD and PCA. How to use SVD to perform PCA? Principal component analysis (PCA) is usually explained via an eigen-decomposition of the covariance matrix. However, it can also be performed via singular value decomposition (SVD) of the data matrix $\mathbf X$. How does it work? What is the connection between these two approaches? What is the relationship between SVD and PCA? Or in other words, how to use SVD of the data matrix to perform dimensionality reduction?
How does leave-one-out cross-validation work? How to select the final model out of $n$ different models? I have some data and I want to build a model (say a linear regression model) out of this data. In the next step, I want to apply Leave-One-Out Cross-Validation (LOOCV) on the model to see how well it performs. If I understood LOOCV right, I build a new model for each of my samples (the test set) using every sample except this sample (the training set). Then I use the model to predict the test set and calculate the errors $(\text{predicted} - \text{actual})$. In the next step I aggregate all the errors generated using a chosen function, for example mean squared error. I can use these values to judge the quality (or goodness of fit) of the model. Question: Which model is the model these quality values apply to, so which model should I choose if I find the metrics generated from LOOCV appropriate for my case? LOOCV looked at $n$ different models (where $n$ is the sample size); which one is the model I should choose? Is it the model which uses all the samples? This model was never calculated during the LOOCV process! Is it the model which has the least error?
eng_Latn
1,065
How does the hyperparameter lambda affect the L2 norm? Let's say I have L2 regularization in ridge regression. How would I go about giving a formal mathematical proof that the larger the lambda, the smaller the L2 norm? I know this is true, but I don't know how to give a mathematical proof.
The proof of shrinking coefficients using ridge regression through "spectral decomposition" I have understood how ridge regression shrinks coefficients towards zero geometrically. Moreover I know how to prove that in the special "Orthonormal Case," but I am confused how that works in the general case via "Spectral decomposition."
Is there a command for large middle delimiters consistent with \bigl and \bigr? After browsing through related threads, I am now of the understanding (please correct me if I am wrong) that the rule of thumb is to use the \bigl,\bigr pair with brackets, parentheses, etc. for operators like sums, products, and integrals, and \left,\right in all other cases. I only recently found out about \middle , which has solved what had been a vexing problem for me: delimiters for function arguments (slashes, bars) not scaling properly. Leafing through my cheatsheet has not been of much help, so I wish to ask: is there an equivalent of \middle for \bigl and \bigr, or would the use of \middle along with that pair not be gauche?
eng_Latn
1,066
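For the ridge records above, the general (non-orthonormal) shrinkage argument goes through the SVD of the design matrix, $X=UDV^T$ with singular values $d_j$ (a standard derivation, sketched here rather than quoted from any particular answer): every component of the solution is damped by a factor that is strictly decreasing in $\lambda$, so the L2 norm of the coefficients shrinks monotonically as $\lambda$ grows. $$\hat\beta_{\rm ridge}=(X^TX+\lambda I)^{-1}X^Ty=\sum_j v_j\,\frac{d_j}{d_j^2+\lambda}\,u_j^Ty,\qquad \|\hat\beta_{\rm ridge}\|_2^2=\sum_j\Big(\frac{d_j\,u_j^Ty}{d_j^2+\lambda}\Big)^2,$$ where each summand decreases as $\lambda$ increases because the $v_j$ are orthonormal; the orthonormal-design case $\hat\beta_{\rm ridge}=\hat\beta_{\rm OLS}/(1+\lambda)$ is recovered when all $d_j=1$.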
Implementation of PCA using SVD without creating covariance matrix So I'm currently taking a Machine Learning course and have correctly submitted my implementation of PCA. I used SVD. Here it is Octave. function [U, S] = pca(X) %PCA Run principal component analysis on the dataset X % [U, S, X] = pca(X) computes eigenvectors of the covariance matrix of X % Returns the eigenvectors U, the eigenvalues (on diagonal) in S % % Useful values [m, n] = size(X); % You need to return the following variables correctly. U = zeros(n); S = zeros(n); % ====================== YOUR CODE HERE ====================== % Instructions: You should first compute the covariance matrix. Then, you % should use the "svd" function to compute the eigenvectors % and eigenvalues of the covariance matrix. % % Note: When computing the covariance matrix, remember to divide by m (the % number of examples). % covarianceMatrix = (X' * X) ./ m; [U, S, V] = svd(covarianceMatrix); % ========================================================================= end I thought the point of using SVD in PCA implementations was to improve the computational efficiency and therefore not compute the covariance matrix (which can cause loss of precision). How come I was expected to create the covariance matrix? How can I implement PCA without creating a covariance matrix?
Relationship between SVD and PCA. How to use SVD to perform PCA? Principal component analysis (PCA) is usually explained via an eigen-decomposition of the covariance matrix. However, it can also be performed via singular value decomposition (SVD) of the data matrix $\mathbf X$. How does it work? What is the connection between these two approaches? What is the relationship between SVD and PCA? Or in other words, how to use SVD of the data matrix to perform dimensionality reduction?
Given the key, the plain text and the cipher text can I calculate the IV used in CBC mode? If I have the plain text, the ciphertext and the key for an AES-128 CBC operation, can I determine the IV, even if I don't know the padding (assuming the padding follows one of the more common formats)? I believe it should be possible since my understanding is that the IV is only used as an initial XOR of the plain text in encryption, and should, therefore, be able to be available like this: AESDecrypt(key, ciphertext) =&gt; (PlainText XOR IV); IV = ((Plaintext XOR IV) XOR (Plaintext)) Is this correct?
eng_Latn
1,067
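A sketch of the covariance-free route asked about in the preceding record: apply the SVD directly to the centered data matrix and read the covariance eigenvalues off the singular values. It is written in numpy rather than Octave to keep all the examples here in one language; the division by m mirrors the course convention quoted in the record.

```python
# PCA without forming the covariance matrix: SVD the centered data directly.
import numpy as np

def pca_via_svd(X):
    m = X.shape[0]
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt.T            # same eigenvectors as those of (Xc.T @ Xc) / m
    eigenvalues = (S ** 2) / m   # eigenvalues of the covariance matrix
    return components, eigenvalues

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 6))
components, eigenvalues = pca_via_svd(X)
print(eigenvalues)
```

Working on the data matrix directly avoids squaring its condition number, which is the precision argument mentioned in the question.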
I perform Principal Component Analysis using two variables that are standardized. This is done by applying a SVD on the correlation matrix of the concerned variates. However, the SVD gives me the same eigenvector (weights) irrespective of what the two variables are. It's always [.70710678, .70710678]. I find this strange. Of course, the eigenvalues differ. My question is: How to interpret this? PS. I wanted to conduct a total least squares regression on two variables. My statistical programme does not provide TLS, but TLS luckily equals Principal Component Analysis, as far as I know. Hence my question. The question is not about TLS directly, but why I get the same eigenvectors irrespective of which variables I use (as long as they are exactly 2).
I've been testing PCA via SVD to decompose a simple time series data matrix, $X$. I have two signals $x_1(t)$ and $x_2(t)$ in a data matrix where $M$ rows represents each timepoint sample and each column represents $x_1$ and $x_2$. The mean signal, $\hat{x}$, is defined as the mean along the row axis (average of $x_1$ and $x_2$ along each timepoint). I normalize each column of $X$ by subtracting its mean and dividing by the standard deviation. When I use [U S V] = svd(Xz) in matlab, regardless of how the variables are distributed (whether they are correlated or uncorrelated), one of the columns of the right singular matrix, V, always points in the same direction (to a multiplicative constant) as the mean vector $(1/2, 1/2)$. But when add an additional third time series, this is never the case (where the mean vector is $(1/3, 1/3, 1/3)$. Because I normalize the standard deviation for each vector, it does make sense that the direction of most variance given by PCA would be the diagonal 45 degree line. But if both variables $x_1$ and $x_2$ are independent gaussians, couldn't the PCA direction be any direction since the distribution is radially symmetric? MATLAB Code: s = RandStream('mcg16807', 'Seed', 0); RandStream.setDefaultStream(s); G = zeros(1000,2); G(:,1) = 40*randn(1000,1)-100; G(:,2) = tan(G(:,1)) + randn(1000,1); X = G - repmat(mean(G),[size(G,1), 1]); Xz = X./repmat(std(G),[size(G,1), 1]); [U S V] = svd(Xz); G_mean = mean(Xz,2); corrcoef(G_mean, U(:,1)) corrcoef(G_mean, U(:,2))
The new Top-Bar does not show reputation changes from Area 51.
eng_Latn
1,068
I'm working on an implementation of PCA that works on very large data sets. Based on my understanding of the algorithm, the first step is to do an SVD of the input m x n matrix, X. This SVD looks like X = WΣV^T. The "interesting" output Y of this process -- "The PCA transformation that preserves dimensionality (that is, gives the same number of principal components as original variables)" -- is given by the following equation: Based on my reading, if I can compute the W component of the SVD, then I can compute Y as: The upshot here is that I'd only have to compute W. In terms of computational and memory complexity, this approach is significantly more efficient because the only matrix above and beyond the initial data set I'd have to load has size m x m, which (at least in my case) is much smaller than V, which would be n x n. Is there some reason why this derivation won't work that I'm not seeing?
Principal component analysis (PCA) is usually explained via an eigen-decomposition of the covariance matrix. However, it can also be performed via singular value decomposition (SVD) of the data matrix $\mathbf X$. How does it work? What is the connection between these two approaches? What is the relationship between SVD and PCA? Or in other words, how to use SVD of the data matrix to perform dimensionality reduction?
Is it (always) true that $$\mathrm{Var}\left(\sum\limits_{i=1}^m{X_i}\right) = \sum\limits_{i=1}^m{\mathrm{Var}(X_i)} \>?$$
eng_Latn
1,069
Is there any customs problem while checking at the airport?
I am planning to travel to India (COK airport, Kerala) from the United States. I want to take my laptop as well as my tablet (iPad). Am I allowed to take both in my backpack with my documents related to my work? I think I am allowed to take only one computer as per the rule. Will customs consider my iPad a computer? What all can I take? One of my friends took 2 laptops (1 work and 1 personal) and had trouble at the airport. Also, I am planning to take some wristwatches (7 of them, each avg price is 45 $). I read on a website that I can take stuff up to a max of 35000 INR. Is that true? In that case, how will I manage my stuff?
I have 10 years of daily returns data for 28 different currencies. I wish to extract the first principal component, but rather than operate PCA on the whole 10 years, I want to rollapply a 2 year window, because the currencies' behaviours evolve and so I wish to reflect this. However I have a major problem, that is that both the princomp() and prcomp() functions will often jump from positive to negative loadings in adjacent PCA analyses (i.e. 1 day apart). Have a look at the loading chart for the EUR currency: Clearly I can't use this because adjacent loadings will jump from positive to negative, so my series which uses them will be erroneous. Now take a look at the absolute value of the EUR currency loading: The problem is of course that I still cannot use this because you can see from the top chart that the loading does go from negative to positive and back at times, a characteristic which I need to preserve. Is there any way I can get around this problem? Can I force the eigenvector orientation to always be the same in adjacent PCAs? By the way this problem also occurs with the FactoMineR PCA() function. The code for the rollapply is here: rollapply(retmat, windowl, function(x) summary(princomp(x))$loadings[, 1], by.column = FALSE, align = "right") -> princomproll
eng_Latn
1,070
I am currently trying to use classification analysis for some EEG data. As such data is of very high dimensionality, I am looking at using PCA for dimensionality reduction to prevent overfitting of the classification models. My data structure is approximately 50 (rows, observations) times 38000 (columns, variables). I used the Matlab ‘pca’ function to generate principal components from my variables. I have three questions about this. First, as stated on the Mathworks website (), rows of the input matrix X should correspond to observations and columns to variables, which is the case for my approach. However, the number of principal components is always equal to rows/observations-1 (I tried using different numbers of rows). Why is this the case? Should it be this way? To me, it would be more intuitive if the number of (maximal) components would be equal to columns/variables-1. Also, I observed that the sum of the output variable ‘explained’ is always 100, whether I have 5 or 50 principal components. Am I right to assume that this variable therefore does not refer to the proportion of the original data’s variance explained by the principal components but rather reflects the spread of ‘principal component’ variance across individual components? How can I find out the former? That is, how much of my data’s variance is included in the resulting principal components? Or do principal components always reflect the whole variance, no matter how few they might be? Finally, I understand the ‘scores’ variable so that it reflects my data’s variance, meaning that it can be used analogously to my original data’s variables (e.g. columns). Is this right? Or do I have to project my data back to the original axes after performing PCA and using only a subset of the components? If so, how do I then even reduce input dimensions? I tried ‘reversing’ PCA and I received the same number of variables as before, just with different values in the matrix.
In PCA, when the number of dimensions $d$ is greater than (or even equal to) the number of samples $N$, why is it that you will have at most $N-1$ non-zero eigenvectors? In other words, the rank of the covariance matrix amongst the $d\ge N$ dimensions is $N-1$. Example: Your samples are vectorized images, which are of dimension $d = 640\times480 = 307\,200$, but you only have $N=10$ images.
The entire site is blank right now. The header and footer are shown, but no questions.
eng_Latn
1,071
I have read many articles about PLS, but I could not understand the mathematical description yet. I know that it is quite similar to principal component regression (PCR), except that it takes into account the direction of the response variable. Could you please provide me with a simple mathematical explanation, e.g. the formula and the form of the coefficient estimates?
Can anyone recommend a good exposition of the theory behind partial least squares regression (available online) for someone who understands SVD and PCA? I have looked at many sources online and have not found anything that had the right combination of rigor and accessibility. I have looked into The Elements of Statistical Learning, which was suggested in a comment on a question asked on Cross Validated, , but I don't think that this reference does the topic justice (it's too brief to do so, and doesn't provide much theory on the subject). From what I've read, PLS exploits linear combinations of the predictor variables, $z_i=X \varphi_i$ that maximize the covariance $ y^Tz_i $ subject to the constraints $\|\varphi_i\|=1$ and $z_i^Tz_j=0$ if $i \neq j$, where the $\varphi_i$ are chosen iteratively, in the order in which they maximize the covariance. But even after all I've read, I'm still uncertain whether that is true, and if so, how the method is executed.
The entire site is blank right now. The header and footer are shown, but no questions.
eng_Latn
1,072
Let's say $a_1, \dots, a_n$ are normed vectors. Why is the set $C = \{\Sigma_{i=1}^n \lambda_ia_i: \lambda_i \ge0\}$ closed? The $\lambda$'s can be any non-negative numbers. So C is the set of all non-negative linear combinations of the n vectors. I tried both the method with open balls and the method with a converging sequence (show that the point the sequence is converging to is in C), but still I do not see why. (PS: If it is untrue in the generalized version, you can assume that the vectors are in $\mathbb{R}^n$.)
Looking for the proof of the lemma asserting that the conical surface (envelope) is a closed space. Thank you.
I am trying to fit a SVM to my data. My dataset contains 3 classes and I am performing 10 fold cross validation (in LibSVM): ./svm-train -g 0.5 -c 10 -e 0.1 -v 10 training_data The help thereby states: -c cost : set the parameter C of C-SVC, epsilon-SVR, and nu-SVR (default 1) For me, providing higher cost (C) values gives me higher accuracy. What does C in SVM actually mean? Why and when should I use higher/lower values (or the LibSVM given default value) of C?
eng_Latn
1,073
When should we use PCA over factor analysis? Aren't they essentially the same thing, except that factor analysis models observed variables as linear combinations of unobserved factors, whereas PCA models components as linear combinations of observed variables?
It seems that a number of the statistical packages that I use wrap these two concepts together. However, I'm wondering if there are different assumptions or data 'formalities' that must be true to use one over the other. A real example would be incredibly useful.
For a given data matrix $A$ (with variables in columns and data points in rows), it seems like $A^TA$ plays an important role in statistics. For example, it is an important part of the analytical solution of ordinary least squares. Or, for PCA, its eigenvectors are the principal components of the data. I understand how to calculate $A^TA$, but I was wondering if there's an intuitive interpretation of what this matrix represents, which leads to its important role?
eng_Latn
1,074
Let the random vector $x = (x_1,...,x_n)$ follow a multivariate normal distribution with mean $m$ and covariance matrix $S$. If $S$ is symmetric and positive definite (which is the usual case) then one can generate random samples from $x$ by first sampling independently $r_1,...,r_n$ from the standard normal and then using the formula $m + Lr$, where $L$ is the Cholesky lower factor so that $S=LL^T$ and $r = (r_1,...,r_n)^T$. What if one wants samples from a singular Gaussian, i.e. $S$ is still symmetric but no longer positive definite (only positive semi-definite)? We can assume also that the variances (diagonal elements of $S$) are strictly positive. Then some elements of $x$ must have a linear relationship and the distribution actually lies in a lower dimensional space with dimension $<n$, right? It is obvious that if e.g. $n=2, m = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, S = \begin{bmatrix} 1 & 1 \\ 1 & 1\end{bmatrix}$ then one can generate $x_1 \sim N(0,1)$ and set $x_2=x_1$ since they are fully correlated. However, are there any good methods for generating samples for the general case $n>2$? I guess one needs first to be able to identify the lower dimensional subspace, then move to that space where one will have a valid covariance matrix, then sample from it and finally deduce the values for the linearly dependent variables from this lower-dimensional sample. But what is the best way to do that in practice? Can someone point me to books or articles that deal with the topic; I could not find one.
I estimated the sample covariance matrix $C$ of a sample and got a symmetric matrix. With $C$, I would like to create $n$-variate normally distributed random numbers, but for that I need the Cholesky decomposition of $C$. What should I do if $C$ is not positive definite?
Let the random vector $x = (x_1,...,x_n)$ follow a multivariate normal distribution with mean $m$ and covariance matrix $S$. If $S$ is symmetric and positive definite (which is the usual case) then one can generate random samples from $x$ by first sampling independently $r_1,...,r_n$ from the standard normal and then using the formula $m + Lr$, where $L$ is the Cholesky lower factor so that $S=LL^T$ and $r = (r_1,...,r_n)^T$. What if one wants samples from a singular Gaussian, i.e. $S$ is still symmetric but no longer positive definite (only positive semi-definite)? We can assume also that the variances (diagonal elements of $S$) are strictly positive. Then some elements of $x$ must have a linear relationship and the distribution actually lies in a lower dimensional space with dimension $<n$, right? It is obvious that if e.g. $n=2, m = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, S = \begin{bmatrix} 1 & 1 \\ 1 & 1\end{bmatrix}$ then one can generate $x_1 \sim N(0,1)$ and set $x_2=x_1$ since they are fully correlated. However, are there any good methods for generating samples for the general case $n>2$? I guess one needs first to be able to identify the lower dimensional subspace, then move to that space where one will have a valid covariance matrix, then sample from it and finally deduce the values for the linearly dependent variables from this lower-dimensional sample. But what is the best way to do that in practice? Can someone point me to books or articles that deal with the topic; I could not find one.
eng_Latn
1,075
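A sketch of the standard recipe for the singular-Gaussian question in the preceding record, assuming numpy: eigendecompose $S$, keep only the directions whose eigenvalues are non-negligible, sample in that lower-dimensional space, and map back; the linearly dependent coordinates then come out automatically. The tolerance and function name are illustrative choices.

```python
# Sample from N(m, S) when S is symmetric positive semi-definite (possibly singular).
import numpy as np

def sample_singular_gaussian(m, S, size, tol=1e-10, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    evals, evecs = np.linalg.eigh(S)           # S = V diag(evals) V^T
    keep = evals > tol * evals.max()           # directions carrying actual variance
    A = evecs[:, keep] * np.sqrt(evals[keep])  # n x r factor with A @ A.T ~= S
    z = rng.standard_normal((size, keep.sum()))
    return m + z @ A.T

m = np.zeros(2)
S = np.array([[1.0, 1.0], [1.0, 1.0]])         # the rank-1 example from the record
x = sample_singular_gaussian(m, S, size=5)
print(x)                                       # the two coordinates coincide
```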
Why in spontaneous symmetry breaking do we only look at the scalar fields? In all the examples of SSB in our course/the books (even in our SUSY course) we have just looked at minima of the scalar potential. Why do we restrict ourselves to the scalars, why not also minimise the fermions and vector bosons?
Why cannot fermions have non-zero vacuum expectation value? In quantum field theory, scalar can take non-zero vacuum expectation value (vev). And this way they break symmetry of the Lagrangian. Now my question is what will happen if the fermions in the theory take non-zero vacuum expectation value? What forbids fermions to take vevs?
How can top principal components retain the predictive power on a dependent variable (or even lead to better predictions)? Suppose I am running a regression $Y \sim X$. Why, by selecting the top $k$ principal components of $X$, does the model retain its predictive power on $Y$? I understand that from a dimensionality-reduction/feature-selection point of view, if $v_1, v_2, ... v_k$ are the eigenvectors of the covariance matrix of $X$ with the top $k$ eigenvalues, then $Xv_1, Xv_2 ... Xv_k$ are the top $k$ principal components with maximum variances. We can thereby reduce the number of features to $k$ and retain most of the predictive power, as I understand it. But why do the top $k$ components retain the predictive power on $Y$? If we talk about a general OLS $Y \sim Z$, there is no reason to suggest that if feature $Z_i$ has maximum variance, then $Z_i$ has the most predictive power on $Y$. Update after seeing comments: I guess I have seen tons of examples of using PCA for dimensionality reduction. I have been assuming that means the dimensions we are left with have the most predictive power. Otherwise what's the point of dimensionality reduction?
eng_Latn
1,076
All function only checks last element of vector in R Sorry about my last lengthy post. Here is a more condensed problem that I am running into with the all function. As an example, below you can see my console output in which the all function seems to only check whether the last element of x satisfies the all condition. > x [1] 0.7583398 0.8352907 0.7004333 0.7004333 0.7004333 0.7004333 0.7004333 0.7004333 0.7004333 0.7004333 0.7004333 0.8683173 0.8683173 0.7004333 [15] 0.7004333 0.8683173 0.8683173 0.8683173 0.7004333 0.7004333 0.7004333 0.7004333 0.8683173 0.8683173 0.7004333 0.7004333 0.7004333 0.7004333 [29] 0.7004333 0.8683173 0.7004333 0.7004333 0.7004333 0.7004333 0.7004333 0.8683173 0.7004333 0.7004333 0.8683173 0.7004333 0.7004333 0.7004333 [43] 0.7004333 0.7004333 0.7004333 0.7004333 0.7004333 0.8683173 0.7004333 0.7004333 0.7004333 0.7004333 0.7004333 0.7004333 0.7004333 0.7004333 [57] 0.8683173 0.7004333 0.8683173 0.7004333 0.7004333 0.7004333 0.8683173 0.8683173 0.7004333 0.7004333 0.7004333 0.7004333 0.7004333 0.7004333 [71] 0.8683173 0.7004333 0.7004333 0.7004333 0.7004333 0.8683173 0.7004333 0.7004333 0.7004333 0.7004333 0.7004333 0.8683173 0.8683173 > x1[t] [1] 0.7004333 > x2[t] [1] 0.8683173 > x3[t] [1] 0.8683173 > all(c(x1[t],x2[t], x3[t]) %in% x) [1] FALSE > all(c(x2[t],x2[t], x3[t]) %in% x) [1] TRUE > all(c(x1[t],x1[t], x1[t]) %in% x) [1] FALSE
Is floating point math broken? Consider the following code: 0.1 + 0.2 == 0.3 -> false 0.1 + 0.2 -> 0.30000000000000004 Why do these inaccuracies happen?
Local polynomial regression: Why does the variance increase monotonically in the degree? How can I show that the variance of local polynomial regression is increasing with the degree of the polynomial (Exercise 6.3 in Elements of Statistical Learning, second edition)? This question has been asked but the answer just states it follows easily. More precisely, we consider $y_{i}=f(x_{i})+\epsilon_{i}$ with $\epsilon_{i}$ being independent with standard deviation $\sigma.$ The estimator is given by $$ \hat{f}(x_{0})=\left(\begin{array}{ccccc} 1 & x_{0} & x_{0}^{2} & \dots & x_{0}^{d}\end{array}\right)\left(\begin{array}{c} \alpha\\ \beta_{1}\\ \vdots\\ \beta_{d} \end{array}\right) $$ for $\alpha,\beta_{1},\dots,\beta_{d}$ solving the following weighted least squares problem $$ \min\left(y_{d}-\underbrace{\left(\begin{array}{ccccc} 1 & x_{1} & x_{1}^{2} & \dots & x_{1}^{d}\\ \vdots\\ 1 & & & & x_{n}^{d} \end{array}\right)}_{X}\left(\begin{array}{c} \alpha\\ \beta_{1}\\ \vdots\\ \beta_{d} \end{array}\right)\right)^{t}W\left(y-\left(\begin{array}{ccccc} 1 & x_{1} & x_{1}^{2} & \dots & x_{1}^{d}\\ \vdots\\ 1 & & & & x_{n}^{d} \end{array}\right)\left(\begin{array}{c} \alpha\\ \beta_{1}\\ \vdots\\ \beta_{d} \end{array}\right)\right) $$ for $W=\text{diag}\left(K(x_{0},x_{i})\right)_{i=1\dots n}$ with $K$ being the regression kernel. The solution to the weighted least squares problem can be written as $$ \left(\begin{array}{cccc} \alpha & \beta_{1} & \dots & \beta_{d}\end{array}\right)=\left(X^{t}WX\right)^{-1}X^{t}WY. $$ Thus, for $l(x_{0})=\left(\begin{array}{ccccc} 1 & x_{0} & x_{0}^{2} & \dots & x_{0}^{d}\end{array}\right)\left(X^{t}WX\right)^{-1}X^{t}W$ we obtain $$ \hat{f}(x_{0})=l(x_{0})Y $$ implying that $$ \text{Var }\hat{f}(x_{0})=\sigma^{2}\left\Vert l(x_{0})\right\Vert ^{2}=\sigma^{2}\left(\begin{array}{ccccc} 1 & x_{0} & x_{0}^{2} & \dots & x_{0}^{d}\end{array}\right)\left(X^{t}WX\right)^{-1}X^{t}W^{2}X\left(X^{t}WX\right)^{-1}\left(\begin{array}{ccccc} 1 & x_{0} & x_{0}^{2} & \dots & x_{0}^{d}\end{array}\right)^{t}. $$ My approach: an induction using the formula for the inverse of a block matrix, but I did not succeed. The paper by D. Ruppert and M. P. Wand derives an asymptotic expression for the variance for $n\rightarrow\infty$ in Theorem 4.1, but it is not clear that it is increasing in the degree.
yue_Hant
1,077
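The R behaviour in the record above is the same floating-point-equality pitfall as the canonical 0.1 + 0.2 question, not a bug in all(): %in% compares values exactly. A small Python sketch of the effect and of a tolerance-based membership check (the numbers are illustrative):

import numpy as np

print(0.1 + 0.2 == 0.3)     # False: 0.3 has no exact binary representation
print(0.1 + 0.2)            # 0.30000000000000004

x = np.array([0.3, 0.7004333])
value = 0.1 + 0.2           # computed, so it differs from the stored 0.3 in the last bits

print(value in x.tolist())                             # False: exact comparison, like %in%
print(bool(np.any(np.isclose(x, value, atol=1e-12))))  # True: compare with a tolerance instead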
Adding more samples to ordinary regression is equal to ridge regression I am a beginner in machine learning. I have a question: why is adding more samples to a data set equivalent to adding a regularization term to the loss function? (In other words, why can I add more samples to my data set and solve OLS instead of solving ridge regression?)
How to derive the ridge regression solution? I am having some issues with the derivation of the solution for ridge regression. I know the regression solution without the regularization term: $$\beta = (X^TX)^{-1}X^Ty.$$ But after adding the L2 term $\lambda\|\beta\|_2^2$ to the cost function, how come the solution becomes $$\beta = (X^TX + \lambda I)^{-1}X^Ty.$$
What's the relationship between initial eigenvalues and sums of squared loadings in factor analysis? On the one hand I read in a comment that: You can't speak of "eigenvalues" after rotation, even orthogonal rotation. Perhaps you mean sum of squared loadings for a principal component, after rotation. When rotation is oblique, this sum of squares tells nothing about the amount of variance explained, because components aren't orthogonal anymore. So, you shouldn't report any percentage of variance explained. On the other hand, I sometimes read in books people saying things like: The eigenvalues associated with each factor represent the variance explained by that particular factor; SPSS also displays the eigenvalue in terms of the percentage of variance explained (so factor 1 explains 31.696% of total variance). The first few factors explain relatively large amounts of variance (especially factor 1), whereas subsequent factors explain only small amounts of variance. SPSS then extracts all factors with eigenvalues greater than 1, which leaves us with four factors. The eigenvalues associated with these factors are again displayed (and the percentage of variance explained) in the columns labelled Extraction Sums of Squared Loadings. That text is from Field (2013) Discovering statistics using IBM SPSS, and this diagram accompanies it. I'm wondering Who is correct about whether it's possible to speak of eigenvalues after rotation? Would it matter if it was an oblique or orthogonal rotation? Why are the "initial eigenvalues" different from the "extraction sums of squared loadings"? Which is a better measure of total variation explained by the factors (or principal components or whatever method is used)? Should I say that the first four factors explain 50.317% of variation, or 40.477%?
eng_Latn
1,078
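A sketch tying the two posts in the record above together: the ridge closed form, and the fact that ridge equals ordinary least squares run on a data set augmented with sqrt(lambda) * I pseudo-rows and zero targets - so "adding samples" reproduces ridge only for these specific artificial rows, not for arbitrary extra data (NumPy assumed; sizes and coefficients are illustrative):

import numpy as np

rng = np.random.default_rng(1)
n, p, lam = 50, 3, 2.0
X = rng.standard_normal((n, p))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(n)

# Closed-form ridge: beta = (X^T X + lambda I)^{-1} X^T y
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# OLS on the augmented data [X; sqrt(lambda) I], [y; 0]:
# ||y_aug - X_aug b||^2 = ||y - X b||^2 + lambda ||b||^2
X_aug = np.vstack([X, np.sqrt(lam) * np.eye(p)])
y_aug = np.concatenate([y, np.zeros(p)])
beta_aug, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)

print(np.allclose(beta_ridge, beta_aug))   # True: the two estimators coincide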
Is there a geometric intuition for integration by parts? Is there a geometric intuition for integration by parts? $$\int f(x)g'(x)\,dx = f(x)g(x) - \int g(x)f'(x)\,dx$$ This can, of course, be shown algebraically by the product rule, but where is the geometric intuition? I have seen the geometry of IBP using parametric equations, but I don't get it. Newest edit: a few similar questions have been asked before, but they use parametric equations to show the geometry behind IBP. I am interested in whether there is a geometric intuition which uses functions in the Cartesian plane, or some other, maybe more natural, explanation.
What is integration by parts, really? Integration by parts comes up a lot - for instance, it appears in the definition of a weak derivative / distributional derivative, or as a tool that one can use to turn information about higher derivatives of a function into information about an integral of that function. Concrete examples of this latter category include: proving that $f \in C^2(S^1)$ implies that the Fourier series of $f$ converges absolutely and uniformly, and the Taylor series expansion with the integral formula for remainder. However, I don't feel like I really understand what integration by parts is really doing. To me, it is just an algebraic trick that follows from the fundamental theorem of calculus and the product rule. Is there some more conceptual way to think about it? How do you think about this useful idea?
What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? I've read a lot about PCA, including various tutorials and questions (such as , , , and ). The geometric problem that PCA is trying to optimize is clear to me: PCA tries to find the first principal component by minimizing the reconstruction (projection) error, which simultaneously maximizes the variance of the projected data. When I first read that, I immediately thought of something like linear regression; maybe you can solve it using gradient descent if needed. However, then my mind was blown when I read that the optimization problem is solved by using linear algebra and finding eigenvectors and eigenvalues. I simply do not understand how this use of linear algebra comes into play. So my question is: How can PCA turn from a geometric optimization problem to a linear algebra problem? Can someone provide an intuitive explanation? I am not looking for an answer like that says "When you solve the mathematical problem of PCA, it ends up being equivalent to finding the eigenvalues and eigenvectors of the covariance matrix." Please explain why eigenvectors come out to be the principal components and why the eigenvalues come out to be variance of the data projected onto them I am a software engineer and not a mathematician, by the way. Note: the figure above was taken and modified from .
eng_Latn
1,079
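For the integration-by-parts record above, the algebraic two-liner that any geometric picture has to encode is short enough to state here (a standard derivation, not taken from the quoted posts): the product rule plus the fundamental theorem of calculus, with the boundary term $[fg]_a^b$ playing the role of the rectangle areas in the usual picture in the $(f,g)$-plane.

% Product rule, integrated over [a, b] via the fundamental theorem of calculus:
\begin{align*}
  \frac{d}{dx}\bigl(f(x)g(x)\bigr) &= f'(x)g(x) + f(x)g'(x), \\
  f(b)g(b) - f(a)g(a) &= \int_a^b f'(x)g(x)\,dx + \int_a^b f(x)g'(x)\,dx, \\
  \int_a^b f(x)g'(x)\,dx &= \Bigl[f(x)g(x)\Bigr]_a^b - \int_a^b g(x)f'(x)\,dx.
\end{align*}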
Does matrix multiplication preserve positive semi-definiteness [PSD]? If $A,B$ are two PSD matrices, will $AB$ also be PSD?
Is the product of symmetric positive semidefinite matrices positive definite? I see on Wikipedia that the product of two commuting symmetric positive definite matrices is also positive definite. Does the same result hold for the product of two positive semidefinite matrices? My proof of the positive definite case falls apart for the semidefinite case because of the possibility of division by zero...
What's the relationship between initial eigenvalues and sums of squared loadings in factor analysis? On the one hand I read in a comment that: You can't speak of "eigenvalues" after rotation, even orthogonal rotation. Perhaps you mean sum of squared loadings for a principal component, after rotation. When rotation is oblique, this sum of squares tells nothing about the amount of variance explained, because components aren't orthogonal anymore. So, you shouldn't report any percentage of variance explained. On the other hand, I sometimes read in books people saying things like: The eigenvalues associated with each factor represent the variance explained by that particular factor; SPSS also displays the eigenvalue in terms of the percentage of variance explained (so factor 1 explains 31.696% of total variance). The first few factors explain relatively large amounts of variance (especially factor 1), whereas subsequent factors explain only small amounts of variance. SPSS then extracts all factors with eigenvalues greater than 1, which leaves us with four factors. The eigenvalues associated with these factors are again displayed (and the percentage of variance explained) in the columns labelled Extraction Sums of Squared Loadings. That text is from Field (2013) Discovering statistics using IBM SPSS, and this diagram accompanies it. I'm wondering Who is correct about whether it's possible to speak of eigenvalues after rotation? Would it matter if it was an oblique or orthogonal rotation? Why are the "initial eigenvalues" different from the "extraction sums of squared loadings"? Which is a better measure of total variation explained by the factors (or principal components or whatever method is used)? Should I say that the first four factors explain 50.317% of variation, or 40.477%?
eng_Latn
1,080
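A quick numerical companion to the record above, checking that the product of two symmetric positive (semi)definite matrices need not even be symmetric, while commuting PSD matrices do give a PSD product (the matrices are arbitrary illustrative choices; NumPy assumed):

import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])   # symmetric positive definite
B = np.array([[1.0, 2.0], [2.0, 5.0]])   # symmetric positive definite
P = A @ B

print(np.allclose(P, P.T))     # False: AB is not symmetric, so not PSD in the usual sense
print(np.linalg.eigvals(P))    # eigenvalues are still real and positive, since AB is
                               # similar to the symmetric PD matrix A^{1/2} B A^{1/2}

# Commuting case: simultaneously diagonalizable, so the product is symmetric PSD.
C = np.diag([1.0, 4.0])
D = np.diag([3.0, 2.0])
print(np.allclose(C @ D, D @ C), np.linalg.eigvalsh(C @ D))   # True [3. 8.]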
Diagonalization: Eigenvalues Vs Elementary Row Operations Using elementary row operations, a matrix A $\in \mathrm{R}^{n \times n} $ can be reduced to a Row-Reduced Echelon (RRE) form. Using the RRE form of A, the bases of Nullspace and Range can be obtained. The RRE form of A is a triangular (almost diagonal) form of A and is row-equivalent to A. If we are able to get a simple form of A, why do we need to get eigenvalues and diagonalize A ? What is the motivation for having an alternative method to simplify(diagonalize) A ? Why would one choose to diagonalize A using eigen values ?
What is the importance of eigenvalues/eigenvectors? What is the importance of eigenvalues/eigenvectors?
Recursive feature elimination and one-hot & dummy encoding? When using RFE in linear regression and logistic regression, do we one-hot encode the features (K levels and K dummy features) or dummy-encode the features (K levels and K-1 dummy features leaving one out). As per a comment by @Matthew Drury in an answer (URL below), one hot encoding is applied for a regularized linear model and for unregularized linear model dummy encoding. My doubt is what type of encoding when using RFE without any L1/L2 penalties. My understanding is since in RFE some features gets eliminated so if for a categorical variable with say 4 levels we do dummy encoding and have 3 features/levels in model &amp; RFE eliminated 1, we will only have 2 features/levels left and the interpretation of its coefficient would not make sense in absence of the one level which was left out as reference. Whereas if we have done one-hot encoding and RFE considers 2 features as important and eliminates other 2 then we can very well judge/interpret the coefficients or importance of 2 features RFE keeps. So question which type of encoding is needed to be done when using RFE with linear and logistic regression?
eng_Latn
1,081
Artificial Neural Network with continuous and binary variables I have a dataset with numerical (continuous) and categorical variables. I want to fit an artificial neural network. To do so, I have transformed my categorical variables by using the 1-of-k method, so I now have a bunch of binary variables. I am using the Neuralnet package in R to fit my ANN model. From a theoretical point of view: is there any problem with mixing the continuous and binary variables in the same model? The 1-of-k method is illustrated below: The data set [1] Sweden [2] India becomes Sweden India [1] 1 0 [2] 0 1
Neural Network: MLP for regression with 3 continuous features, 1 categorical I am starting to study Neural Network. I want to build a MLP where I will feed it: 3 features which are continuous one feature which is categorical (48 classes) How can I do this? Before adding the categorical feature, I was using 'relu' activation function and 'lbfgs' as a solver in MLPRegrssor in sklearn. I was wondering: how can I integrate this variable into the network? Should I do one-hot encoding? Should I keep relu as activation function or should I try sigmoid, softmax or something like that?
What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? I've read a lot about PCA, including various tutorials and questions (such as , , , and ). The geometric problem that PCA is trying to optimize is clear to me: PCA tries to find the first principal component by minimizing the reconstruction (projection) error, which simultaneously maximizes the variance of the projected data. When I first read that, I immediately thought of something like linear regression; maybe you can solve it using gradient descent if needed. However, then my mind was blown when I read that the optimization problem is solved by using linear algebra and finding eigenvectors and eigenvalues. I simply do not understand how this use of linear algebra comes into play. So my question is: How can PCA turn from a geometric optimization problem to a linear algebra problem? Can someone provide an intuitive explanation? I am not looking for an answer like that says "When you solve the mathematical problem of PCA, it ends up being equivalent to finding the eigenvalues and eigenvectors of the covariance matrix." Please explain why eigenvectors come out to be the principal components and why the eigenvalues come out to be variance of the data projected onto them I am a software engineer and not a mathematician, by the way. Note: the figure above was taken and modified from .
eng_Latn
1,082
Use of this positive semi-definite matrix in optimization? We know that any matrix of the form $A^TA$ is positive semi-definite where $A^T$ is the transpose of $A$. Now how can we use this result in optimization? Edit: The importance of positive semi-definite matrices is almost clear, but my question is specific to $A^TA$. I have no idea of how we can use this matrix.
Why are symmetric positive definite (SPD) matrices so important? I know the definition of symmetric positive definite (SPD) matrix, but want to understand more. Why are they so important, intuitively? Here is what I know. What else? For a given data, Co-variance matrix is SPD. Co-variance matrix is a important metric, see this for intuitive explanation. The quadratic form $\frac 1 2 x^\top Ax-b^\top x +c$ is convex, if $A$ is SPD. Convexity is a nice property for a function that can make sure the local solution is global solution. For Convex problems, there are many good algorithms to solve, but not for non-covex problems. When $A$ is SPD, the optimization solution for the quadratic form $$\text{minimize}~~~ \frac 1 2 x^\top Ax-b^\top x +c$$ and the solution for linear system $$Ax=b$$ are the same. So we can run conversions between two classical problems. This is important because it enables us to use tricks discovered in one domain in the another. For example, we can use the conjugate gradient method to solve a linear system. There are many good algorithms (fast, numerical stable) that work better for an SPD matrix, such as Cholesky decomposition. EDIT: I am not trying ask the identities for SPD matrix, but the intuition behind the property to show the importance. For example, as mentioned by @Matthew Drury, if a matrix is SPD, Eigenvalues are all positive real numbers, but why all positive matters. @Matthew Drury had a great answer to flow and that is what I was looking for.
What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? I've read a lot about PCA, including various tutorials and questions (such as , , , and ). The geometric problem that PCA is trying to optimize is clear to me: PCA tries to find the first principal component by minimizing the reconstruction (projection) error, which simultaneously maximizes the variance of the projected data. When I first read that, I immediately thought of something like linear regression; maybe you can solve it using gradient descent if needed. However, then my mind was blown when I read that the optimization problem is solved by using linear algebra and finding eigenvectors and eigenvalues. I simply do not understand how this use of linear algebra comes into play. So my question is: How can PCA turn from a geometric optimization problem to a linear algebra problem? Can someone provide an intuitive explanation? I am not looking for an answer like that says "When you solve the mathematical problem of PCA, it ends up being equivalent to finding the eigenvalues and eigenvectors of the covariance matrix." Please explain why eigenvectors come out to be the principal components and why the eigenvalues come out to be variance of the data projected onto them I am a software engineer and not a mathematician, by the way. Note: the figure above was taken and modified from .
eng_Latn
1,083
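For the record above, the one-line reason $A^{\mathsf T}A$ is positive semi-definite, and the standard place this shows up in optimization (least squares); this is a generic textbook observation, not a claim about the quoted posts:

% A^T A is PSD because a quadratic form in it is a squared norm:
x^{\mathsf T} A^{\mathsf T} A\, x \;=\; (Ax)^{\mathsf T}(Ax) \;=\; \lVert Ax \rVert_2^2 \;\ge\; 0 .
% Consequence: f(x) = \lVert Ax - b \rVert_2^2 has Hessian 2 A^{\mathsf T} A, hence is convex,
% and any solution of the normal equations A^{\mathsf T} A x = A^{\mathsf T} b is a global minimiser.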
PCA returns the same pair of principal axes for completely different 2D datasets I noticed a (seemingly) weird behavior while using sklearn's PCA on 2D datasets: I kept getting the same principal axes: $\pm\left(\begin{gathered}\sqrt{0.5}\\ \sqrt{0.5} \end{gathered} \right)$ and $\pm\left(\begin{gathered}\sqrt{0.5}\\ -\sqrt{0.5} \end{gathered} \right)$ (i.e. the lines $y = x$ and $y = -x$), even when I significantly changed the dataset. Just to be sure, I wrote a short script that demonstrates this behavior: (The script creates nonsense datasets from different distributions, and standardizes and performs PCA on each dataset.) import numpy as np from sklearn.preprocessing import StandardScaler from sklearn.decomposition import PCA from sklearn.pipeline import make_pipeline dists_funcs = [np.random.chisquare, np.random.exponential, np.random.power, np.random.standard_gamma, np.random.weibull, np.random.rayleigh, np.random.pareto, np.random.poisson, np.random.standard_t] sqrt_of_half = 0.5 ** 0.5 n = 1000 for i, dist_func1 in enumerate(dists_funcs[:-1]): dist_func2 = dists_funcs[i + 1] X = np.array([(a, a * dist_func2(i + 2)) for a in dist_func1(i + 3, n)]) pipe = make_pipeline(StandardScaler(), PCA(n_components=2)) pipe.fit(X) pca = pipe.named_steps['pca'] for principal_axe in pca.components_: for z in principal_axe: if abs(abs(z) - sqrt_of_half) > 1e-10: print(f'got {principal_axe} in {i}') print('done') Is this behavior guaranteed? Is there an intuitive explanation for it?
Does a correlation matrix of two variables always have the same eigenvectors? I perform Principal Component Analysis using two variables that are standardized. This is done by applying a SVD on the correlation matrix of the concerned variates. However, the SVD gives me the same eigenvector (weights) irrespective of what the two variables are. It's always [.70710678, .70710678]. I find this strange. Of course, the eigenvalues differ. My question is: How to interpret this? PS. I wanted to conduct a total least squares regression on two variables. My statistical programme does not provide TLS, but TLS luckily equals Principal Component Analysis, as far as I know. Hence my question. The question is not about TLS directly, but why I get the same eigenvectors irrespective of which variables I use (as long as they are exactly 2).
Local polynomial regression: Why does the variance increase monotonically in the degree? How can I show that the variance of local polynomial regression is increasing with the degree of the polynomial (Exercise 6.3 in Elements of Statistical Learning, second edition)? This question has been asked but the answer just states it follows easily. More precisely, we consider $y_{i}=f(x_{i})+\epsilon_{i}$ with $\epsilon_{i}$ being independent with standard deviation $\sigma.$ The estimator is given by $$ \hat{f}(x_{0})=\left(\begin{array}{ccccc} 1 & x_{0} & x_{0}^{2} & \dots & x_{0}^{d}\end{array}\right)\left(\begin{array}{c} \alpha\\ \beta_{1}\\ \vdots\\ \beta_{d} \end{array}\right) $$ for $\alpha,\beta_{1},\dots,\beta_{d}$ solving the following weighted least squares problem $$ \min\left(y_{d}-\underbrace{\left(\begin{array}{ccccc} 1 & x_{1} & x_{1}^{2} & \dots & x_{1}^{d}\\ \vdots\\ 1 & & & & x_{n}^{d} \end{array}\right)}_{X}\left(\begin{array}{c} \alpha\\ \beta_{1}\\ \vdots\\ \beta_{d} \end{array}\right)\right)^{t}W\left(y-\left(\begin{array}{ccccc} 1 & x_{1} & x_{1}^{2} & \dots & x_{1}^{d}\\ \vdots\\ 1 & & & & x_{n}^{d} \end{array}\right)\left(\begin{array}{c} \alpha\\ \beta_{1}\\ \vdots\\ \beta_{d} \end{array}\right)\right) $$ for $W=\text{diag}\left(K(x_{0},x_{i})\right)_{i=1\dots n}$ with $K$ being the regression kernel. The solution to the weighted least squares problem can be written as $$ \left(\begin{array}{cccc} \alpha & \beta_{1} & \dots & \beta_{d}\end{array}\right)=\left(X^{t}WX\right)^{-1}X^{t}WY. $$ Thus, for $l(x_{0})=\left(\begin{array}{ccccc} 1 & x_{0} & x_{0}^{2} & \dots & x_{0}^{d}\end{array}\right)\left(X^{t}WX\right)^{-1}X^{t}W$ we obtain $$ \hat{f}(x_{0})=l(x_{0})Y $$ implying that $$ \text{Var }\hat{f}(x_{0})=\sigma^{2}\left\Vert l(x_{0})\right\Vert ^{2}=\sigma^{2}\left(\begin{array}{ccccc} 1 & x_{0} & x_{0}^{2} & \dots & x_{0}^{d}\end{array}\right)\left(X^{t}WX\right)^{-1}X^{t}W^{2}X\left(X^{t}WX\right)^{-1}\left(\begin{array}{ccccc} 1 & x_{0} & x_{0}^{2} & \dots & x_{0}^{d}\end{array}\right)^{t}. $$ My approach: an induction using the formula for the inverse of a block matrix, but I did not succeed. The paper by D. Ruppert and M. P. Wand derives an asymptotic expression for the variance for $n\rightarrow\infty$ in Theorem 4.1, but it is not clear that it is increasing in the degree.
eng_Latn
1,084
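The record above has a short explanation: after StandardScaler both variables have unit variance, so PCA sees the 2x2 correlation matrix [[1, r], [r, 1]], and for any r != 0 that matrix has eigenvectors (1, 1)/sqrt(2) and (1, -1)/sqrt(2) regardless of the data. A minimal check (NumPy assumed; r = 0 is excluded because there the eigenvectors are not unique):

import numpy as np

for r in (-0.9, -0.3, 0.5, 0.99):
    C = np.array([[1.0, r], [r, 1.0]])   # correlation matrix of two standardized variables
    w, V = np.linalg.eigh(C)             # eigenvalues are 1 - r and 1 + r
    print(r, np.round(V, 4))             # columns are +/-(1,-1)/sqrt(2) and +/-(1,1)/sqrt(2),
                                         # up to sign and ordering, for every r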
Why does the p-value of a composite null hypothesis have a supremum attached it? I noticed that there is a definition of the p-value in my textbook. It is defined as the p-value of a composite null hypothesis and it says the following: I have no idea why it is written with a supremum. I've spent hours pondering this, does anyone have enough of a background to help me with this? Thank you!
Why is the p-value written with a supremum? I noticed that the definition of the p-value in my textbook says the following: I have no idea why it is written with a supremum. I've spent hours pondering this; does anyone have enough of a background to help me with this? Thank you!
How can top principal components retain the predictive power on a dependent variable (or even lead to better predictions)? Suppose I am running a regression $Y \sim X$. Why, by selecting the top $k$ principal components of $X$, does the model retain its predictive power on $Y$? I understand that from a dimensionality-reduction/feature-selection point of view, if $v_1, v_2, ... v_k$ are the eigenvectors of the covariance matrix of $X$ with the top $k$ eigenvalues, then $Xv_1, Xv_2 ... Xv_k$ are the top $k$ principal components with maximum variances. We can thereby reduce the number of features to $k$ and retain most of the predictive power, as I understand it. But why do the top $k$ components retain the predictive power on $Y$? If we talk about a general OLS $Y \sim Z$, there is no reason to suggest that if feature $Z_i$ has maximum variance, then $Z_i$ has the most predictive power on $Y$. Update after seeing comments: I guess I have seen tons of examples of using PCA for dimensionality reduction. I have been assuming that means the dimensions we are left with have the most predictive power. Otherwise what's the point of dimensionality reduction?
eng_Latn
1,085
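The definition both posts in the record above are asking about, written out in generic notation (T is a test statistic whose large values count against the null; this is not a quote from the textbook in question):

% p-value for a composite null H_0 : \theta \in \Theta_0
p(x) \;=\; \sup_{\theta \in \Theta_0} \Pr_{\theta}\!\bigl(T(X) \ge T(x)\bigr)
% Taking the supremum over all null parameters makes the p-value valid for the
% entire null hypothesis: rejecting when p(x) \le \alpha keeps the type I error
% at most \alpha for every \theta \in \Theta_0, not just for one convenient \theta.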
Inversion in $GF(2^8)$ AES I was wondering how you would calculate the S-box in AES. I found that you have to calculate the inverse of the polynomials in $GF(2^8)$. I found out that to calculate the inverse, you have to use the Extended Euclidean Algorithm. What I can't figure out is how do you apply this to a polynomial?
Multiplicative inverse in $\operatorname{GF}(2^8)$? I know how to do multiplication over ${\rm GF}(2^8)$: uint8_t gmul(uint8_t a, uint8_t b) { uint8_t p=0; uint8_t carry; int i; for(i=0;i<8;i++) { if(b & 1) p ^=a; carry = a & 0x80; a = a<<1; if(carry) a^=0x1b; b = b>>1; } return p; } So, I tried to create a ${\rm GF}(2^8)$ multiplication table using this code. I've given below the values in the 3rd row of the table, but I don't think they're correct: 0 1 2 3 4 5 6 7 8 9 A B C D E F 0 1 2 0 2 4 6 8 A C E 10 12 14 16 18 1A 1C 1E 3 . . E F I don't know what went wrong. I built the table by multiplying the values in the first row with those in the first column. E.g. in the third row, I multiplied 2 × 0, 2 × 1, …, 2 × E, 2 × F. How can I create a multiplication table for arithmetic in ${\rm GF}(2^8)$? Also, how can I find the multiplicative inverse of a number in ${\rm GF}(2^8)$? For example, how can I determine that the inverse of 95 is 8A? I tried to do this using the multiplication table above, but when I took 9th row and the 5th column in the multiplication table I got 2D, not 8A.
What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? I've read a lot about PCA, including various tutorials and questions (such as , , , and ). The geometric problem that PCA is trying to optimize is clear to me: PCA tries to find the first principal component by minimizing the reconstruction (projection) error, which simultaneously maximizes the variance of the projected data. When I first read that, I immediately thought of something like linear regression; maybe you can solve it using gradient descent if needed. However, then my mind was blown when I read that the optimization problem is solved by using linear algebra and finding eigenvectors and eigenvalues. I simply do not understand how this use of linear algebra comes into play. So my question is: How can PCA turn from a geometric optimization problem to a linear algebra problem? Can someone provide an intuitive explanation? I am not looking for an answer like that says "When you solve the mathematical problem of PCA, it ends up being equivalent to finding the eigenvalues and eigenvectors of the covariance matrix." Please explain why eigenvectors come out to be the principal components and why the eigenvalues come out to be variance of the data projected onto them I am a software engineer and not a mathematician, by the way. Note: the figure above was taken and modified from .
eng_Latn
1,086
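A sketch for the record above, mirroring the C routine in Python and computing inverses in GF(2^8) with the AES polynomial 0x11b; instead of the extended Euclidean algorithm it uses a^{-1} = a^{254} (since a^{255} = 1 for nonzero a), a standard alternative rather than the exact method named in the question:

def gmul(a, b):
    # GF(2^8) multiplication with the AES reduction polynomial x^8 + x^4 + x^3 + x + 1
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
        b >>= 1
    return p

def ginv(a):
    # Inverse as a^254 by square-and-multiply; 0 is mapped to 0, as in the AES S-box.
    result, power = 1, a
    for bit in range(8):
        if (254 >> bit) & 1:
            result = gmul(result, power)
        power = gmul(power, power)
    return result

print(hex(ginv(0x95)), gmul(0x95, 0x8A))   # expected: 0x8a 1, the pair quoted in the question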
How to extend a raster layer without losing pixel value data I have 2 different raster layers, one of them black and white, representing the river. This river raster has pixel values that stand for height. I want to extend the size of the river in order to show which areas would be under water if the water level increases. But I don't know how to do the extension. Also, the areas outside of the river raster (transparent in the picture) have no-data values. Pixels in the extended areas should take the same value as the nearest river pixel. I guess I have to do it with the raster calculator, but I cannot find out how to code this.
Increasing Flood Plain I am doing a project where we are attempting to determine how many more buildings would flood if the FEMA flood plain increased by 1', 2', and 5'. I have the elevation of the flood plain rasters from lidar data, but I am having trouble figuring out how much more area will be inundated by the rises I described earlier. I'm using the raster calculator, and I have a buffer for the flood plain that also has elevation data. Any thoughts?
Before running a ridge regression model, do I need to preform variable selection? I am currently constructing a model that uses last year's departmental information to predict employee churn for the current year. I have 55 features and 318 departments in my data set. A good portion of my independent variables are correlated, and because of this, I believe that performing a ridge regression on my data will lead to optimal predictions when I bring the model into production. I have studied ridge regression and understand that the lambda coefficient computed for a given predictor can minimize the effect that predictor has on the model to next to nothing. Does this mean that performing a ridge regression means I don't have to bother with variable selection? If I do need to perform a variable selection technique, would implementing a stepwise regression and then using those selected variables in my ridge regression be a valid approach at variable selection? I already posted this question on stack exchange but was informed that stack exchange was the better platform to ask statistical questions. I am sorry for the confusion.
eng_Latn
1,087
AES/CBC fixed initialization vector use case I am using AES/CBC to encrypt my HTTP cookie. I never encrypt the same cookie value twice, so my understanding is that I don't need to use a random initialization vector - using a fixed initialization vector is fine for this case. A random initialization vector is needed if we may encrypt the same message more than once. Is my understanding correct for my case?
Is AES in CBC mode secure if a known and/or fixed IV is used? I have a need to encrypt credentials for a third-party app used by a secured internal app. Over on ITSec.SE, I was helpfully shown a scheme to encrypt the third-party credentials based on a hash of the credentials for the internal app. I picked AES as the encryption algorithm, but the problem is that the password-based scheme doesn't produce a "secret" IV. So, the IV must at least be known to an attacker (stored alongside the encrypted data). A hash value used for password verification could work, or I could just generate a pseudorandom byte array and drop it in the DB as a new column. I was further considering, for simplicity, to use a constant IV. What adverse effect will either of these choices have on the security of the encryption? Does AES depend, as many block ciphers do, on an unpredictable IV? Does it matter if the IV is stored as unencrypted plaintext?
What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? I've read a lot about PCA, including various tutorials and questions (such as , , , and ). The geometric problem that PCA is trying to optimize is clear to me: PCA tries to find the first principal component by minimizing the reconstruction (projection) error, which simultaneously maximizes the variance of the projected data. When I first read that, I immediately thought of something like linear regression; maybe you can solve it using gradient descent if needed. However, then my mind was blown when I read that the optimization problem is solved by using linear algebra and finding eigenvectors and eigenvalues. I simply do not understand how this use of linear algebra comes into play. So my question is: How can PCA turn from a geometric optimization problem to a linear algebra problem? Can someone provide an intuitive explanation? I am not looking for an answer like that says "When you solve the mathematical problem of PCA, it ends up being equivalent to finding the eigenvalues and eigenvectors of the covariance matrix." Please explain why eigenvectors come out to be the principal components and why the eigenvalues come out to be variance of the data projected onto them I am a software engineer and not a mathematician, by the way. Note: the figure above was taken and modified from .
eng_Latn
1,088
Selection of an ARIMA model looking at the ACF and PACF I am using the table below as a model selection tool (at least as a starting point). Let's say that I choose a proper model according to the table and I get a nice ACF and PACF out of it, but either my AR term or my MA term is pretty high. Is there a way to simplify it? Note: I don't know if it is relevant, but I am using R.
How do ACF & PACF identify the order of MA and AR terms? It's been more than 2 years that I have been working on different time series. I have read in many articles that the ACF is used to identify the order of the MA term, and the PACF for the AR term. There is a rule of thumb that, for MA, the lag where the ACF shuts off suddenly is the order of the MA term, and similarly for the PACF and AR. Here is what I followed from PennState Eberly College of Science. My question is: why is it so? For me, even the ACF can give the AR term. I need an explanation of the rule of thumb mentioned above. I am not able to understand intuitively/mathematically why - Identification of an AR model is often best done with the PACF. Identification of an MA model is often best done with the ACF rather than the PACF. Please note: I don't need "how" but "WHY". :)
What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? I've read a lot about PCA, including various tutorials and questions (such as , , , and ). The geometric problem that PCA is trying to optimize is clear to me: PCA tries to find the first principal component by minimizing the reconstruction (projection) error, which simultaneously maximizes the variance of the projected data. When I first read that, I immediately thought of something like linear regression; maybe you can solve it using gradient descent if needed. However, then my mind was blown when I read that the optimization problem is solved by using linear algebra and finding eigenvectors and eigenvalues. I simply do not understand how this use of linear algebra comes into play. So my question is: How can PCA turn from a geometric optimization problem to a linear algebra problem? Can someone provide an intuitive explanation? I am not looking for an answer like that says "When you solve the mathematical problem of PCA, it ends up being equivalent to finding the eigenvalues and eigenvectors of the covariance matrix." Please explain why eigenvectors come out to be the principal components and why the eigenvalues come out to be variance of the data projected onto them I am a software engineer and not a mathematician, by the way. Note: the figure above was taken and modified from .
eng_Latn
1,089
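A small simulation of the rule of thumb discussed in the record above, assuming statsmodels is available (the coefficients and sample size are arbitrary): an AR(1) should show a geometrically decaying ACF and a PACF that cuts off after lag 1, while an MA(1) shows the reverse pattern.

import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.stattools import acf, pacf

np.random.seed(0)
n = 5000

ar1 = ArmaProcess(ar=[1, -0.7], ma=[1]).generate_sample(n)   # AR(1) with phi = 0.7
ma1 = ArmaProcess(ar=[1], ma=[1, 0.7]).generate_sample(n)    # MA(1) with theta = 0.7

print(np.round(acf(ar1, nlags=4), 2), np.round(pacf(ar1, nlags=4), 2))   # ACF decays, PACF cuts off
print(np.round(acf(ma1, nlags=4), 2), np.round(pacf(ma1, nlags=4), 2))   # ACF cuts off, PACF decays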
How to choose the best logit model I have two logit regression models with different AIC values. I'm using R. My first model has significant variables and an AIC of 192.7436, and my second model has 1 non-significant variable but a smaller AIC of 192.4468. Which model is best?
Should I remove non-significant variables from my regression model? I have run a multiple linear regression using stepwise regression to select the best model; however, the best model returned has a non-significant variable. When I remove this variable, the AIC value goes up, indicating the model without the non-significant variable is a worse fit. Should I remove the non-significant predictor, or should I leave it in as it gives a better model?
What's the relationship between initial eigenvalues and sums of squared loadings in factor analysis? On the one hand I read in a comment that: You can't speak of "eigenvalues" after rotation, even orthogonal rotation. Perhaps you mean sum of squared loadings for a principal component, after rotation. When rotation is oblique, this sum of squares tells nothing about the amount of variance explained, because components aren't orthogonal anymore. So, you shouldn't report any percentage of variance explained. On the other hand, I sometimes read in books people saying things like: The eigenvalues associated with each factor represent the variance explained by that particular factor; SPSS also displays the eigenvalue in terms of the percentage of variance explained (so factor 1 explains 31.696% of total variance). The first few factors explain relatively large amounts of variance (especially factor 1), whereas subsequent factors explain only small amounts of variance. SPSS then extracts all factors with eigenvalues greater than 1, which leaves us with four factors. The eigenvalues associated with these factors are again displayed (and the percentage of variance explained) in the columns labelled Extraction Sums of Squared Loadings. That text is from Field (2013) Discovering statistics using IBM SPSS, and this diagram accompanies it. I'm wondering Who is correct about whether it's possible to speak of eigenvalues after rotation? Would it matter if it was an oblique or orthogonal rotation? Why are the "initial eigenvalues" different from the "extraction sums of squared loadings"? Which is a better measure of total variation explained by the factors (or principal components or whatever method is used)? Should I say that the first four factors explain 50.317% of variation, or 40.477%?
eng_Latn
1,090
How to apply a border radius after drawing a rectangle? Photoshop doesn't support applying a border radius after a rectangle has been drawn. I found a suggested workaround and tried it myself, but it doesn't seem to work. Is there a step I missed, or is there any other way to apply a border radius after drawing a rectangle?
Photoshop CS6 Resize Rectangle with Rounded Corners This question is about Photoshop CS6. Hopefully the feature was added in this version. Before posting I looked on Google and through earlier questions, but didn't find an answer for CS6. So the question is: is there a feature within CS6, or a script, that allows resizing a vector element with rounded corners without using the Direct Selection Tool (white arrow tool) and without changing the border radius? What would be great would be something like simply using the transform tool or something like a grid resize tool.
Understanding Lasso Regression's sparsity geometrically Whenever someone writes about Lasso and Ridge Regression thy draw this diagram with the circle or with the diamond. In the case of the diamond (Lasso regression) it is then always stated that Lasso forces one of the coefficients to 0. Therefor it introduces sparsity. I understand it somehow, but whenever I see the diagram my doubts return. Why couldn't one just draw it like this: Obviously none of the coefficients is forced to zero in this case. Both can take number between -1 and 1. What am I missing? My drawing has to be wrong, but I don't get it why they always draw so that it hits $\beta_1=0$ Edit: Just found this quote: However, the lasso constraint has corners at each of the axes and so the ellipse will often intersect the constraint region at an axis Is that it? It will intersect often with the constraint region, but it doesn't have to? Can't wrap my head around it. I can only imagine that in higher dimensional cases hitting a corner becomes more likely or even inevitable.
eng_Latn
1,091
Statistics in properties different from statistics in attribute table (Arc) I have a generic signed integer raster downloaded from the ORNL DAAC. It is a single-band raster. I looked at the statistics in the properties of the raster, i.e. min, max, mean, and std dev. I then calculated the statistics of the value column in the attribute table (by right-clicking on the value header and choosing Statistics). Why do the two differ? I have tried this with other rasters as well and do not know the reason.
Explaining different Standard Deviation results from same data in ArcGIS Desktop and MS Excel? I have done some interpolation in Geostatistical Analyst in ArcGIS and I got the Standard Deviation (SD) a bit different from the SD that I calculated in Excel Spreadsheet. What algorithm does each package use? Why are the results different?
How can top principal components retain the predictive power on a dependent variable (or even lead to better predictions)? Suppose I am running a regression $Y \sim X$. Why, by selecting the top $k$ principal components of $X$, does the model retain its predictive power on $Y$? I understand that from a dimensionality-reduction/feature-selection point of view, if $v_1, v_2, ... v_k$ are the eigenvectors of the covariance matrix of $X$ with the top $k$ eigenvalues, then $Xv_1, Xv_2 ... Xv_k$ are the top $k$ principal components with maximum variances. We can thereby reduce the number of features to $k$ and retain most of the predictive power, as I understand it. But why do the top $k$ components retain the predictive power on $Y$? If we talk about a general OLS $Y \sim Z$, there is no reason to suggest that if feature $Z_i$ has maximum variance, then $Z_i$ has the most predictive power on $Y$. Update after seeing comments: I guess I have seen tons of examples of using PCA for dimensionality reduction. I have been assuming that means the dimensions we are left with have the most predictive power. Otherwise what's the point of dimensionality reduction?
eng_Latn
1,092
SVD for PCA: Why would one standardize the data matrix? As explained in amoeba's beautiful answer one can use a singular value decomposition of the data matrix, $\mathbf{X} = \mathbf{USV}^\top$, to do a principal component analysis, if it is assumed that the data matrix $\mathbf{X}$ is centered. The principal directions will then be given as the columns of $\mathbf{V}$. Now, I have come across some statistical analysis where the data matrix was centered and standardized for the data to have unit variance before being used in a singular value decomposition. The analysis then proceeded in using the columns of $\mathbf{V}$ as principal directions. Can any of you think of a reason to do this standardization step before running the singular value decomposition? Is the result of such a procedure in terms of what PCA should tell you even meaningful?
PCA on correlation or covariance? What are the main differences between performing principal component analysis (PCA) on the correlation matrix and on the covariance matrix? Do they give the same results?
What does $b_i\mid b_{i+1}$ mean in this context? In the computational topology literature, the reduction algorithm for computing the Smith normal form of a boundary matrix uses the notation $b_j > 1 \: \text{ and }\: b_j\mid b_{j+1}$ in the context of the diagonal elements of the Smith matrix. Can anyone give me an idea of what it means?
eng_Latn
1,093
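A sketch of the comparison behind the record above: SVD of a merely centered matrix is PCA on the covariance matrix, while centering plus scaling to unit variance gives PCA on the correlation matrix, and the leading directions generally differ when the variables have very different units (NumPy assumed; the mixing matrix and scale factor are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
Z = rng.standard_normal((500, 3))
X = Z @ np.array([[1.0, 0.6, 0.3],
                  [0.0, 1.0, 0.5],
                  [0.0, 0.0, 1.0]])   # correlated columns
X[:, 0] *= 100.0                      # give one variable a much larger scale

Xc = X - X.mean(axis=0)               # centered only        -> covariance PCA
Xs = Xc / Xc.std(axis=0, ddof=1)      # centered and scaled  -> correlation PCA

_, _, Vt_cov = np.linalg.svd(Xc, full_matrices=False)
_, _, Vt_cor = np.linalg.svd(Xs, full_matrices=False)

print(np.round(Vt_cov[0], 3))   # dominated by the rescaled first column
print(np.round(Vt_cor[0], 3))   # reflects the correlation structure, not the units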
Vandermonde Determinant with one column replaced How to calculate the A(n) Vandermonde matrix determinant if the column with powers n-1 is replaced with powers n?
Value of a Vandermonde-type determinant Let $x_1,...,x_n$ be distinct real numbers. Is there a formula for the Vandermonde-type determinant $V(x_1, \cdots,x_n)$ whose last column is $x_1^k,\ \cdots,\ x_n^k$, where $k \geq n$, instead of $x_1^{n-1},\ \cdots,\ x_n^{n-1}$? Thanks
What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? I've read a lot about PCA, including various tutorials and questions (such as , , , and ). The geometric problem that PCA is trying to optimize is clear to me: PCA tries to find the first principal component by minimizing the reconstruction (projection) error, which simultaneously maximizes the variance of the projected data. When I first read that, I immediately thought of something like linear regression; maybe you can solve it using gradient descent if needed. However, then my mind was blown when I read that the optimization problem is solved by using linear algebra and finding eigenvectors and eigenvalues. I simply do not understand how this use of linear algebra comes into play. So my question is: How can PCA turn from a geometric optimization problem to a linear algebra problem? Can someone provide an intuitive explanation? I am not looking for an answer like that says "When you solve the mathematical problem of PCA, it ends up being equivalent to finding the eigenvalues and eigenvectors of the covariance matrix." Please explain why eigenvectors come out to be the principal components and why the eigenvalues come out to be variance of the data projected onto them I am a software engineer and not a mathematician, by the way. Note: the figure above was taken and modified from .
eng_Latn
1,094
Sparse (Weighted) Adjacency Matrix & SparseArray: encoding multiple weights at a given edge
Why won't SparseArray let me store values with the head List?
The rank of Jacobian matrix at a point of affine variety is independent of choice of generators
eng_Latn
1,095
Reversed indices How do I type 1A in LaTeX as in the snippet below?
Left and right subscript / superscript I am trying to put two subscripts at the left and right of a character. For example, something like: _{t} p_{x} where p is in the middle. How do you do this?
Inverse of a diagonal matrix plus a constant I am looking for an efficient solution for inverting a matrix of the following form: $$D+aP$$ where $D$ is a (full-rank) diagonal matrix, $a$ is a constant, and $P$ is an all-ones matrix. gives a solution to the special case where all diagonal entries of $D$ are the same. The formula is also capable of providing a solution to this problem by setting appropriate $u$ and $v$, but it loses efficiency ($O(n^3)$ time complexity to compute) at high dimension. I am hoping to get a result in the same form so the space and time complexity are both $O(n)$.
eng_Latn
1,096
Dimension reduction for discrete qualitative and aggregated variables I know about PCA for multiple dimensions of continuous features, but here is a problem I have some trouble finding a method for. I don't have a list of individual countries but rather a discrete classification of countries, and for each line (which I'll call a segment) I have two output variables. Each line can represent one or several countries, as long as they fit all the classifications of this particular line. In case a line represents several countries, the output variables are total values (in the case of a number like total population) or weighted averages (in the case of a percentage like employment rate). So my questions are: Does this particular way of organising data have a name? How can I isolate the most influential dimensions for, say, employment rate? (aka discriminant analysis) How can I perform something similar to PCA with this discrete classification instead of continuous values as dimension values?
Categorical Principal Component Analysis - using Count, Continuous, Ordinal variables together I have some variables and I want to reduce their number for further analysis. I initially thought of combining them using factor analysis. But since the variables are of all kinds (rating, count, ordinal, continuous dollar amount) I am thinking about using CATPCA for it. However, I have some questions about the technique - Is CATPCA the right approach or Latent Class Factor Analysis would be the better one? By using the different kinds of variables together - how will the object scores be interpreted? For example - The rating question is on a 10 point scale but the dollar amount ranges from 0 to 1000s, so what will the object score obtained after CATPCA represent? And is it comparable to the Factor Score obtained from traditional PCA? For ordinal variables - Do all of them need to be in the same form, e.g. 1 meaning lowest and 10 meaning highest? Or can they be used in whatever form they are? Is it good to categorize the count and continuous variables based on their distribution and then treat then as ordinal/categorical variables? I would highly appreciate any help you can provide!
Nielsen & Chuang Exercise 2.2 - “Matrix representations: example” Reproduced from Exercise 2.2 of Nielsen &amp; Chuang's Quantum Computation and Quantum Information (10th Anniversary Edition): Suppose $V$ is a vector space with basis vectors $|0\rangle$ and $|1\rangle$, and $A$ is a linear operator from $V$ to $V$ such that $A|0\rangle = |1\rangle$ and $A|1\rangle = |0\rangle$. Give a matrix representation for $A$, with respect to the input basis $|0\rangle, |1\rangle$, and the output basis $|0\rangle, |1\rangle$. Find input and output bases which give rise to a different matrix representation of $A$. Note: This question is part of a series attempting to provide worked solutions to the exercises provided in the above book.
eng_Latn
1,097
Can someone explain the simple intuition behind principal component 1, 2, ... etc. in PCA?
Making sense of principal component analysis, eigenvectors & eigenvalues
Using principal component analysis (PCA) for feature selection
eng_Latn
1,098
Principal component analysis (PCA) vs. method of principal components for factor analysis (FA) I have just read as follows: One of the biggest reasons for the confusion between the two [principal component analysis (PCA) and factor analysis (FA)] has to do with the fact that one of the factor extraction methods in Factor Analysis is called "method of principal components". However, it's one thing to use PCA and another thing to use the method of principal components in FA. The names may be similar, but there are significant differences. The former is an independent analytical method while the latter is merely a tool for factor extraction" Can anybody explain a little more about this, especially about PCA and method of principal components in FA?
Best factor extraction methods in factor analysis SPSS offers several methods of factor extraction: Principal components (which isn't factor analysis at all) Unweighted least squares Generalized least squares Maximum Likelihood Principal Axis Alpha factoring Image factoring Ignoring the first method, which isn't factor analysis (but principal component analysis, PCA), which of these methods is "the best"? What are the relative advantages of the different methods? And basically, how would I choose which one to use? Additional question: should one obtain similar results from all 6 methods?
Determining the best curve-fitting function out of linear, exponential, and logarithmic functions Context: From a question on Mathematics Stack Exchange, someone has a set of $x-y$ points and wants to fit a curve to it: linear, exponential or logarithmic. The usual method is to start by choosing one of these (which specifies the model), and then do the statistical calculations. But what is really wanted is to find the 'best' curve out of linear, exponential or logarithmic. Ostensibly, one could try all three, and choose the best fitted curve of the three according to the best correlation coefficient. But somehow I'm feeling this is not quite kosher. The generally accepted method is to pick your model first, one of those three (or some other link function), then from the data calculate the coefficients. And post facto picking the best of all is cherry picking. But to me, whether you're determining a function or coefficients from the data, it is still the same thing: your procedure is discovering the best...thing (let's say that which function is -also- another coefficient to be discovered). Questions: Is it appropriate to choose the best fitting model out of linear, exponential, and logarithmic models, based on a comparison of fit statistics? If so, what is the most appropriate way to do this? If regression helps find parameters (coefficients) in a function, why can't there be a discrete parameter to choose which of three curve families the best would come from?
eng_Latn
1,099