Some Formal Basics (skip if you just want code examples)

To set the context, we briefly describe statistical hypothesis testing. Informally, one defines a hypothesis on a certain domain and then uses a statistical test to check whether this hypothesis is true. Formally, the goal is to reject a so-called null hypothesis $H_0$, which is the complement of an alternative hypothesis $H_A$. To distinguish the hypotheses, a test statistic is computed on sample data. Since sample data is finite, this corresponds to sampling the true distribution of the test statistic. There are two different distributions of the test statistic -- one for each hypothesis. The null distribution corresponds to test statistic samples under the model that $H_0$ holds; the alternative distribution corresponds to test statistic samples under the model that $H_A$ holds. In practice, one computes the quantile of the observed test statistic in the null distribution. If the test statistic lies in a high quantile, i.e. it is unlikely that the null distribution has generated it, the null hypothesis $H_0$ is rejected.

There are two different kinds of errors in hypothesis testing: a type I error is made when $H_0: p=q$ is wrongly rejected, i.e. the test says that the samples are from different distributions when they are not; a type II error is made when $H_0$ is wrongly accepted although $H_A: p\neq q$ holds, i.e. the test says that the samples are from the same distribution when they are not. A so-called consistent test achieves zero type II error for a fixed type I error in the limit of infinitely many samples.

To decide whether to reject $H_0$, one can set a threshold, say at the $95\%$ quantile of the null distribution, and reject $H_0$ when the test statistic lies above that threshold. Under $H_0$, the chance of this happening is only $5\%$. This number is the significance level $\alpha$ (in this case $\alpha=0.05$); it is an upper bound on the probability of a type I error. An alternative way is to compute the p-value of the test statistic, i.e. the probability of observing a statistic at least as extreme under the null distribution, and to compare the p-value against a desired significance level, say $\alpha=0.05$, by hand. The advantage of the second method is that one not only gets a binary answer, but also a measure of how strongly $H_0$ is rejected.

In order to construct a two-sample test, the null distribution of the test statistic has to be approximated. One way of doing this that works for any two-sample test is bootstrapping, also known as the permutation test: samples from both sources are mixed and permuted repeatedly, and the test statistic is computed for each of those configurations. While this method works for every statistical hypothesis test, it might be very costly because the test statistic has to be re-computed many times. For many test statistics, there are more sophisticated methods of approximating the null distribution.

Base class for Hypothesis Testing

Shogun implements statistical testing in the abstract class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CHypothesisTest.html">CHypothesisTest</a>. All implemented methods work with this interface at their most basic level. This class offers methods to compute the implemented test statistic, compute p-values for a given value of the test statistic, compute a test threshold for a given significance level, sample the null distribution (i.e. perform the permutation test or bootstrapping of the null distribution), and perform a full two-sample test that returns either a p-value or a binary rejection decision. The last method is the most useful in practice. Note that, depending on the test statistic used, it might be faster to call this than to compute threshold and test statistic separately with the above methods. There are special subclasses for testing two distributions against each other (<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CTwoSampleTest.html">CTwoSampleTest</a>, <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CIndependenceTest.html">CIndependenceTest</a>), kernel two-sample testing (<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CKernelTwoSampleTest.html">CKernelTwoSampleTest</a>), and kernel independence testing (<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CKernelIndependenceTest.html">CKernelIndependenceTest</a>), which however mostly differ in internals and constructors.

Kernel Two-Sample Testing with the Maximum Mean Discrepancy

$\DeclareMathOperator{\mmd}{MMD}$

An important class of hypothesis tests are the two-sample tests. In two-sample testing, one tries to find out whether two sets of samples come from different distributions. Given two probability distributions $p,q$ on some arbitrary domains $\mathcal{X}, \mathcal{Y}$ respectively, and i.i.d. samples $X=\{x_i\}_{i=1}^m\subseteq \mathcal{X}\sim p$ and $Y=\{y_i\}_{i=1}^n\subseteq \mathcal{Y}\sim q$, the two-sample test distinguishes the hypotheses

\begin{align} H_0: p=q\\ H_A: p\neq q \end{align}

In order to solve this problem, it is desirable to have a criterion that takes a unique positive value if $p\neq q$, and zero if and only if $p=q$. The so-called Maximum Mean Discrepancy (MMD) has this property and allows one to distinguish any two probability distributions when it is computed in a reproducing kernel Hilbert space (RKHS). It is the (squared) distance between the mean embeddings $\mu_p, \mu_q$ of the distributions $p,q$ in such an RKHS $\mathcal{F}$ -- which can also be expressed in terms of expectations of kernel functions, i.e.

\begin{align} \mmd[\mathcal{F},p,q]&=\|\mu_p-\mu_q\|_\mathcal{F}^2\\ &=\textbf{E}_{x,x'}\left[ k(x,x')\right]- 2\textbf{E}_{x,y}\left[ k(x,y)\right] +\textbf{E}_{y,y'}\left[ k(y,y')\right] \end{align}

Note that this formulation does not assume any particular form of the input data; we just need a kernel function whose feature space is an RKHS, see [2, Section 2] for details. As a consequence, in Shogun we can run tests on any type of data (<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CDenseFeatures.html">CDenseFeatures</a>, <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CSparseFeatures.html">CSparseFeatures</a>, <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CStringFeatures.html">CStringFeatures</a>, etc.), as long as we or you provide a positive definite kernel function under the interface of <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CKernel.html">CKernel</a>.

We here only describe how to use the MMD for two-sample testing. Shogun offers two types of test statistic based on the MMD: one with quadratic costs both in time and space, and one with linear time and constant space costs. Both come in different versions and with different methods for approximating the null distribution in order to construct a two-sample test.

Running Example Data: Gaussian vs. Laplace

In order to illustrate kernel two-sample testing with Shogun, we use a couple of toy distributions. The first dataset we consider is the 1D standard Gaussian $p(x)=\frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$ with mean $\mu$ and variance $\sigma^2$, which is compared against the 1D Laplace distribution $q(x)=\frac{1}{2b}\exp\left(-\frac{|x-\mu|}{b}\right)$ with the same mean $\mu$ and variance $2b^2$. In order to increase the difficulty, we set $b=\sqrt{\frac{1}{2}}$, so that $2b^2=\sigma^2=1$ and both distributions agree in mean and variance.
# use scipy for generating samples
from scipy.stats import norm, laplace

def sample_gaussian_vs_laplace(n=220, mu=0.0, sigma2=1, b=sqrt(0.5)):
    # sample from both distributions
    # note: scipy's "scale" is the standard deviation; since sigma2=1 here, passing sigma2 is equivalent
    X=norm.rvs(size=n, loc=mu, scale=sigma2)
    Y=laplace.rvs(size=n, loc=mu, scale=b)
    return X,Y

mu=0.0
sigma2=1
b=sqrt(0.5)
n=220
X,Y=sample_gaussian_vs_laplace(n, mu, sigma2, b)

# plot both densities and histograms
figure(figsize=(18,5))
suptitle("Gaussian vs. Laplace")
subplot(121)
Xs=linspace(-2, 2, 500)
plot(Xs, norm.pdf(Xs, loc=mu, scale=sigma2))
plot(Xs, laplace.pdf(Xs, loc=mu, scale=b))
title("Densities")
xlabel("$x$")
ylabel("$p(x)$")
_=legend(['Gaussian', 'Laplace'])

subplot(122)
hist(X, alpha=0.5)
xlim([-5,5])
ylim([0,100])
hist(Y, alpha=0.5)
xlim([-5,5])
ylim([0,100])
legend(["Gaussian", "Laplace"])
_=title('Histograms')
Now, how can we compare these two sets of samples? Clearly, a t-test would be a bad idea, since it essentially compares the means (and variances) of $X$ and $Y$, which we constructed to be equal. By chance, the sample estimates of these statistics will differ slightly, but not significantly. We therefore have to look at higher-order statistics of the samples. In fact, kernel two-sample tests look at all (infinitely many) higher-order moments.
print "Gaussian vs. Laplace" print "Sample means: %.2f vs %.2f" % (mean(X), mean(Y)) print "Samples variances: %.2f vs %.2f" % (var(X), var(Y))
Quadratic Time MMD

We now describe the quadratic time MMD, as given in [1, Lemma 6], which is implemented in Shogun. All methods in this section are implemented in <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CQuadraticTimeMMD.html">CQuadraticTimeMMD</a>, which accepts any type of features in Shogun; we use it on the above toy problem.

An unbiased estimate of the MMD expression above can be obtained by replacing the expectations with averages over the independent samples:
$$ \mmd_u[\mathcal{F},X,Y]^2=\frac{1}{m(m-1)}\sum_{i=1}^m\sum_{j\neq i}^m k(x_i,x_j) + \frac{1}{n(n-1)}\sum_{i=1}^n\sum_{j\neq i}^n k(y_i,y_j)-\frac{2}{mn}\sum_{i=1}^m\sum_{j=1}^n k(x_i,y_j) $$
A biased estimate is
$$ \mmd_b[\mathcal{F},X,Y]^2=\frac{1}{m^2}\sum_{i=1}^m\sum_{j=1}^m k(x_i,x_j) + \frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n k(y_i,y_j)-\frac{2}{mn}\sum_{i=1}^m\sum_{j=1}^n k(x_i,y_j). $$
Computing the test statistic using <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CQuadraticTimeMMD.html">CQuadraticTimeMMD</a> does exactly this, and it is possible to choose between the two expressions above. Note that some methods for approximating the null distribution only work with one of the two types. Both statistics' computational costs are quadratic in both time and space. Also note that the method returns $m\mmd_b[\mathcal{F},X,Y]^2$, since the null-distribution approximations work on $m$ times the squared MMD. Here is how the test statistic itself is computed.
# turn data into Shogun representation (column vectors)
feat_p=RealFeatures(X.reshape(1,len(X)))
feat_q=RealFeatures(Y.reshape(1,len(Y)))

# choose kernel for testing. Here: Gaussian
kernel_width=1
kernel=GaussianKernel(10, kernel_width)

# create mmd instance of test-statistic
mmd=QuadraticTimeMMD(kernel, feat_p, feat_q)

# compute biased and unbiased test statistic (default is unbiased)
mmd.set_statistic_type(BIASED)
biased_statistic=mmd.compute_statistic()

mmd.set_statistic_type(UNBIASED)
unbiased_statistic=mmd.compute_statistic()

print "%d x MMD_b[X,Y]^2=%.2f" % (len(X), biased_statistic)
print "%d x MMD_u[X,Y]^2=%.2f" % (len(X), unbiased_statistic)
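For intuition, here is a small NumPy sketch (not Shogun code) of the two estimators above, computed directly from the kernel matrices. It uses a hand-rolled 1D Gaussian kernel with bandwidth sigma; note that Shogun's GaussianKernel uses a different width convention, so the numbers will not match the cell above exactly.

import numpy as np

def gauss_kernel(A, B, sigma=1.0):
    # pairwise Gaussian kernel between 1D samples A and B
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2_estimates(X, Y, sigma=1.0):
    m, n = len(X), len(Y)
    Kxx = gauss_kernel(X, X, sigma)
    Kyy = gauss_kernel(Y, Y, sigma)
    Kxy = gauss_kernel(X, Y, sigma)
    # unbiased: exclude the diagonal terms of the within-sample sums
    mmd_u = ((Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
             + (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
             - 2 * Kxy.mean())
    # biased: keep the diagonals
    mmd_b = Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()
    return mmd_u, mmd_b

Multiplying the biased estimate by m gives the quantity that the null-distribution approximations below work with.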
Any sub-class of <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CHypothesisTest.html">CHypothesisTest</a> can approximate the null distribution using permutation/bootstrapping. This approach is guaranteed to produce consistent results; however, it might take a long time, because for each sample of the null distribution the test statistic has to be re-computed on a different permutation of the data. Note that each of the calls below samples from the null distribution, so in practice it is wise to choose just one of these methods. Also note that we set the number of null samples to a low value to reduce runtime; choose a larger value in practice -- it is in fact a good idea to plot the null samples.
# this is not necessary as bootstrapping is the default
mmd.set_null_approximation_method(PERMUTATION)
mmd.set_statistic_type(UNBIASED)

# to reduce runtime; should be larger in practice
mmd.set_num_null_samples(100)

# now show a couple of ways to compute the test

# compute p-value for computed test statistic
p_value=mmd.compute_p_value(unbiased_statistic)
print "P-value of MMD value %.2f is %.2f" % (unbiased_statistic, p_value)

# compute threshold for rejecting H_0 at a given significance level
alpha=0.05
threshold=mmd.compute_threshold(alpha)
print "Threshold for rejecting H0 at significance level %.2f is %.2f" % (alpha, threshold)

# performing the test by hand given the above results; note that those two are equivalent
if unbiased_statistic>threshold:
    print "H0 is rejected with confidence %.2f" % alpha

if p_value<alpha:
    print "H0 is rejected with confidence %.2f" % alpha

# or, compute the full two-sample test directly
# fixed significance level, binary decision
binary_test_result=mmd.perform_test(alpha)
if binary_test_result:
    print "H0 is rejected with confidence %.2f" % alpha

significance_test_result=mmd.perform_test()
print "P-value of MMD test is %.2f" % significance_test_result
if significance_test_result<alpha:
    print "H0 is rejected with confidence %.2f" % alpha
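To see what permutation-based null sampling does conceptually, here is a minimal NumPy sketch that is independent of Shogun: it merges the two samples, permutes them repeatedly, and re-computes a statistic on each split. It assumes the illustrative helper mmd2_estimates from the earlier sketch.

import numpy as np

def permutation_p_value(X, Y, statistic, num_permutations=100, rng=np.random):
    observed = statistic(X, Y)
    Z = np.concatenate([X, Y])
    null_samples = np.empty(num_permutations)
    for i in range(num_permutations):
        perm = rng.permutation(len(Z))          # mix and permute both samples
        Xp, Yp = Z[perm[:len(X)]], Z[perm[len(X):]]
        null_samples[i] = statistic(Xp, Yp)     # re-compute the statistic under H0
    # p-value: fraction of null samples at least as large as the observed statistic
    return np.mean(null_samples >= observed)

# e.g. permutation_p_value(X, Y, lambda a, b: mmd2_estimates(a, b)[0])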
Precomputing Kernel Matrices

Bootstrapping re-computes the test statistic for many permutations of the test data. For kernel two-sample test methods, in particular those of the MMD class, this means that only the joint kernel matrix of $X$ and $Y$ needs to be permuted. Thus, we can precompute the matrix, which gives a significant performance boost. Note that this is only possible if the matrix fits in memory. Below, we use Shogun's <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CCustomKernel.html">CCustomKernel</a> class, which allows us to precompute the kernel matrix of a given kernel (multithreaded) and store it in memory. Instances of this class can then be used as if they were standard kernels.
# precompute kernel to be faster for null sampling
p_and_q=mmd.get_p_and_q()
kernel.init(p_and_q, p_and_q)
precomputed_kernel=CustomKernel(kernel)
mmd.set_kernel(precomputed_kernel)

# increase number of iterations since it should be faster now
mmd.set_num_null_samples(500)
p_value_boot=mmd.perform_test()
print "P-value of MMD test is %.2f" % p_value_boot
Now let us visualise the distribution of the MMD statistic under $H_0:p=q$ and $H_A:p\neq q$. For that, we sample both the null and the alternative distribution. We use the interface of <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CTwoSampleTest.html">CTwoSampleTest</a> to sample from the null distribution (permutations and re-computation of the test statistic are done internally). For the alternative distribution, we compute the test statistic on freshly drawn sets $X$ and $Y$ in a loop. Note that the latter is expensive, as the kernel cannot be precomputed and new data is needed in every iteration; it is not needed in practice, but only for illustration purposes here.
num_samples=500

# sample null distribution
mmd.set_num_null_samples(num_samples)
null_samples=mmd.sample_null()

# sample alternative distribution, generate new data for that
alt_samples=zeros(num_samples)
for i in range(num_samples):
    X=norm.rvs(size=n, loc=mu, scale=sigma2)
    Y=laplace.rvs(size=n, loc=mu, scale=b)
    feat_p=RealFeatures(reshape(X, (1,len(X))))
    feat_q=RealFeatures(reshape(Y, (1,len(Y))))
    mmd=QuadraticTimeMMD(kernel, feat_p, feat_q)
    alt_samples[i]=mmd.compute_statistic()
Null and Alternative Distribution Illustrated

We visualise both distributions; $H_0:p=q$ is rejected if a sample from the alternative distribution is larger than the $(1-\alpha)$-quantile of the null distribution. See [1] for more details on their forms. From the visualisations, we can read off the test's type I and type II errors: the type I error is the area of the null distribution to the right of the threshold, and the type II error is the area of the alternative distribution to the left of the threshold.
def plot_alt_vs_null(alt_samples, null_samples, alpha):
    figure(figsize=(18,5))

    subplot(131)
    hist(null_samples, 50, color='blue')
    title('Null distribution')

    subplot(132)
    title('Alternative distribution')
    hist(alt_samples, 50, color='green')

    subplot(133)
    hist(null_samples, 50, color='blue')
    hist(alt_samples, 50, color='green', alpha=0.5)
    title('Null and alternative distribution')

    # find (1-alpha) element of null distribution
    null_samples_sorted=sort(null_samples)
    quantile_idx=int(len(null_samples)*(1-alpha))
    quantile=null_samples_sorted[quantile_idx]
    axvline(x=quantile, ymin=0, ymax=100, color='red',
            label=str(int(round((1-alpha)*100))) + '% quantile of null')
    _=legend()

plot_alt_vs_null(alt_samples, null_samples, alpha)
Different Ways to Approximate the Null Distribution for the Quadratic Time MMD

As already mentioned, bootstrapping the null distribution is expensive. There exist more sophisticated methods that either allow very fast approximations without guarantees, or reasonably fast approximations that are consistent. We present a selection from [2] that is implemented in Shogun.

The first is a spectral method based on the Eigenspectrum of the kernel matrix of the joint samples. It is faster than bootstrapping while still being a consistent test. Effectively, the null distribution of the biased statistic is sampled, but in a more efficient way than with the bootstrapping approach. The statistic converges as
$$ m\mmd^2_b \rightarrow \sum_{l=1}^\infty \lambda_l z_l^2 $$
where $z_l\sim \mathcal{N}(0,2)$ are i.i.d. normal samples and $\lambda_l$ are the Eigenvalues of expression 2 in [2], which can be empirically estimated by $\hat\lambda_l=\frac{1}{m}\nu_l$, where $\nu_l$ are the Eigenvalues of the centred kernel matrix of the joint samples $X$ and $Y$. The distribution above can easily be sampled. Shogun's implementation has two parameters: the number of samples from the null distribution (the more, the more accurate; as a rule of thumb, use 250), and the number of Eigenvalues of the Eigen-decomposition of the kernel matrix to use (the more, the better the results; however, the Eigenspectrum of the joint Gram matrix usually decays very fast, and plotting the spectrum can help in choosing this number -- see [2] for details). If the kernel matrices are diagonally dominant, this method is likely to fail; for that and more details, see the original paper. Computational costs are much lower than bootstrapping, which is the only consistent alternative. Since the Eigenvalues of the Gram matrix have to be computed, costs are in $\mathcal{O}(m^3)$.

Below, we illustrate how to sample the null distribution and perform two-sample testing with the Spectrum approximation in the class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CQuadraticTimeMMD.html">CQuadraticTimeMMD</a>. This method only works with the biased statistic.
# optional: plot spectrum of joint kernel matrix
from numpy.linalg import eig

# get joint feature object and compute kernel matrix and its spectrum
feats_p_q=mmd.get_p_and_q()
mmd.get_kernel().init(feats_p_q, feats_p_q)
K=mmd.get_kernel().get_kernel_matrix()
w,_=eig(K)

# visualise K and its spectrum (only up to threshold)
figure(figsize=(18,5))
subplot(121)
imshow(K, interpolation="nearest")
title("Kernel matrix K of joint data $X$ and $Y$")
subplot(122)
thresh=0.1
plot(w[:len(w[w>thresh])])
_=title("Eigenspectrum of K until component %d" % len(w[w>thresh]))
The above plot of the Eigenspectrum shows that the Eigenvalues decay extremely fast. We choose the number of Eigenvalues for the approximation such that all Eigenvalues larger than some threshold are used. In this case, we do not lose much accuracy while gaining a significant speedup. For more slowly decaying Eigenspectra, this approximation might be more expensive.
# threshold for eigenspectrum
thresh=0.1

# compute number of eigenvalues to use
num_eigen=len(w[w>thresh])

# finally, do the test, use biased statistic
mmd.set_statistic_type(BIASED)

# tell Shogun to use spectrum approximation
mmd.set_null_approximation_method(MMD2_SPECTRUM)
mmd.set_num_eigenvalues_spectrum(num_eigen)
mmd.set_num_samples_spectrum(num_samples)

# the usual test interface
p_value_spectrum=mmd.perform_test()
print "Spectrum: P-value of MMD test is %.2f" % p_value_spectrum

# compare with ground truth bootstrapping
mmd.set_null_approximation_method(PERMUTATION)
mmd.set_num_null_samples(num_samples)
p_value_boot=mmd.perform_test()
print "Bootstrapping: P-value of MMD test is %.2f" % p_value_boot
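To make the spectrum approximation concrete, here is a small NumPy sketch of how the null distribution $\sum_l \hat\lambda_l z_l^2$ could be sampled from eigenvalues such as w above. It only illustrates the formula; Shogun's implementation differs in details (for example, it works on the centred kernel matrix), and the choice of m as the number of samples entering the statistic is an assumption here.

import numpy as np

def sample_spectrum_null(eigenvalues, m, num_null_samples=250, num_eigen=10):
    # empirical lambda_l estimates from the largest eigenvalues of the kernel matrix
    lambdas = np.sort(np.real(eigenvalues))[::-1][:num_eigen] / m
    # z_l ~ N(0, 2), i.i.d.; each row gives one null sample of m * MMD_b^2
    z = np.random.randn(num_null_samples, num_eigen) * np.sqrt(2)
    return (z ** 2).dot(lambdas)

# e.g. null = sample_spectrum_null(w, m=2*n, num_eigen=num_eigen)
# p-value: np.mean(null >= observed) for an observed value of m * MMD_b^2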
The Gamma Moment Matching Approximation and Type I errors

$\DeclareMathOperator{\var}{var}$

Another method for approximating the null distribution is to match the first two moments of a <a href="http://en.wikipedia.org/wiki/Gamma_distribution">Gamma distribution</a> and then compute quantiles of that. This does not result in a consistent test, but it usually gives good results while being very fast. However, there are distributions where the method fails; therefore, the type I error should always be monitored. The method is described in [2]. It uses
$$ m\mmd_b(Z) \sim \frac{x^{\alpha-1}\exp(-\frac{x}{\beta})}{\beta^\alpha \Gamma(\alpha)} $$
where
$$ \alpha=\frac{(\textbf{E}(\text{MMD}_b(Z)))^2}{\var(\text{MMD}_b(Z))} \qquad \text{and} \qquad \beta=\frac{m \var(\text{MMD}_b(Z))}{\textbf{E}(\text{MMD}_b(Z))} $$
Any threshold and p-value can then be computed using the Gamma distribution in the above expression. Computational costs are in $\mathcal{O}(m^2)$. Note that the test is parameter-free. It only works with the biased statistic.
# tell Shogun to use gamma approximation
mmd.set_null_approximation_method(MMD2_GAMMA)

# the usual test interface
p_value_gamma=mmd.perform_test()
print "Gamma: P-value of MMD test is %.2f" % p_value_gamma

# compare with ground truth bootstrapping
mmd.set_null_approximation_method(PERMUTATION)
p_value_boot=mmd.perform_test()
print "Bootstrapping: P-value of MMD test is %.2f" % p_value_boot
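The moment matching itself is easy to reproduce with scipy, assuming one already has estimates mean_mmd and var_mmd of $\textbf{E}[\text{MMD}_b]$ and $\var[\text{MMD}_b]$ (hypothetical inputs here; Shogun estimates these internally). This is only a sketch of the idea described above.

from scipy.stats import gamma

def gamma_p_value(m_times_mmd_b, mean_mmd, var_mmd, m):
    # match the first two moments of m*MMD_b with a Gamma(shape, scale) distribution
    shape = mean_mmd ** 2 / var_mmd
    scale = m * var_mmd / mean_mmd
    # p-value: upper tail probability of the observed statistic
    return gamma.sf(m_times_mmd_b, a=shape, scale=scale)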
As we can see, the above example was somewhat unfortunate: the approximation fails badly here. We check the type I error to verify this. This works similarly to sampling the alternative distribution: re-sample data (assuming infinite amounts), perform the test, and average the results. Below we compare the type I errors of all methods for approximating the null distribution. This will take a while.
# type I error is false alarm, therefore sample data under H0
num_trials=50
rejections_gamma=zeros(num_trials)
rejections_spectrum=zeros(num_trials)
rejections_bootstrap=zeros(num_trials)
num_samples=50
alpha=0.05

for i in range(num_trials):
    X=norm.rvs(size=n, loc=mu, scale=sigma2)
    Y=laplace.rvs(size=n, loc=mu, scale=b)

    # simulate H0 via merging samples before computing the test statistic
    Z=hstack((X,Y))
    X=Z[:len(X)]
    Y=Z[len(X):]
    feat_p=RealFeatures(reshape(X, (1,len(X))))
    feat_q=RealFeatures(reshape(Y, (1,len(Y))))

    # gamma
    mmd=QuadraticTimeMMD(kernel, feat_p, feat_q)
    mmd.set_null_approximation_method(MMD2_GAMMA)
    mmd.set_statistic_type(BIASED)
    rejections_gamma[i]=mmd.perform_test(alpha)

    # spectrum
    mmd=QuadraticTimeMMD(kernel, feat_p, feat_q)
    mmd.set_null_approximation_method(MMD2_SPECTRUM)
    mmd.set_num_eigenvalues_spectrum(num_eigen)
    mmd.set_num_samples_spectrum(num_samples)
    mmd.set_statistic_type(BIASED)
    rejections_spectrum[i]=mmd.perform_test(alpha)

    # bootstrap (precompute kernel)
    mmd=QuadraticTimeMMD(kernel, feat_p, feat_q)
    p_and_q=mmd.get_p_and_q()
    kernel.init(p_and_q, p_and_q)
    precomputed_kernel=CustomKernel(kernel)
    mmd.set_kernel(precomputed_kernel)
    mmd.set_null_approximation_method(PERMUTATION)
    mmd.set_num_null_samples(num_samples)
    mmd.set_statistic_type(BIASED)
    rejections_bootstrap[i]=mmd.perform_test(alpha)

convergence_gamma=cumsum(rejections_gamma)/(arange(num_trials)+1)
convergence_spectrum=cumsum(rejections_spectrum)/(arange(num_trials)+1)
convergence_bootstrap=cumsum(rejections_bootstrap)/(arange(num_trials)+1)

print "Average rejection rate of H0 for Gamma is %.2f" % mean(rejections_gamma)
print "Average rejection rate of H0 for Spectrum is %.2f" % mean(rejections_spectrum)
print "Average rejection rate of H0 for Bootstrapping is %.2f" % mean(rejections_bootstrap)
We see that Gamma basically never rejects, which is in line with the fact that the p-value was massively overestimated above. Note that for the other tests, the type I error is also not exactly at its desired value, but this is due to the low number of samples/repetitions in the above code; increasing them leads to consistent type I errors.

Linear Time MMD on Gaussian Blobs

So far, we basically had to precompute the kernel matrix for reasonable runtimes. This is not possible for more than a few thousand points. The linear time MMD statistic, implemented in <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLinearTimeMMD.html">CLinearTimeMMD</a>, can help here, as it accepts data under the streaming interface <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CStreamingFeatures.html">CStreamingFeatures</a>, which delivers data one-by-one. It can also do more advanced things, for example choose the best single (or combined) kernel for you. But we need a more interesting dataset to show its power. We will use one of Shogun's streaming-based data generators, <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianBlobsDataGenerator.html">CGaussianBlobsDataGenerator</a>, for that. This dataset consists of two distributions which are grids of Gaussians, where in one of them the Gaussians are stretched and rotated. This dataset is regarded as challenging for two-sample testing.
# parameters of dataset
m=20000
distance=10
stretch=5
num_blobs=3
angle=pi/4

# these are streaming features
gen_p=GaussianBlobsDataGenerator(num_blobs, distance, 1, 0)
gen_q=GaussianBlobsDataGenerator(num_blobs, distance, stretch, angle)

# stream some data and plot
num_plot=1000
features=gen_p.get_streamed_features(num_plot)
features=features.create_merged_copy(gen_q.get_streamed_features(num_plot))
data=features.get_feature_matrix()

figure(figsize=(18,5))
subplot(121)
grid(True)
plot(data[0][0:num_plot], data[1][0:num_plot], 'r.', label='$x$')
title('$X\sim p$')
subplot(122)
grid(True)
plot(data[0][num_plot+1:2*num_plot], data[1][num_plot+1:2*num_plot], 'b.', label='$x$', alpha=0.5)
_=title('$Y\sim q$')
We now describe the linear time MMD, as described in [1, Section 6], which is implemented in Shogun. A fast, unbiased estimate of the original MMD expression that still uses all available data can be obtained by dividing the data into two parts and then computing
$$ \mmd_l^2[\mathcal{F},X,Y]=\frac{1}{m_2}\sum_{i=1}^{m_2} \left[k(x_{2i},x_{2i+1})+k(y_{2i},y_{2i+1})-k(x_{2i},y_{2i+1})- k(x_{2i+1},y_{2i})\right] $$
where $m_2=\lfloor\frac{m}{2} \rfloor$. While the above expression assumes that $m$ data points are available from each distribution, the statistic in general works in an online setting where features are obtained one by one. Since only blocks of four points are considered at once, this allows the statistic to be computed on data streams. In addition, the computational costs are linear in the number of samples that are considered from each distribution. These two properties make the linear time MMD very applicable for large-scale two-sample tests: in theory, any number of samples can be processed -- time is the only limiting factor.

We begin by illustrating how to pass data to <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLinearTimeMMD.html">CLinearTimeMMD</a>. In order not to lose performance due to overhead, it is possible to specify a block size for the data stream.
block_size=100

# if features are already under the streaming interface, just pass them
mmd=LinearTimeMMD(kernel, gen_p, gen_q, m, block_size)

# compute an unbiased estimate in linear time
statistic=mmd.compute_statistic()
print "MMD_l[X,Y]^2=%.2f" % statistic

# note: due to the streaming nature, successive calls of compute_statistic use different data
# and produce different results. Data cannot be stored in memory
for _ in range(5):
    print "MMD_l[X,Y]^2=%.2f" % mmd.compute_statistic()
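For intuition, the linear time estimator can be written down directly for in-memory arrays. The following NumPy sketch mirrors the formula above; it is illustrative only and ignores streaming and blocking.

import numpy as np

def linear_time_mmd2(X, Y, kernel):
    # use an even number of points from each sample and form disjoint pairs
    m2 = min(len(X), len(Y)) // 2
    x1, x2 = X[0:2*m2:2], X[1:2*m2:2]
    y1, y2 = Y[0:2*m2:2], Y[1:2*m2:2]
    h = kernel(x1, x2) + kernel(y1, y2) - kernel(x1, y2) - kernel(x2, y1)
    return h.mean()

# e.g. with a 1D Gaussian kernel acting element-wise on paired points:
# rbf = lambda a, b: np.exp(-(a - b)**2 / 2.0)
# linear_time_mmd2(X, Y, rbf)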
Sometimes, one might want to use <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLinearTimeMMD.html">CLinearTimeMMD</a> with data that is stored in memory. In that case, it is easy to wrap in-memory data such as <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CDenseFeatures.html">CDenseFeatures</a> into <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CStreamingDenseFeatures.html">CStreamingDenseFeatures</a>, which then streams from the in-memory data.
# data source
gen_p=GaussianBlobsDataGenerator(num_blobs, distance, 1, 0)
gen_q=GaussianBlobsDataGenerator(num_blobs, distance, stretch, angle)

# retrieve some points, store them as non-streaming data in memory
data_p=gen_p.get_streamed_features(100)
data_q=gen_q.get_streamed_features(data_p.get_num_vectors())
print "Number of data is %d" % data_p.get_num_vectors()

# cast data in memory as streaming features again (which now stream from the in-memory data)
streaming_p=StreamingRealFeatures(data_p)
streaming_q=StreamingRealFeatures(data_q)

# it is important to start the internal parser to avoid deadlocks
streaming_p.start_parser()
streaming_q.start_parser()

# example to create mmd (note that m can be at most the number of data points in memory)
mmd=LinearTimeMMD(GaussianKernel(10,1), streaming_p, streaming_q, data_p.get_num_vectors(), 1)
print "Linear time MMD statistic: %.2f" % mmd.compute_statistic()
The Gaussian Approximation to the Null Distribution

As for any two-sample test in Shogun, bootstrapping can be used to approximate the null distribution. This results in a consistent, but slow test. The number of samples to take is the only parameter. Note that since <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLinearTimeMMD.html">CLinearTimeMMD</a> operates on streaming features, new data is taken from the stream in every iteration.

Bootstrapping is not really necessary here, since there exists a fast and consistent estimate of the null distribution. However, to ensure that any approximation is accurate, it should always be checked against bootstrapping at least once. Since both the null and the alternative distribution of the linear time MMD are Gaussian with equal variance (and different mean), it is possible to approximate the null distribution by using a linear time estimate of this variance. An unbiased, linear time estimator of
$$ \var[\mmd_l^2[\mathcal{F},X,Y]] $$
can simply be obtained by computing the empirical variance of
$$ k(x_{2i},x_{2i+1})+k(y_{2i},y_{2i+1})-k(x_{2i},y_{2i+1})-k(x_{2i+1},y_{2i}) \qquad (1\leq i\leq m_2) $$
A normal distribution with this variance and zero mean can then be used as an approximation of the null distribution. This results in a consistent test and is very fast. However, note that it is an approximation and its accuracy depends on the underlying data distributions. It is a good idea to compare to the bootstrapping approach first to determine an appropriate number of samples to use; this number is usually in the tens of thousands.

<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLinearTimeMMD.html">CLinearTimeMMD</a> allows approximating the null distribution in the same pass as computing the statistic itself (in linear time). This should always be used in practice, since separate calls for computing the statistic and the p-value will operate on different data from the stream. Below, we compute the test on a large amount of data (it would be impossible to perform the quadratic time MMD on this one, as the kernel matrices cannot be stored in memory).
mmd=LinearTimeMMD(kernel, gen_p, gen_q, m, block_size)
print "m=%d samples from p and q" % m
print "Binary test result is: " + ("Rejection" if mmd.perform_test(alpha) else "No rejection")
print "P-value test result is %.2f" % mmd.perform_test()
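The idea behind the Gaussian approximation can be sketched in a few lines: estimate the variance of the h-statistics from the formula above and compare the statistic against a zero-mean normal with that variance. This is an illustration of the idea only, not Shogun's implementation.

import numpy as np
from scipy.stats import norm as normal

def linear_time_mmd2_and_p_value(X, Y, kernel):
    m2 = min(len(X), len(Y)) // 2
    x1, x2 = X[0:2*m2:2], X[1:2*m2:2]
    y1, y2 = Y[0:2*m2:2], Y[1:2*m2:2]
    h = kernel(x1, x2) + kernel(y1, y2) - kernel(x1, y2) - kernel(x2, y1)
    statistic = h.mean()
    # MMD_l^2 is an average of m2 i.i.d. terms, so its null variance is var(h)/m2
    std = np.sqrt(h.var(ddof=1) / m2)
    p_value = normal.sf(statistic, loc=0, scale=std)
    return statistic, p_value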
Kernel Selection for the MMD -- Overview

$\DeclareMathOperator{\argmin}{arg\,min} \DeclareMathOperator{\argmax}{arg\,max}$

Which kernel do we actually use for our tests? So far, we just plugged in arbitrary ones. However, for kernel two-sample testing it is possible to do something more clever. Shogun's kernel selection methods for MMD-based two-sample tests are all based around [3, 4]. For the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLinearTimeMMD.html">CLinearTimeMMD</a>, [3] describes a way of selecting the optimal kernel in the sense that the test's type II error is minimised; for the linear time MMD, this is the method of choice. It is done by maximising the MMD statistic divided by its standard deviation, and it is possible for single kernels and also for convex combinations of them. For the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CQuadraticTimeMMD.html">CQuadraticTimeMMD</a>, the best method in the literature is choosing the kernel that maximises the MMD statistic [4]. For convex combinations of kernels, this can be achieved via an $L_2$ norm constraint. A detailed comparison of all methods on numerous datasets can be found in [5].

MMD kernel selection in Shogun always involves an implementation of the base class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelection.html">CMMDKernelSelection</a>, which defines the interface for kernel selection. If combinations of kernels should be considered, there is the sub-class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelectionComb.html">CMMDKernelSelectionComb</a>. In addition, it involves setting up a number of baseline kernels $\mathcal{K}$ to choose from/combine, in the form of a <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CCombinedKernel.html">CCombinedKernel</a>. All methods compute their results for a fixed set of these baseline kernels. We later give an example of how to use these classes, after providing a list of the available methods.

<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelectionMedian.html">CMMDKernelSelectionMedian</a> Selects from a set of <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianKernel.html">CGaussianKernel</a> instances the one whose width parameter is closest to the median of the pairwise distances in the data. The median is computed on a certain number of points from each distribution, which can be specified as a parameter. Since the median is a stable statistic, one does not have to compute all pairwise distances but rather just a few thousand. This method is a useful (and fast) heuristic that in many cases gives a good hint on where to start looking for Gaussian kernel widths. It is for example described in [1]. Note that it may fail badly in selecting a good kernel for certain problems.

<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelectionMax.html">CMMDKernelSelectionMax</a> Selects from a set of arbitrary baseline kernels the single one that maximises the used MMD statistic -- more specifically, its estimate:
$$ k^*=\argmax_{k\in\mathcal{K}} \hat \eta_k, $$
where $\hat\eta_k$ is an empirical MMD estimate for a kernel $k$. This was first described in [4] and was empirically shown to perform better than the median heuristic above. However, it remains a heuristic that comes with no guarantees. Since MMD estimates can be computed in linear and quadratic time, this method works for both statistics. However, for the linear time statistic there exists a better method.

<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelectionOpt.html">CMMDKernelSelectionOpt</a> Selects the optimal single kernel from a set of baseline kernels. This is done by maximising the ratio of the linear MMD statistic and its standard deviation:
$$ k^*=\argmax_{k\in\mathcal{K}} \frac{\hat \eta_k}{\hat\sigma_k+\lambda}, $$
where $\hat\eta_k$ is a linear time MMD estimate for a kernel $k$ and $\hat\sigma_k$ is a linear time estimate of its standard deviation, to which a small number $\lambda$ is added to prevent division by zero. These are estimated in a linear time way with the streaming framework that was described earlier. Therefore, this method is only available for <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLinearTimeMMD.html">CLinearTimeMMD</a>. Optimal here means that the resulting test's type II error is minimised for a fixed type I error. Important: for this method to work, the kernel needs to be selected on *different* data than the test is performed on. Otherwise, the method will produce wrong results.

<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelectionCombMaxL2.html">CMMDKernelSelectionCombMaxL2</a> Selects a convex combination of kernels that maximises the MMD statistic. This is the multiple-kernel analogue of <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelectionMax.html">CMMDKernelSelectionMax</a>. It is done by solving the convex program
$$ \boldsymbol{\beta}^*=\argmin_{\boldsymbol{\beta}} \{\boldsymbol{\beta}^T\boldsymbol{\beta} : \boldsymbol{\beta}^T\boldsymbol{\eta}=\mathbf{1}, \boldsymbol{\beta}\succeq 0\}, $$
where $\boldsymbol{\beta}$ is a vector of the resulting kernel weights and $\boldsymbol{\eta}$ is a vector whose components contain MMD estimates for the baseline kernels. See [3] for details. Note that this method is unable to select a single kernel -- even when that would be optimal. Again, when using the linear time MMD, there are better methods available.

<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelectionCombOpt.html">CMMDKernelSelectionCombOpt</a> Selects a convex combination of kernels that maximises the MMD statistic divided by its covariance. This corresponds to *optimal* kernel selection in the same sense as in class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelectionOpt.html">CMMDKernelSelectionOpt</a> and is its multiple-kernel analogue. The convex program to solve is
$$ \boldsymbol{\beta}^*=\argmin_{\boldsymbol{\beta}} \{\boldsymbol{\beta}^T(\hat Q+\lambda I)\boldsymbol{\beta} : \boldsymbol{\beta}^T\boldsymbol{\eta}=\mathbf{1}, \boldsymbol{\beta}\succeq 0\}, $$
where again $\boldsymbol{\beta}$ is a vector of the resulting kernel weights and $\boldsymbol{\eta}$ is a vector of MMD estimates for the baseline kernels. The matrix $\hat Q$ is a linear time estimate of the covariance matrix of the vector $\boldsymbol{\eta}$, to whose diagonal a small number $\lambda$ is added for regularisation. See [3] for details. In contrast to <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelectionCombMaxL2.html">CMMDKernelSelectionCombMaxL2</a>, this method is able to select a single kernel when that gives a lower type II error than a combination. In this sense, it subsumes <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelectionOpt.html">CMMDKernelSelectionOpt</a>.

MMD Kernel Selection in Shogun

In order to use one of the above methods for kernel selection, one has to create a new instance of <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CCombinedKernel.html">CCombinedKernel</a> and append all desired baseline kernels to it. This combined kernel is then passed to the MMD class. Next, an object of any of the above kernel selection methods is created, and the MMD instance is passed to it in the constructor. There are then two main methods to call: compute_measures computes a vector of kernel selection criteria if a single-kernel selection method is used, and a vector of selected kernel weights if a combined-kernel selection method is used (for CMMDKernelSelectionMedian, this method throws an error); select_kernel returns the kernel selected by the method -- for single kernels this will be one of the baseline kernel instances, and for the combined-kernel case it will be the underlying <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CCombinedKernel.html">CCombinedKernel</a> instance with the subkernel weights set to the weights selected by the method. In order to use the selected kernel, it has to be passed to an MMD instance.

We now give an example of how to select the optimal single and combined kernel for the Gaussian Blobs dataset. What is the best kernel to use here? This is tricky, since the distinguishing characteristics are hidden at a small length-scale. Create some kernels to select the best from.
sigmas=[2**x for x in linspace(-5,5, 10)]
print "Choosing kernel width from", ["{0:.2f}".format(sigma) for sigma in sigmas]
combined=CombinedKernel()
for i in range(len(sigmas)):
    combined.append_kernel(GaussianKernel(10, sigmas[i]))

# mmd instance using streaming features
block_size=1000
mmd=LinearTimeMMD(combined, gen_p, gen_q, m, block_size)

# optimal kernel choice is possible for linear time MMD
selection=MMDKernelSelectionOpt(mmd)

# select best kernel
best_kernel=selection.select_kernel()
best_kernel=GaussianKernel.obtain_from_generic(best_kernel)
print "Best single kernel has bandwidth %.2f" % best_kernel.get_width()
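The selection criterion used by CMMDKernelSelectionOpt is simple to state in code: pick the kernel whose ratio $\hat\eta_k/(\hat\sigma_k+\lambda)$ is largest. The following sketch assumes hypothetical per-kernel estimates eta_hat and sigma_hat, for example obtained on held-out data with the linear-time sketches above; it only illustrates the criterion, not Shogun's internals.

import numpy as np

def select_optimal_kernel(kernels, eta_hat, sigma_hat, lam=1e-5):
    # eta_hat[k]: linear-time MMD estimate for kernel k (on data not used for the final test)
    # sigma_hat[k]: linear-time estimate of its standard deviation
    ratios = np.asarray(eta_hat) / (np.asarray(sigma_hat) + lam)
    return kernels[int(np.argmax(ratios))]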
Now we perform a two-sample test with that kernel.
alpha=0.05
mmd=LinearTimeMMD(best_kernel, gen_p, gen_q, m, block_size)
mmd.set_null_approximation_method(MMD1_GAUSSIAN)
p_value_best=mmd.perform_test()

print "Gaussian approximation: P-value of MMD test with optimal kernel is %.2f" % p_value_best
For the linear time MMD, the null and alternative distributions look different than for the quadratic time MMD plotted above. Let's sample them (this takes longer, so we reduce the number of samples a bit). Note how we can tell the linear time MMD to simulate the null hypothesis, which is necessary since we cannot permute by hand as the samples are not in memory.
mmd=LinearTimeMMD(best_kernel, gen_p, gen_q, 5000, block_size)
num_samples=500

# sample null and alternative distribution, implicitly generate new data for that
null_samples=zeros(num_samples)
alt_samples=zeros(num_samples)
for i in range(num_samples):
    alt_samples[i]=mmd.compute_statistic()

    # tell MMD to merge data internally while streaming
    mmd.set_simulate_h0(True)
    null_samples[i]=mmd.compute_statistic()
    mmd.set_simulate_h0(False)
ICING tutorial <hr> ICING is an IG clonotype inference library developed in Python. <font color="red"><b>NB:</b></font> This is <font color="red"><b>NOT</b></font> a quickstart guide for ICING. It is intended as a detailed tutorial on how ICING works internally. If you're only interested in using ICING, please refer to the Quickstart Manual on github, or the <font color="blue">Quickstart section at the end of this notebook</font>. ICING needs as input a file (TAB-delimited or CSV) which contains one sequence per row. The format is the same as the one returned by Change-O's MakeDb.py script, which, starting from IMGT results and the raw FASTA sequences, builds a single file with all the information extracted from IMGT.

0. Data loading

Load the dataset into a single pandas dataframe. The dataset MUST CONTAIN at least the following columns (NOT case-sensitive):
- SEQUENCE_ID
- V_CALL
- J_CALL
- JUNCTION
- MUT (only if correct is True)
db_file = '../examples/data/clones_100.100.tab'

# dialect="excel" for CSV or XLS files
# for computational reasons, let's limit the dataset to the first 1000 sequences
X = io.load_dataframe(db_file, dialect="excel-tab")[:1000]

# turn the following off if data are real
# otherwise, assume that the "SEQUENCE_ID" field is composed as
# "[db]_[extension]_[id]_[id-true-clonotype]_[other-info]"
# See the example file for the format of the input.
X['true_clone'] = [x[3] for x in X.sequence_id.str.split('_')]
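If you want to experiment without a Change-O file, a minimal dataframe with the required columns can be built by hand. The rows below are made up purely for illustration and carry no biological meaning.

import pandas as pd

toy = pd.DataFrame({
    'SEQUENCE_ID': ['db_ex_1_0_x', 'db_ex_2_0_x', 'db_ex_3_1_x'],
    'V_CALL': ['IGHV1-69*01', 'IGHV1-69*01', 'IGHV3-23*01'],
    'J_CALL': ['IGHJ4*02', 'IGHJ4*02', 'IGHJ6*02'],
    'JUNCTION': ['TGTGCGAGAGAT', 'TGTGCGAGAGAT', 'TGTGCGAAAGGG'],
    'MUT': [0.02, 0.03, 0.10],   # only needed if correct=True
})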
1. Preprocessing step: data shrinking

Especially in CLL patients, many of the input sequences have the same V genes AND junction. In this case, it is possible to remove such sequences from the analysis (we just need to remember them afterwards). In other words, we can collapse repeated sequences into a single one, which will weigh as much as the number of sequences it represents.
# group by junction and v genes
groups = X.groupby(["v_gene_set_str", "junc"]).groups.values()

idxs = np.array([elem[0] for elem in groups])      # take one of them
weights = np.array([len(elem) for elem in groups]) # assign its weight
2. High-level group inference

The number of sequences at this point may still be very high, in particular when IGs are mutated and there is little duplication. However, we rely on the fact that IG similarity is mainly constrained by junction length. Therefore, we infer high-level groups based on junction lengths. This is a fast and efficient step. Also, by exploiting MiniBatchKMeans, we can specify an upper bound on the number of clusters we want to obtain; however, contrary to the standard KMeans algorithm, some clusters may vanish. If related IGs with very different junction lengths are expected, it is reasonable to specify a low number of clusters. Keep in mind, however, that a low number of clusters corresponds to a higher computational workload for the method in the next phases.
n_clusters = 50
X_all = idxs.reshape(-1,1)

kmeans = MiniBatchKMeans(n_init=100, n_clusters=min(n_clusters, X_all.shape[0]))
lengths = X['junction_length'].values
kmeans.fit(lengths[idxs].reshape(-1,1))
3. Fine-grained group inference

Now we have high-level groups of IGs from which we have to extract clonotypes. We divide the dataset based on the labels extracted from MiniBatchKMeans, and for each cluster we find the clonotypes it contains using DBSCAN. This algorithm allows us to use a custom metric between IGs.

[<font color='blue'><b>ADVANCED</b></font>] To develop a custom metric, see the format of icing.core.distances.distance_dataframe. If you use a custom function, you only need to pass it as the metric parameter of DBSCAN. Note that partial is required if the metric has more than two parameters: to be a valid metric for DBSCAN, the function must take ONLY two parameters (the two elements to compare). For this reason, the other arguments are pre-filled with partial in the following example.
dbscan = DBSCAN(min_samples=20, n_jobs=-1, algorithm='brute', eps=0.2,
                metric=partial(distance_dataframe, X,
                               junction_dist=distances.StringDistance(model='ham'),
                               correct=True, tol=0))

dbscan_labels = np.zeros_like(kmeans.labels_).ravel()
for label in np.unique(kmeans.labels_):
    idx_row = np.where(kmeans.labels_ == label)[0]
    X_idx = idxs[idx_row].reshape(-1, 1).astype('float64')
    weights_idx = weights[idx_row]

    if idx_row.size == 1:
        db_labels = np.array([0])
    db_labels = dbscan.fit_predict(X_idx, sample_weight=weights_idx)
    if len(dbscan.core_sample_indices_) < 1:
        db_labels[:] = 0
    if -1 in db_labels:
        # this means that DBSCAN found some IG as noise. We choose to assign to the nearest cluster
        balltree = BallTree(X_idx[dbscan.core_sample_indices_], metric=dbscan.metric)
        noise_labels = balltree.query(X_idx[db_labels == -1], k=1, return_distance=False).ravel()
        # get labels for core points, then assign to noise points based on balltree
        dbscan_noise_labels = db_labels[dbscan.core_sample_indices_][noise_labels]
        db_labels[db_labels == -1] = dbscan_noise_labels

    # hopefully, there are no noisy samples at this time
    db_labels[db_labels > -1] = db_labels[db_labels > -1] + np.max(dbscan_labels) + 1
    dbscan_labels[idx_row] = db_labels  # + np.max(dbscan_labels) + 1

labels = dbscan_labels

# new part: put together the labels
labels_ext = np.zeros(X.shape[0], dtype=int)
labels_ext[idxs] = labels
for i, list_ in enumerate(groups):
    labels_ext[list_] = labels[i]
labels = labels_ext
Quickstart <hr> All of the above-mentioned steps are integrated in ICING with a simple call to the class inference.ICINGTwoStep. The following is an example of a working script.
db_file = '../examples/data/clones_100.100.tab'
correct = True
tolerance = 0

X = io.load_dataframe(db_file)[:1000]

# turn the following off if data are real
X['true_clone'] = [x[3] for x in X.sequence_id.str.split('_')]
true_clones = LabelEncoder().fit_transform(X.true_clone.values)

ii = inference.ICINGTwoStep(
    model='nt', eps=0.2, method='dbscan', verbose=True,
    kmeans_params=dict(n_init=100, n_clusters=20),
    dbscan_params=dict(min_samples=20, n_jobs=-1, algorithm='brute',
                       metric=partial(distance_dataframe, X, **dict(
                           junction_dist=StringDistance(model='ham'),
                           correct=correct, tol=tolerance))))

tic = time.time()
labels = ii.fit_predict(X)
tac = time.time() - tic
print("\nElapsed time: %.1fs" % tac)
If you want to save the results:
X['icing_clones (%s)' % ('_'.join(('StringDistance', str(eps), '0',
                                   'corr' if correct else 'nocorr',
                                   "%.4f" % tac)))] = labels
X.to_csv(db_file.split('/')[-1] + '_icing.csv')
How is the result?
from sklearn import metrics

true_clones = LabelEncoder().fit_transform(X.true_clone.values)
print "FMI: %.5f" % (metrics.fowlkes_mallows_score(true_clones, labels))
print "ARI: %.5f" % (metrics.adjusted_rand_score(true_clones, labels))
print "AMI: %.5f" % (metrics.adjusted_mutual_info_score(true_clones, labels))
print "NMI: %.5f" % (metrics.normalized_mutual_info_score(true_clones, labels))
print "Hom: %.5f" % (metrics.homogeneity_score(true_clones, labels))
print "Com: %.5f" % (metrics.completeness_score(true_clones, labels))
print "Vsc: %.5f" % (metrics.v_measure_score(true_clones, labels))
Is it better or worse than the result obtained when clustering everything at once?
labels = dbscan.fit_predict(np.arange(X.shape[0]).reshape(-1, 1))

print "FMI: %.5f" % metrics.fowlkes_mallows_score(true_clones, labels)
print "ARI: %.5f" % (metrics.adjusted_rand_score(true_clones, labels))
print "AMI: %.5f" % (metrics.adjusted_mutual_info_score(true_clones, labels))
print "NMI: %.5f" % (metrics.normalized_mutual_info_score(true_clones, labels))
print "Hom: %.5f" % (metrics.homogeneity_score(true_clones, labels))
print "Com: %.5f" % (metrics.completeness_score(true_clones, labels))
print "Vsc: %.5f" % (metrics.v_measure_score(true_clones, labels))
Now fit using the XID+ interface to pystan
%%time
from xidplus.stan_fit import SPIRE

fit=SPIRE.all_bands(prior250,prior350,prior500,iter=1000)
Initialise the posterior class with the fit object from pystan, and save alongside the prior classes
posterior=xidplus.posterior_stan(fit,[prior250,prior350,prior500])
xidplus.save([prior250,prior350,prior500],posterior,'test')
Alternatively, you can fit with the pyro backend.
%%time
from xidplus.pyro_fit import SPIRE

fit_pyro=SPIRE.all_bands([prior250,prior350,prior500],n_steps=10000,lr=0.001,sub=0.1)
posterior_pyro=xidplus.posterior_pyro(fit_pyro,[prior250,prior350,prior500])
xidplus.save([prior250,prior350,prior500],posterior_pyro,'test_pyro')
plt.semilogy(posterior_pyro.loss_history)
You can fit with the numpyro backend.
%%time
from xidplus.numpyro_fit import SPIRE

fit_numpyro=SPIRE.all_bands([prior250,prior350,prior500])
posterior_numpyro=xidplus.posterior_numpyro(fit_numpyro,[prior250,prior350,prior500])
xidplus.save([prior250,prior350,prior500],posterior_numpyro,'test_numpyro')
prior250.bkg
We will want to run the notebook in the future with updated values. How can we do this? Make the dates update automatically.
start = datetime.datetime(2017, 3, 2)  # the day Snap went public
end = datetime.date.today()            # datetime.date.today

snap = web.DataReader("SNAP", 'google', start, end)
snap
snap.index.tolist()
.format() We want to print something with systematic changes in the text. Suppose we want to print out the following information: 'On day X Snap closed at VALUE Y and the volume was Z.'
# How did we do this before?
for index in snap.index:
    print('On day', index, 'Snap closed at', snap['Close'][index],
          'and the volume was', snap['Volume'][index], '.')
This looks awful. We want to shorten the date and express the volume in millions.
# express Volume in millions
snap['Volume'] = snap['Volume']/10**6
snap
The .format() method: what is format and how does it work? Google it and find a good link.
print('Today is {}.'.format(datetime.date.today()))

for index in snap.index:
    print('On {} Snap closed at ${} and the volume was {} million.'.format(
        index, snap['Close'][index], snap['Volume'][index]))

for index in snap.index:
    print('On {:.10} Snap closed at ${} and the volume was {:.1f} million.'.format(
        str(index), snap['Close'][index], snap['Volume'][index]))
Check Olson's blog and style recommendations.
fig, ax = plt.subplots()  # figsize=(8,5)
snap['Close'].plot(ax=ax, grid=True, style='o', alpha=.6)
ax.set_xlim([snap.index[0]-datetime.timedelta(days=1), snap.index[-1]+datetime.timedelta(days=1)])
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.vlines(snap.index, snap['Low'], snap['High'], alpha=.2, lw=.9)
ax.set_ylabel('SNAP share price', fontsize=14)
ax.set_xlabel('Date', fontsize=14)
plt.show()

start_w = datetime.datetime(2008, 6, 8)
oilwater = web.DataReader(['BP', 'AWK'], 'google', start_w, end)
oilwater.describe
type(oilwater[:,:,'AWK'])
water = oilwater[:, :, 'AWK']
oil = oilwater[:, :, 'BP']

#import seaborn as sns
#import matplotlib as mpl
#mpl.rcParams.update(mpl.rcParamsDefault)
plt.style.use('seaborn-notebook')
plt.rc('font', family='serif')

deepwater = datetime.datetime(2010, 4, 20)

fig, ax = plt.subplots(figsize=(8, 5))
water['Close'].plot(ax=ax, label='AWK', lw=.7)  # grid=True,
oil['Close'].plot(ax=ax, label='BP', lw=.7)     # grid=True,
ax.yaxis.grid(True)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.vlines(deepwater, 0, 100, linestyles='dashed', alpha=.6)
ax.text(deepwater, 70, 'Deepwater catastrophe', horizontalalignment='center')
ax.set_ylim([0, 100])
ax.legend(bbox_to_anchor=(1.2, .9), frameon=False)
plt.show()

print(plt.style.available)

fig, ax = plt.subplots(figsize=(8, 5))
water['AWK_pct_ch'] = water['Close'].diff().cumsum()/water['Close'].iloc[0]
oil['BP_pct_ch'] = oil['Close'].diff().cumsum()/oil['Close'].iloc[0]
#water['Close'].pct_change().cumsum().plot(ax=ax, label='AWK')
water['AWK_pct_ch'].plot(ax=ax, label='AWK', lw=.7)
#oil['Close'].pct_change().cumsum().plot(ax=ax, label='BP')
oil['BP_pct_ch'].plot(ax=ax, label='BP', lw=.7)
ax.yaxis.grid(True)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.vlines(deepwater, -1, 3, linestyles='dashed', alpha=.6)
ax.text(deepwater, 1.2, 'Deepwater catastrophe', horizontalalignment='center')
ax.set_ylim([-1, 3])
ax.legend(bbox_to_anchor=(1.2, .9), frameon=False)
ax.set_title('Percentage change relative to {:.10}\n'.format(str(start_w)), fontsize=14, loc='left')
plt.show()
Code/notebooks/bootcamp_format_plotting.ipynb
NYUDataBootcamp/Materials
mit
ea02aec613f68d0974a86796e6000d68
Machine Dependent Options Each installation of GPy also creates an installation.cfg file. This file should include any installation specific settings for your GPy installation. For example, if a particular machine is set up to run OpenMP then the installation.cfg file should contain
# This is the local installation configuration file for GPy
[parallel]
openmp=True
GPy/config.ipynb
SheffieldML/notebook
bsd-3-clause
9f7df9dad8be1feba7a78d18e2e12f5a
With mutable collection types (list and dictionary), if such an object is used as a default argument it is created only once and shared across calls, so every call that mutates it changes the result of subsequent calls.
def foo(values, x=[]):
    for value in values:
        x.append(value)
    return x

foo([0,1,2])
foo([4,5])  # the shared default list still holds 0, 1, 2 from the previous call

def foo_fix(values, x=[]):
    if len(x) != 0:
        x = []
    for value in values:
        x.append(value)
    return x

foo_fix([0,1,2])
foo_fix([4,5])
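The foo_fix above resets the shared list by hand. The more common idiom is to use None as the sentinel default; the sketch below (the name foo_none is ours, not from the notebook) shows that pattern.

def foo_none(values, x=None):
    # a fresh list is created on every call that does not pass x explicitly
    if x is None:
        x = []
    for value in values:
        x.append(value)
    return x

print(foo_none([0, 1, 2]))  # [0, 1, 2]
print(foo_none([4, 5]))     # [4, 5]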
python-statatics-tutorial/basic-theme/python-language/Function.ipynb
gaufung/Data_Analytics_Learning_Note
mit
b75088c7655a47f04fd6afba9ba0f1b0
2 Global variables
x = 5

def set_x(y):
    x = y
    print 'inner x is {}'.format(x)

set_x(10)
print 'global x is {}'.format(x)
python-statatics-tutorial/basic-theme/python-language/Function.ipynb
gaufung/Data_Analytics_Learning_Note
mit
fa38106c5ab140e865c353620893603e
x = 5 declares x as a global variable, but the x assigned inside set_x is a local variable, so the global x is not changed.
def set_global_x(y):
    global x
    x = y
    print 'global x is {}'.format(x)

set_global_x(10)
print 'global x now is {}'.format(x)
python-statatics-tutorial/basic-theme/python-language/Function.ipynb
gaufung/Data_Analytics_Learning_Note
mit
7a4260709cc0ce4bab5ecf6701e5f94e
By adding the global keyword, the global variable x is changed. 3 Exercise Fibonacci sequence $F_{n+1}=F_{n}+F_{n-1}$ where $F_{0}=0, F_{1}=1, F_{2}=1, F_{3}=2, \cdots$ Recursive version: the running time of the naive recursion grows exponentially in $n$ (roughly $T(n)=O(2^{n})$).
def fib_recursive(n):
    if n == 0 or n == 1:
        return n
    else:
        return fib_recursive(n-1) + fib_recursive(n-2)

fib_recursive(10)
python-statatics-tutorial/basic-theme/python-language/Function.ipynb
gaufung/Data_Analytics_Learning_Note
mit
9b735dfc6d9e63229d1d7a02b7093265
Iterative version: the time complexity is $T(n)=O(n)$.
def fib_iterator(n):
    g = 0
    h = 1
    i = 0
    while i < n:
        h = g + h
        g = h - g
        i += 1
    return g

fib_iterator(10)
python-statatics-tutorial/basic-theme/python-language/Function.ipynb
gaufung/Data_Analytics_Learning_Note
mit
59d7ce1646fc380f6a5dcf20af7a80f5
Generator version: the yield keyword turns the function into a generator that can be iterated over.
def fib_iter(n):
    g = 0
    h = 1
    i = 0
    while i < n:
        h = g + h
        g = h - g
        i += 1
        yield g

for value in fib_iter(10):
    print value,
python-statatics-tutorial/basic-theme/python-language/Function.ipynb
gaufung/Data_Analytics_Learning_Note
mit
6aca22eac46898a1df1513e583180f19
Matrix method: $$\begin{bmatrix}F_{n+1}\\F_{n}\end{bmatrix}=\begin{bmatrix}1&1\\1&0\end{bmatrix}\begin{bmatrix}F_{n}\\F_{n-1}\end{bmatrix}$$ Let $u_{n+1}=Au_{n}$ where $u_{n}=\begin{bmatrix}F_{n+1}\\F_{n}\end{bmatrix}$. Iterating the recursion gives $u_{n}=A^{n}u_{0}$, where $u_{0}=\begin{bmatrix}1\\0\end{bmatrix}$. Since $A^{n}$ can be computed as $(A^{n/2})^{2}$, the time complexity drops to $O(\log n)$.
import numpy as np

a = np.array([[1,1],[1,0]])

def pow_n(n):
    if n == 1:
        return a
    elif n % 2 == 0:
        half = pow_n(n/2)
        return half.dot(half)
    else:
        half = pow_n((n-1)/2)
        return a.dot(half).dot(half)

def fib_pow(n):
    a_n = pow_n(n)
    u_0 = np.array([1,0])
    return a_n.dot(u_0)[1]

fib_pow(10)
python-statatics-tutorial/basic-theme/python-language/Function.ipynb
gaufung/Data_Analytics_Learning_Note
mit
a54dd5e6593806764be2cd0805d03102
Quick Sort
def quick_sort(array):
    if len(array) < 2:
        return array
    else:
        pivot = array[0]
        left = [item for item in array[1:] if item < pivot]
        right = [item for item in array[1:] if item >= pivot]
        return quick_sort(left) + [pivot] + quick_sort(right)

quick_sort([10,11,3,21,9,22])
python-statatics-tutorial/basic-theme/python-language/Function.ipynb
gaufung/Data_Analytics_Learning_Note
mit
b314b31677c576d9a9b8a9fe70435853
And why would I want that? What is pandas good for? Pandas is useful if you want to: work with data easily; quickly explore a dataset and understand the data you have; easily manipulate information, for example compute statistics; plot patterns and distributions in the data; work with Excel files and databases without having to use those tools; and much more... The DataFrame in Pandas. The basic data structure in Pandas is called a DataFrame; with it we handle all the data and apply transformations. This is how we create an empty DataFrame:
df.head()
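The cell above only displays the (still empty) frame; the creation step the text refers to is not in this checkpoint. A minimal sketch of it, assuming pandas and numpy are imported as usual, would be:

import pandas as pd
import numpy as np

df = pd.DataFrame()  # an empty DataFrame, to be filled in the next cell
df.head()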
Dia1/.ipynb_checkpoints/2_PandasIntro-checkpoint.ipynb
beangoben/HistoriaDatos_Higgs
gpl-2.0
2d032c8b97962e6424d520a3e6b8d2c9
An empty one is of no use to us, so let's add some information! Filling a DataFrame with information. Situation: suppose you are a taquero (taco vendor) and you want to build a dataframe of how many tacos you sell in a week, to see which tacos are the most popular and put more effort into them. We will assume: you sell tacos de Pastor, Tripa and Chorizo; there are 7 days in a week, Monday to Sunday (obviously); we will generate the number of tacos as random integers (np.random.randint). Note: if we put the dataframe variable at the end of a cell, a table with the data will be displayed, eah!
df['Pastor'] = np.random.randint(100, size=7)
df['Tripas'] = np.random.randint(100, size=7)
df['Chorizo'] = np.random.randint(100, size=7)
df.index = ['Lunes','Martes','Miercoles','Jueves','Viernes','Sabado','Domingo']
df
Dia1/.ipynb_checkpoints/2_PandasIntro-checkpoint.ipynb
beangoben/HistoriaDatos_Higgs
gpl-2.0
b38101788d34d4d05db016644831aea8
Lesson 1 Create Data - We begin by creating our own data set for analysis. This prevents the end user reading this tutorial from having to download any files to replicate the results below. We will export this data set to a text file so that you can get some experience pulling data from a text file. Get Data - We will learn how to read in the text file. The data consist of baby names and the number of babies born with each name in the year 1880. Prepare Data - Here we will simply take a look at the data and make sure it is clean. By clean I mean we will take a look inside the contents of the text file and look for any anomalies. These can include missing data, inconsistencies in the data, or any other data that seems out of place. If any are found we will then have to make decisions on what to do with these records. Analyze Data - We will simply find the most popular name in a specific year. Present Data - Through tabular data and a graph, clearly show the end user what the most popular name in a specific year is. The pandas library is used for all the data analysis excluding a small piece of the data presentation section. The matplotlib library will only be needed for the data presentation section. Importing the libraries is the first step we will take in the lesson.
# Import all libraries needed for the tutorial

# General syntax to import specific functions in a library:
##from (library) import (specific library function)
from pandas import DataFrame, read_csv

# General syntax to import a library but no functions:
##import (library) as (give the library a nickname/alias)
import matplotlib.pyplot as plt
import pandas as pd #this is how I usually import pandas
import sys #only needed to determine Python version number

# Enable inline plotting
%matplotlib inline

print 'Python version ' + sys.version
print 'Pandas version ' + pd.__version__
notebooks/pandas_tutorial.ipynb
babraham123/script-runner
mit
204b801c3d6f2ec439f3e058cb9cb303
Single computer get data (5) # usdgbp
demo.get_price('GBPUSD')
process.processSinglePrice()

demo.get_price('USDEUR')
process.processSinglePrice()

demo.get_price('EURGBP')
process.processSinglePrice()

demo.get_prices(1)
process.processPrices(3)
demos/demo1/00_DEMO_01.ipynb
mhallett/MeDaReDa
mit
0411242ba472bdfc091bbc74de8610a9
Limitations. Multi-computer (cloud): set the workers to work.
while True:
    process.processSinglePrice()
    #break
demos/demo1/00_DEMO_01.ipynb
mhallett/MeDaReDa
mit
dd500d06fb07f8c3466941d2708599da
Trying reduce without and with an initializer.
from functools import reduce  # reduce is not a builtin in Python 3
from operator import add

for result in (reduce(add, [42]), reduce(add, [42], 10)):
    print(result)
content/posts/coding/recursion_looping_relationship.ipynb
dm-wyncode/zipped-code
mit
b5973f1f94d26d5613a71a5bcce02e8a
My rewrite of functools.reduce using recursion. For the sake of demonstration only.
def first(value_list):
    return value_list[0]

def rest(value_list):
    return value_list[1:]

def is_undefined(value):
    return value is None

def recursive_reduce(function, iterable, initializer=None):
    if is_undefined(initializer):
        initializer = accum_value = first(iterable)
    else:
        accum_value = function(initializer, first(iterable))
    if len(iterable) == 1:
        # base case
        return accum_value
    return recursive_reduce(function, rest(iterable), accum_value)
content/posts/coding/recursion_looping_relationship.ipynb
dm-wyncode/zipped-code
mit
fd9a78031d6633375793dd8011d13ade
Test. Test if the two functions return the sum of a list of random numbers.
from random import choice
from operator import add

LINE = ''.join(('-', ) * 20)
print(LINE)
for _ in range(5):
    # create a tuple of random numbers of length 2 to 10
    test_values = tuple(choice(range(101)) for _ in range(choice(range(2, 11))))
    print('Testing these values: {}'.format(test_values))
    # use sum for canonical value
    expected = sum(test_values)
    print('The expected result: {}\n'.format(expected))
    test_answers = ((f.__name__, f(add, test_values)) for f in (reduce, recursive_reduce))
    test_results = ((f_name, test_answer == expected, ) for f_name, test_answer in test_answers)
    for f_name, answer in test_results:
        try:
            assert answer
            print('`{}` passed: {}'.format(f_name, answer))
        except AssertionError:
            print('`{}` failed: {}'.format(f_name, not answer))
    print(LINE)

from recursion_looping_relationship_meta import tweets
content/posts/coding/recursion_looping_relationship.ipynb
dm-wyncode/zipped-code
mit
70ff28ded94ac529907734169f639f3b
Then let us generate some points in 2-D that will form our dataset:
# Create some data points
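The bootcamp's exact points are not shown here, so the sizes and offsets below are our own choice; this is only a minimal sketch of what the elided cell might contain.

import numpy as np

np.random.seed(0)
# two clusters of 2-D points: class 1 (crosses) and class 0 (circles)
crosses = np.random.randn(10, 2) + np.array([2.0, 2.0])
circles = np.random.randn(10, 2) + np.array([-1.0, -1.0])
X = np.vstack((crosses, circles))
y = np.array([1]*10 + [0]*10)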
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
c49583c28c3a6d881add159c674acbc9
Let's visualise these points in a scatterplot using the plot function from matplotlib
# Visualise the points in a scatterplot
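A possible visualisation, reusing the hypothetical crosses/circles arrays sketched above:

import matplotlib.pyplot as plt

plt.plot(crosses[:, 0], crosses[:, 1], 'x', label='class 1 (crosses)')
plt.plot(circles[:, 0], circles[:, 1], 'o', label='class 0 (circles)')
plt.legend()
plt.show()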
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
8d4afe3b5515005fe4928766020fe1c1
Here, imagine that the purpose is to build a classifier that, for a given new point, returns whether it belongs to the crosses (class 1) or the circles (class 0). Learning Activity 2: Computing the output of a Perceptron Let's now define a function which returns the output of a Perceptron for a single input point.
# Now let's build a perceptron for our points
def outPerceptron(x,w,b):
    innerProd = np.dot(x,w) # computes the weighted sum of input
    output = 0
    if innerProd > b:
        output = 1
    return output
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
eeebe8e50742e98d559c04fb7586f9af
It’s useful to define a function which returns the sequence of outputs of the Perceptron for a sequence of input points:
# Define a function which returns the sequence of outputs for a sequence of input points
def multiOutPerceptron(X,w,b):
    nInstances = X.shape[0]
    outputs = np.zeros(nInstances)
    for i in range(0,nInstances):
        outputs[i] = outPerceptron(X[i,:],w,b)
    return outputs
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
888be29ceff7552a163a68c399318432
Bonus Activity: Efficient coding of multiOutPerceptron In the above implementation, the simple outPerceptron function is called for every single instance. It is cleaner and more efficient to code everything in one function using matrices:
# Optimise the multiOutPerceptron function
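One way to vectorise this (a sketch; the bootcamp's own solution is not shown, and the name multiOutPerceptron2 is ours) is to replace the Python loop by a single matrix-vector product:

import numpy as np

def multiOutPerceptron2(X, w, b):
    # X.dot(w) computes all weighted sums at once; the comparison gives 0/1 outputs
    return (np.dot(X, w) > b).astype(float)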
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
fe6ec72e538bb340a289190d97a819da
Learning Activity 4: Playing with weights and thresholds Let's try some weights and thresholds, and see what happens:
# Try some initial weights and thresholds
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
572b8267a1f3d603e34f4a4058698586
So this is clearly not great! It classifies the first point in one category and all the others in the other one. Let's try something else (an educated guess this time).
# Try an "educated guess"
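The exact values used in the bootcamp are not shown, but one choice consistent with the separating line y = 0.5x - 0.2 quoted below would be the following (the weights are our own assumption, and X is the hypothetical point matrix sketched earlier):

# Educated guess: points above the line y = 0.5x - 0.2 should fire (output 1)
w = np.array([-0.5, 1.0])   # weights
b = -0.2                    # threshold
print(multiOutPerceptron(X, w, b))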
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
c5dc1ca16d58750e161255587fd89b19
This is much better! To obtain these values, we found a separating hyperplane (here a line) between the points. The equation of the line is y = 0.5x - 0.2. Quiz: Can you explain why this line corresponds to the weights and bias we used? Is this separating line unique? What does that mean? Can you check that the perceptron will indeed classify any point above the red line as a 1 (cross) and every point below as a 0 (circle)? Learning Activity 5: Illustration of the output of the Perceptron and the separating line
# Visualise the separating line
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
e93f092e321d4534465a36d3b22ddc8b
Now try adding new points to see how they are classified:
# Add new points and test
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
55652145913c0c7ab7a002275e4ee4f4
Visualise the new test points in the graph and plot the separating lines.
# Visualise the new points and line
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
5e0344c8fbd4fbe6c60c0207976d1933
Note here that the two sets of parameters classify the squares identically but not the triangle. You can now ask yourself: which of the two sets of parameters makes more sense? How would you classify that triangle? These types of points are frequent in realistic datasets and the question of how to classify them "accurately" is often very hard to answer... Gradient Descent Learning Activity 6: Coding a simple gradient descent Definition of a function and its gradient $f(x) = \exp(-\sin(x))x^2$ $f'(x) = -x \exp(-\sin(x)) (x\cos(x)-2)$ It is convenient to define Python functions which return the value of the function and its gradient at an arbitrary point $x$.
def function(x):
    return np.exp(-np.sin(x))*(x**2)

def gradient(x):
    return -x*np.exp(-np.sin(x))*(x*np.cos(x)-2) # use wolfram alpha!
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
010ca3b4c082ad091e6d49812b96351d
Let's see what the function looks like
# Visualise the function
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
e77057260707ce891d040bc43115be19
Now let us implement a simple Gradient Descent that uses constant stepsizes. We define two functions: the first is the simplest version, which doesn't store the intermediate steps that are taken; the second does store the steps, which is useful for visualising what is going on and explaining some of the typical behaviour of GD.
def simpleGD(x0,stepsize,nsteps):
    x = x0
    for k in range(0,nsteps):
        x -= stepsize*gradient(x)
    return x

def simpleGD2(x0,stepsize,nsteps):
    x = np.zeros(nsteps+1)
    x[0] = x0
    for k in range(0,nsteps):
        x[k+1] = x[k]-stepsize*gradient(x[k])
    return x
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
8c9aaa8eabc24e8b30698668f9f3880b
Let's see what it looks like. Let's start from $x_0 = 3$, use a (constant) stepsize of $\delta=0.1$ and let's go for 100 steps.
# Try the first given values
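Using the values quoted in the text, a sketch of what this hidden cell presumably runs:

# start at x0 = 3, constant stepsize 0.1, 100 steps
x_final = simpleGD(3.0, 0.1, 100)
print(x_final)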
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
c6c1ce074f2417d5ffb798ff4a2c8091
Simple inspection of the figure above shows that this is close enough to the actual true minimum ($x^\star=0$). A few standard situations:
# Try the second given values
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
021de5c9506b13a1f6a4f1134f34831f
Ok! so that's still alright
# Try the third given values
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
e4d0ac15f06a73bcdf93351541f70bac
That's not... Visual inspection of the figure above shows that we got stuck in a local optimum. Below we define a simple visualisation function to show where the GD algorithm brings us; it can safely be skipped.
def viz(x,a=-10,b=10):
    xx = np.linspace(a,b,100)
    yy = function(xx)
    ygd = function(x)
    plt.plot(xx,yy)
    plt.plot(x,ygd,color='red')
    plt.plot(x[0],ygd[0],marker='o',color='green',markersize=10)
    plt.plot(x[len(x)-1],ygd[len(x)-1],marker='o',color='red',markersize=10)
    plt.show()
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
5bd84f87273f4e4c1cc4d894ebf78d42
Let's show the steps that were taken in the various cases that we considered above
# Visualise the steps taken in the previous cases
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
77bee69ba30c6286553385f71fce3a9c
To summarise these three cases: - In the first case, we start from a sensible point (not far from the optimal value $x^\star = 0$ and on a slope that leads directly to it) and we get to a very satisfactory point. - In the second case, we start from a less sensible point (on a slope that does not lead directly to it) and yet the algorithm still gets us to a very satisfactory point. - In the third case, we also start from a bad location but this time the algorithm gets stuck in a local minimum. Attacking MNIST Learning Activity 7: Loading the Python libraries Import statements for the Keras library
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.optimizers import SGD, RMSprop
from keras.utils import np_utils

# Some generic parameters for the learning process
batch_size = 100   # number of instances each noisy gradient will be evaluated upon
nb_classes = 10    # 10 classes 0-1-...-9
nb_epoch = 10      # computational budget: 10 passes through the whole dataset
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
130085ad9c92368a305c1ba0c3a48b6b
Learning Activity 8: Loading the MNIST dataset Keras does the loading of the data itself and shuffles the data randomly. This is useful since the difficulty of the examples in the dataset is not uniform (the last examples are harder than the first ones)
# Load the MNIST data
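A sketch of what this cell typically does with the Keras loader; the names images_train/labels_train are chosen to match the variables used further below.

# Load the MNIST data (downloaded on first use)
(images_train, labels_train), (images_test, labels_test) = mnist.load_data()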
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
608f45ca085374a192b9ae74509e449f
You can also depict a sample from either the training or the test set using the imshow() function:
# Display the first image
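For instance (assuming the images_train/labels_train names from the sketch above):

# Display the first training image together with its label
plt.imshow(images_train[0], cmap='gray')
plt.show()
print(labels_train[0])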
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
c440872029a623aef25d365209012a3c
Ok, the label 5 does indeed seem to correspond to that number! Let's check the dimension of the dataset. Learning Activity 9: Reshaping the dataset Each image in MNIST has 28 by 28 pixels, which results in a $28\times 28$ array. As a next step, and prior to feeding the data into our NN classifier, we need to flatten each array into a $28\times 28=784$ dimensional vector. Each component of the vector holds an integer value between 0 (black) and 255 (white), which we need to normalise to the range 0 to 1.
# Reshaping of vectors in a format that works with the way the layers are coded
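A plausible version of the reshaping step, sketched under the naming assumptions above (X_train/X_test are our own names):

# Flatten the 28x28 images into 784-dimensional float vectors in [0, 1]
X_train = images_train.reshape(60000, 784).astype('float32') / 255
X_test = images_test.reshape(10000, 784).astype('float32') / 255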
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
598a43807f7cd83029bfb3c7c452f1ba
Remember, it is always good practice to check the dimensionality of your train and test data using the shape command prior to constructing any classification model:
# Check the dimensionality of train and test
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
66e47872d9714033eae296e7bed90e60
So we have 60,000 training samples, 10,000 test samples, and the samples (instances) are 28x28 arrays. We need to reshape these instances as vectors (of 784=28x28 components). For storage efficiency, the values of the components are stored as uint8; we need to cast them to float32 so that Keras can deal with them. Finally, we normalise the values to the 0-1 range. The labels are stored as integer values from 0 to 9. We need to tell Keras that these form the output categories via the function to_categorical.
# Set y categorical
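One-hot encoding of the labels, sketched with the np_utils helper imported above (the Y_train/Y_test names are our own):

# Convert integer labels 0-9 into one-hot vectors of length nb_classes
Y_train = np_utils.to_categorical(labels_train, nb_classes)
Y_test = np_utils.to_categorical(labels_test, nb_classes)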
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
0d11988c5ad88e47539e326655eb7d57
Learning Activity 10: Building a NN classifier A neural network model consists of artificial neurons arranged in a sequence of layers. Each layer receives a vector of inputs and converts these into some output. The interconnection pattern is "dense" meaning it is fully connected to the previous layer. Note that the first hidden layer needs to specify the size of the input which amounts to implicitly having an input layer.
# First, declare a model with a sequential architecture # Then add a first layer with 500 nodes and 784 inputs (the pixels of the image) # Define the activation function to use on the nodes of that first layer # Second hidden layer with 300 nodes # Output layer with 10 categories (+using softmax)
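Following the comments in the cell above, a sketch of the architecture could look like this (the choice of sigmoid activations for the hidden layers is our assumption; ReLU would also work):

model = Sequential()
# first hidden layer: 500 nodes, 784 inputs (the pixels of the image)
model.add(Dense(500, input_shape=(784,)))
model.add(Activation('sigmoid'))
# second hidden layer: 300 nodes
model.add(Dense(300))
model.add(Activation('sigmoid'))
# output layer: 10 categories, using softmax
model.add(Dense(10))
model.add(Activation('softmax'))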
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
20625b27fb9d93696c12e1b9aa5ec793
Learning Activity 11: Training and testing of the model Here we define a somewhat standard optimizer for NN. It is based on Stochastic Gradient Descent with some standard choice for the annealing.
# Definition of the optimizer.
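A standard SGD setup consistent with the description; the specific values are illustrative, not necessarily the bootcamp's exact choice.

# SGD with a small learning rate, some decay (annealing) and Nesterov momentum
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])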
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
941045428347c083ce80d52a42605575
Finding the right arguments here is non-trivial, but the choice suggested here will work well. The only parameter we can explain here is the first one, which can be understood as an initial scaling of the gradients. At this stage, launch the learning (fit the model). The model.fit function takes all the necessary arguments and trains the model. These arguments are: the training set (points and labels); global parameters for the learning (batch size and number of epochs); whether or not we want to show output during the learning; and the test set (points and labels).
# Fit the model
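The fit call then takes exactly the arguments listed above; this sketch uses the Keras 1.x keyword nb_epoch to match the constants defined earlier, and the X_train/Y_train names from our previous sketches.

model.fit(X_train, Y_train,
          batch_size=batch_size, nb_epoch=nb_epoch,
          verbose=1,
          validation_data=(X_test, Y_test))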
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
9ec81b9ac1d9b15885a180b3ffa0a627
Obviously we care far more about the results on the validation set, since it is data that the NN has not used for its training. Good results on the test set mean the model is robust.
# Display the results, the accuracy (over the test set) should be in the 98%
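A typical way to display the final score (a sketch, assuming the compile step included an accuracy metric as in our earlier sketch):

score = model.evaluate(X_test, Y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])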
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
03c7fde7f61b5da3e6d6b523561d0b9d
Bonus: Does it work?
def whatAmI(img):
    score = model.predict(img,batch_size=1,verbose=0)
    for s in range(0,10):
        print ('Am I a ', s, '? -- score: ', np.around(score[0][s]*100,3))

index = 1004 # here use anything between 0 and 9999
test = np.reshape(images_train[index,],(1,784))
plt.imshow(np.reshape(test,(28,28)), cmap="gray")
whatAmI(test)
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
92f075baf829d84bb15824e8f7f1657b
Does it work? (experimental Pt2)
from scipy import misc

test = misc.imread('data/ex7.jpg')
test = np.reshape(test,(1,784))
test = test.astype('float32')
test /= 255.
plt.imshow(np.reshape(test,(28,28)), cmap="gray")
whatAmI(test)
misc/machinelearningbootcamp/day2/neural_nets.ipynb
kinshuk4/MoocX
mit
1e060c1083e3ee7ba7dc7029f0dd8227
To keep the calculations below manageable we specify a single nside=64 healpixel in an arbitrary location of the DESI footprint.
healpixel = 26030 nside = 64
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
8062dfcc4dca21e86a54adc04c0dbd18
Specifying the random seed makes our calculations reproducible.
seed = 555 rand = np.random.RandomState(seed)
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
1746a68c57362be2aadfccbea234184f
Define a couple of wrapper routines we will use several times below.
def plot_subset(wave, flux, truth, objtruth, nplot=16, ncol=4, these=None, xlim=None,
                loc='right', targname='', objtype=''):
    """Plot a random sampling of spectra."""
    nspec, npix = flux.shape
    if nspec < nplot:
        nplot = nspec
    nrow = np.ceil(nplot / ncol).astype('int')

    if loc == 'left':
        xtxt, ytxt, ha = 0.05, 0.93, 'left'
    else:
        xtxt, ytxt, ha = 0.93, 0.93, 'right'

    if these is None:
        these = rand.choice(nspec, nplot, replace=False)
        these = np.sort(these)

    ww = (wave > 5500) * (wave < 5550)

    fig, ax = plt.subplots(nrow, ncol, figsize=(2.5*ncol, 2*nrow), sharey=False, sharex=True)
    for thisax, indx in zip(ax.flat, these):
        thisax.plot(wave, flux[indx, :] / np.median(flux[indx, ww]))
        if objtype == 'STAR' or objtype == 'WD':
            thisax.text(xtxt, ytxt, r'$T_{{eff}}$={:.0f} K'.format(objtruth['TEFF'][indx]),
                        ha=ha, va='top', transform=thisax.transAxes, fontsize=13)
        else:
            thisax.text(xtxt, ytxt, 'z={:.3f}'.format(truth['TRUEZ'][indx]),
                        ha=ha, va='top', transform=thisax.transAxes, fontsize=13)
        thisax.xaxis.set_major_locator(plt.MaxNLocator(3))
        if xlim:
            thisax.set_xlim(xlim)
    for thisax in ax.flat:
        thisax.yaxis.set_ticks([])
        thisax.margins(0.2)
    fig.suptitle(targname)
    fig.subplots_adjust(wspace=0.05, hspace=0.05, top=0.93)
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
780bffd6139e62596ab75dc11c21fc23
Tracer QSOs Both tracer and Lya QSO spectra contain an underlying QSO spectrum, but the Lya QSOs (which we demonstrate below) also include the Lya forest (here, based on the v2.0 of the "London" mocks). Every target class has its own dedicated "Maker" class.
from desitarget.mock.mockmaker import QSOMaker QSO = QSOMaker(seed=seed)
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
1af60f8dfef45422e771d56ac851798a
The various read methods return a dictionary with (hopefully self-explanatory) target- and mock-specific quantities. Because most mock catalogs only come with (cosmologically accurate) 3D positions (RA, Dec, redshift), we use Gaussian mixture models trained on real data to assign other quantities like shapes, magnitudes, and colors, depending on the target class. For more details see the gmm-dr7.pynb Python notebook.
dir(QSOMaker)

data = QSO.read(healpixels=healpixel, nside=nside)

for key in sorted(list(data.keys())):
    print('{:>20}'.format(key))
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
cc11272ba621c0d8cea41f8f79ed0914
Now we can generate the spectra as well as the targeting catalogs (targets) and corresponding truth table.
%time flux, wave, targets, truth, objtruth = QSO.make_spectra(data) print(flux.shape, wave.shape)
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
05ea7eaba33bffb6eacc6dbe291a212d
The truth catalog contains the target-type-agnostic, known properties of each object (including the noiseless photometry), while the objtruth catalog contains different information depending on the type of target.
truth
objtruth
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
e14bd2aa69fee28e59b6199f1d7db3ab
Next, let's run target selection, after which point the targets catalog should look just like an imaging targeting catalog (here, using the DR7 data model).
QSO.select_targets(targets, truth) targets
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
d6722b016ece3b3b3983195239b6c242
And indeed, we can see that only a subset of the QSOs were identified as targets (the rest scattered out of the QSO color selection boxes).
from desitarget.targetmask import desi_mask

isqso = (targets['DESI_TARGET'] & desi_mask.QSO) != 0
print('Identified {} / {} QSO targets.'.format(np.count_nonzero(isqso), len(targets)))
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
2217390afbe09d6a7c414de6ba05ce57
Finally, let's plot some example spectra.
plot_subset(wave, flux, truth, objtruth, targname='QSO')
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
1a85535b7a2d004bc5d593ebb1f84b80
Generating QSO spectra with cosmological Lya skewers proceeds along similar lines. Here, we also include BALs with 25% probability.
from desitarget.mock.mockmaker import LYAMaker

mockfile = '/project/projectdirs/desi/mocks/lya_forest/london/v9.0/v9.0.0/master.fits'
LYA = LYAMaker(seed=seed, balprob=0.25)
lyadata = LYA.read(mockfile=mockfile, healpixels=healpixel, nside=nside)
%time lyaflux, lyawave, lyatargets, lyatruth, lyaobjtruth = LYA.make_spectra(lyadata)

lyaobjtruth

plot_subset(lyawave, lyaflux, lyatruth, lyaobjtruth, xlim=(3500, 5500), targname='LYA')

# Now let's generate the same spectra, but including the different features and the new continuum model.
# For this we need to reload the desitarget module; for some reason it does not seem to be enough
# to define a different variable for the LYAMaker.
del sys.modules['desitarget.mock.mockmaker']
from desitarget.mock.mockmaker import LYAMaker

LYA = LYAMaker(seed=seed, sqmodel='lya_simqso_model_develop', balprob=0.25)
lyadata_continum = LYA.read(mockfile=mockfile, healpixels=healpixel, nside=nside)
%time lyaflux_cont, lyawave_cont, lyatargets_cont, lyatruth_cont, lyaobjtruth_cont = LYA.make_spectra(lyadata_continum)
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
70a2f536471d84ce9c893ed87af9f122
Let's plot some of the spectra with the old and the new continuum model together.
plt.figure(figsize=(20, 10))
indx = rand.choice(len(lyaflux), 9)
for i in range(9):
    plt.subplot(3, 3, i+1)
    plt.plot(lyawave, lyaflux[indx[i]], label="Old Continuum")
    plt.plot(lyawave_cont, lyaflux_cont[indx[i]], label="New Continuum")
    plt.legend()
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
40067c12fc5734ec0f03fe65f320a505
And finally we compare the colors for the two runs, with the new and the old continuum.
plt.plot(lyatruth["FLUX_W1"], lyatruth_cont["FLUX_W1"]/lyatruth["FLUX_W1"]-1, '.')
plt.xlabel("FLUX_W1")
plt.ylabel(r"FLUX_W1$^{new}$/FLUX_W1-1")

plt.plot(lyatruth["FLUX_W2"], lyatruth_cont["FLUX_W2"]/lyatruth["FLUX_W2"]-1, '.')
plt.xlabel("FLUX_W2")
plt.ylabel(r"(FLUX_W2$^{new}$/FLUX_W2)-1")

plt.hist(lyatruth["FLUX_W1"], bins=100, label="Old Continuum", alpha=0.7)
plt.hist(lyatruth_cont["FLUX_W1"], bins=100, label="New Continuum", histtype='step', linestyle='--')
plt.xlim(0, 100) # Limiting to 100 to see it better.
plt.xlabel("FLUX_W1")
plt.legend()

plt.hist(lyatruth["FLUX_W2"], bins=100, label="Old Continuum", alpha=0.7)
plt.hist(lyatruth_cont["FLUX_W2"], bins=100, label="New Continuum", histtype='step', linestyle='--')
plt.xlim(0, 100) # Limiting to 100 to see it better.
plt.xlabel("FLUX_W2")
plt.legend()
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
3d702f72dcc1b3d1b89f974b769e44bf
Conclusion: colors are slightly affected by changing the continuum model. To finalize the Lya section, let's generate another set of spectra, now including DLAs, metals, Lyb, etc.
# Reload desitarget; it does not seem to be enough to instantiate a different variable for the LYAMaker class.
del sys.modules['desitarget.mock.mockmaker']
from desitarget.mock.mockmaker import LYAMaker

LYA = LYAMaker(seed=seed, sqmodel='lya_simqso_model', balprob=0.25, add_dla=True, add_metals="all", add_lyb=True)
lyadata_all = LYA.read(mockfile=mockfile, healpixels=healpixel, nside=nside)
%time lyaflux_all, lyawave_all, lyatargets_all, lyatruth_all, lyaobjtruth_all = LYA.make_spectra(lyadata_all)

plot_subset(lyawave_all, lyaflux_all, lyatruth_all, lyaobjtruth_all, xlim=(3500, 5500), targname='LYA')
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
cd4ccf64249394e9ffaf2167aff38f22
Demonstrate the other extragalactic target classes: LRG, ELG, and BGS. For simplicity let's write a little wrapper script that does all the key steps.
def demo_mockmaker(Maker, seed=None, nrand=16, loc='right'):
    TARGET = Maker(seed=seed)

    log.info('Reading the mock catalog for {}s'.format(TARGET.objtype))
    tdata = TARGET.read(healpixels=healpixel, nside=nside)

    log.info('Generating {} random spectra.'.format(nrand))
    indx = rand.choice(len(tdata['RA']), np.min( (nrand, len(tdata['RA'])) ) )
    tflux, twave, ttargets, ttruth, tobjtruth = TARGET.make_spectra(tdata, indx=indx)

    log.info('Selecting targets')
    TARGET.select_targets(ttargets, ttruth)

    plot_subset(twave, tflux, ttruth, tobjtruth, loc=loc,
                targname=tdata['TARGET_NAME'], objtype=TARGET.objtype)
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
9bb64d877052738e459a6e9461b4f954
LRGs
from desitarget.mock.mockmaker import LRGMaker %time demo_mockmaker(LRGMaker, seed=seed, loc='left')
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
506523162c7bffe842ea712763599407
ELGs
from desitarget.mock.mockmaker import ELGMaker %time demo_mockmaker(ELGMaker, seed=seed, loc='left')
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
2959c2e028a526831e387578c34ed194
BGS
from desitarget.mock.mockmaker import BGSMaker %time demo_mockmaker(BGSMaker, seed=seed)
doc/nb/connecting-spectra-to-mocks.ipynb
desihub/desitarget
bsd-3-clause
628c09b8234468b9ba4011747c3c5e43