Columns: markdown, code, path, repo_name, license, hash
Fourier transform
from larch.xafs import xftf
xftf(feo, kweight=2, kmin=2, kmax=13.0, dk=5, kwindow='Kaiser-Bessel')
notebooks/larch.ipynb
maurov/xraysloth
bsd-3-clause
ae36cd43adc35fe9436785a930efa8f2
Basic plots can be done directly with matplotlib. The command %matplotlib inline permits in-line plots, that is, images are saved in the notebook. This means that the figures are visible when the notebook is open, even without execution.
%matplotlib inline
import matplotlib.pyplot as plt

plt.plot(feo.energy, feo.mu)

from larch.wxlib import plotlabels as plab

plt.plot(feo.k, feo.chi*feo.k**2)
plt.xlabel(plab.k)
plt.ylabel(plab.chikw.format(2))

plt.plot(feo.k, feo.chi*feo.k**2, label='chi(k)')
plt.plot(feo.k, feo.kwin, label='window')
plt.xlabel(plab.k)
plt.ylabel(plab.chikw.format(2))
plt.legend()
notebooks/larch.ipynb
maurov/xraysloth
bsd-3-clause
7c4d9a9ba1f2cdc4bb8769034dbcaa7a
A work-in-progress utility is available in sloth.utils.xafsplotter. It is simply a wrapper on top of the wonderful plt.subplots(). The goal of this utility is to produce nice in-line figures with standard layouts, ready for reporting your analysis to colleagues. With little effort/customization, those plots could be converted to publication-quality figures... Currently (September 2019), not much is available. To show the idea behind it, the previous plots are condensed into a single figure.
from sloth.utils.xafsplotter import XAFSPlotter

p = XAFSPlotter(ncols=2, nrows=2, dpi=150, figsize=(6, 4))
p.plot(feo.energy, feo.mu, label='raw', win=0)
p.plot(feo.energy, feo.i0, label='i0', win=0, side='right')
p.plot(feo.energy, feo.norm, label='norm', win=1)
p.plot(feo.k, feo.chi*feo.k**2, label='chi2', win=2)
p.plot(feo.k, feo.chi*feo.k**2, label='chi(k)', win=3)
p.plot(feo.k, feo.kwin, label='window', win=3)
p.subplots_adjust(top=0.9)

dir(feo)
notebooks/larch.ipynb
maurov/xraysloth
bsd-3-clause
7c4d6a2b94b0e95b5079a954a91bb0a4
Test interactive plot with wxmplot.interactive
With the following commands it is possible to open an external plotting window (based on wxPython) that permits interactive tasks.
from wxmplot.interactive import plot
plot(feo.energy, feo.mu, label='mu', xlabel='Energy', ylabel='mu', show_legend=True)
notebooks/larch.ipynb
maurov/xraysloth
bsd-3-clause
ea56a6986317600d5322066e192ec894
Building the graph
From Chris McCormick's blog, we can see the general structure of our network. The input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal. The idea here is to train the hidden layer weight matrix to find efficient representations for our words. This weight matrix is usually called the embedding matrix or embedding look-up table. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset. I'm going to have you build the graph in stages now. First off, create the inputs and labels placeholders like normal.
Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.
train_graph = tf.Graph()
with train_graph.as_default():
    inputs = tf.placeholder(tf.int32, [None, None])
    labels = tf.placeholder(tf.int32, [None, 1])
embeddings/Skip-Gram word2vec.ipynb
tkurfurst/deep-learning
mit
5b791e9d5b1c063da98b0ccec7f35fd3
Embedding
The embedding matrix has a size of the number of words by the number of neurons in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using one-hot encoded vectors for our inputs. When you do the matrix multiplication of the one-hot vector with the embedding matrix, you end up selecting only one row out of the entire matrix. You don't actually need to do the matrix multiplication; you just need to select the row in the embedding matrix that corresponds to the input word. Then the embedding matrix becomes a lookup table: you're looking up a vector the size of the hidden layer that represents the input word. <img src="assets/word2vec_weight_matrix_lookup_table.png" width=500>
Exercise: TensorFlow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, and it returns the rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with uniform random numbers between -1 and 1 using tf.random_uniform. This TensorFlow tutorial will help if you get stuck.
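As a quick aside (not part of the original notebook), here is a tiny NumPy illustration of the claim that multiplying a one-hot vector by the embedding matrix simply selects one row, which is why a lookup is enough. The numbers are arbitrary toy values.

import numpy as np

# Toy embedding matrix: 5-word vocabulary, 3 embedding features
embedding = np.arange(15).reshape(5, 3)

# One-hot vector for word index 2
one_hot = np.array([0, 0, 1, 0, 0])

print(np.dot(one_hot, embedding))  # matrix multiplication -> [6 7 8]
print(embedding[2])                # direct row lookup      -> [6 7 8]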
n_vocab = len(int_to_vocab)
n_embedding = 200  # Number of embedding features

with train_graph.as_default():
    # Create the embedding weight matrix, initialized uniformly in [-1, 1)
    embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1))
    # Use tf.nn.embedding_lookup to get the hidden layer output
    embed = tf.nn.embedding_lookup(embedding, inputs)
embeddings/Skip-Gram word2vec.ipynb
tkurfurst/deep-learning
mit
52cbc814ad86137492c53a4890dad738
Negative sampling For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss. Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
    # Create the softmax weight matrix (shape [n_vocab, n_embedding]) and biases
    softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1))
    softmax_b = tf.Variable(tf.zeros(n_vocab))

    # Calculate the loss using negative sampling
    loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels, embed,
                                      n_sampled, n_vocab,
                                      name='sampled_softmax_loss')

    cost = tf.reduce_mean(loss)
    optimizer = tf.train.AdamOptimizer().minimize(cost)
embeddings/Skip-Gram word2vec.ipynb
tkurfurst/deep-learning
mit
0d324f8c5f7009d04e7bbaf1fd303b69
Load data and define functions The rainfall and reference evaporation are read from file and truncated for the period 1980 - 2000. The rainfall and evaporation series are taken from KNMI station De Bilt. The reading of the data is done using Pastas. Heads are generated with a Gamma response function which is defined below.
rain = ps.read.read_knmi('data_notebook_5/etmgeg_260.txt', variables='RH').series
evap = ps.read.read_knmi('data_notebook_5/etmgeg_260.txt', variables='EV24').series
rain = rain['1980':'1999']
evap = evap['1980':'1999']

def gamma_tmax(A, n, a, cutoff=0.99):
    return gammaincinv(n, cutoff) * a

def gamma_step(A, n, a, cutoff=0.99):
    tmax = gamma_tmax(A, n, a, cutoff)
    t = np.arange(0, tmax, 1)
    s = A * gammainc(n, t / a)
    return s

def gamma_block(A, n, a, cutoff=0.99):
    # returns the gamma block response starting at t=0 with intervals of delt = 1
    s = gamma_step(A, n, a, cutoff)
    return np.append(s[0], s[1:] - s[:-1])
examples/notebooks/8_pastas_synthetic.ipynb
gwtsa/gwtsa
mit
7558f453cf7a998438ce3f6eeacb99cf
The Gamma response function requires three input arguments: A, n and a. The values for these parameters are defined along with the parameter d, the base groundwater level. The response function is created using the functions defined above.
Atrue = 800
ntrue = 1.1
atrue = 200
dtrue = 20

h = gamma_block(Atrue, ntrue, atrue) * 0.001
tmax = gamma_tmax(Atrue, ntrue, atrue)

plt.plot(h)
plt.xlabel('Time (days)')
plt.ylabel('Head response (m) due to 1 mm of rain in day 1')
plt.title('Gamma block response with tmax=' + str(int(tmax)));
examples/notebooks/8_pastas_synthetic.ipynb
gwtsa/gwtsa
mit
139ca1866a995962c8ed56227a998215
Create Pastas model
The next step is to create a Pastas model. The head generated using the Gamma response function is used as input for the Pastas model. A StressModel instance is created and added to the Pastas model. The StressModel instance takes the rainfall series as input, as well as the type of response function, in this case the Gamma response function (ps.Gamma). The Pastas model is solved without a noise model since there is no noise present in the data. The results of the Pastas model are plotted.
ml = ps.Model(head)
sm = ps.StressModel(rain, ps.Gamma, name='recharge', settings='prec')
ml.add_stressmodel(sm)
ml.solve(noise=False)
ml.plots.results();
examples/notebooks/8_pastas_synthetic.ipynb
gwtsa/gwtsa
mit
40e2bbaf1747a58fbdacead374be5af7
Differences Between Linear Classifier and Linear Regression
We start by loading a dataset that was created for this discussion and talk about the differences between linear regression and a linear classifier.
lc2_data = np.genfromtxt('./lc2_data.txt', delimiter=None)
X, Y = lc2_data[:, :-1], lc2_data[:, -1]

f, ax = plt.subplots(1, 2, figsize=(20, 8))

mask = Y == -1

ax[0].scatter(X[mask, 0], X[mask, 1], s=120, color='blue', marker='s', alpha=0.75)
ax[0].scatter(X[~mask, 0], X[~mask, 1], s=340, color='red', marker='*', alpha=0.75)
ax[0].set_xticklabels(ax[0].get_xticks(), fontsize=25)
ax[0].set_yticklabels(ax[0].get_yticks(), fontsize=25)

ax[1].scatter(X[:, 0], X[:, 1], s=120, color='black', alpha=0.75)
ax[1].set_xticklabels(ax[1].get_xticks(), fontsize=25)
ax[1].set_yticklabels(ax[1].get_yticks(), fontsize=25)

plt.show()
week3/lc_and_perceptron.ipynb
sameersingh/ml-discussions
apache-2.0
118416b77d7e3ae6d521e76dc86128ce
Some of the questions that were asked in class by me or by the students. Make sure you know how to answer all of them :)
- If it's a linear classifier, and the blue and red are the different classes, how many features do we have here? What would a classifier line look like if I plotted it here? Give me a real-life example.
- If it's a linear regression and you can ignore the colors, how many features are there? And a regression line (the simple f(x) = ax + c one)? Give me a real-life example.
- Can I treat this problem as a regression if I tell you that the Y value is now 0 if it's blue and 1 if it's red? How many features do we have now? What would the regression line look like? Give me a real-life example.
- My task is to answer 'See this new point? Should it be red or blue?' -- which one do I need?
- My task is now to answer 'What will be the value of a new point at 8.3?' -- which one do I need now? How about 'I know the value of Y is 4.7, what was the value of X?'
- So what would the test data look like for the classification problem? And how would it look for the regression problem?

Building a Classifier from the Ground Up
In the rest of the discussion we will show how to code a classifier from the ground up. This will be extremely useful not only for your homework assignment but also for future reference. Most ML code tends to look similar from model to model, so this will be reusable even in super complicated models.

Perceptron Algorithm
As a simple example we will use the Perceptron algorithm. We will build each part separately, showing how it works, and end by wrapping it all up in a classifier class that can be used with the mltools library. We will use a two-class Perceptron with classes ${-1, 1}$. In the discussion you can also see how to use the binary classes ${0, 1}$, and in the wiki page you can see a generalization to multiple classes. For an illustration of the algorithm you can watch this YouTube clip.

Decision Boundary and Classification
The Perceptron uses a decision boundary $\theta$ to compute a value for each point, and then decides on the class with a simple sign threshold. We'll start by computing the decision value for each point $x^j$: $$\theta x^j$$ Let's choose $j=90$ and define: $$\theta = \left[-6, 0.5, 1\right]$$
theta = np.array([-6., 0.5, 1.])
week3/lc_and_perceptron.ipynb
sameersingh/ml-discussions
apache-2.0
4260291024b7f3faae16381e0866fdbb
Notice the '.'s. This makes sure the values are floats and not integers, which can cause problems later down the line. $\theta$ has three entries: one corresponding to the constant (also known as the 'bias' or 'intercept') and two for the two features of X. So first we will add a constant column to all the X data. Do not use fpoly to do that; the behavior of that function is unexpected when there is more than one feature.
def add_const(X):
    return np.hstack([np.ones([X.shape[0], 1]), X])

Xconst = add_const(X)
x_j, y_j = Xconst[90], Y[90]
week3/lc_and_perceptron.ipynb
sameersingh/ml-discussions
apache-2.0
8ed0dcc6c08980c847c9c687b4686e9d
Response Value
The first step in the perceptron is to compute the response value. It is computed as the inner product $\theta x^j$. The simple, intuitive way to do that is to use a for loop.
x_theta = 0
for i in range(x_j.shape[0]):
    x_theta += x_j[i] * theta[i]

print x_theta
week3/lc_and_perceptron.ipynb
sameersingh/ml-discussions
apache-2.0
58a66bb28fbc5bf4bcadd8ec59d3eb32
This is a VERY inefficient way to do that. Luckily for us, numpy has the answer in the form of np.dot().
print np.dot(x_j, theta)
week3/lc_and_perceptron.ipynb
sameersingh/ml-discussions
apache-2.0
545a5ca66400d60d2181c6628c6075b5
Classification Decision
Now let's compute the classification decision $T[\theta x^j]$. One option is to use the np.sign method. On its own this is not a good solution because np.sign(0) = 0. One way of solving it is to add a tiny epsilon before taking the sign.
eps = 1e-200

def sign(vals):
    """Returns 1 if val >= 0 else -1"""
    return np.sign(vals + eps)
week3/lc_and_perceptron.ipynb
sameersingh/ml-discussions
apache-2.0
9dc9a533c9608f6229234e5c34e89d04
Predict Function
So now, with the decision value and our sign function, we can write the predict function.
def predict(x_j, theta):
    """Returns the class prediction of a single point x_j"""
    return sign(np.dot(x_j, theta))

print predict(x_j, theta)
week3/lc_and_perceptron.ipynb
sameersingh/ml-discussions
apache-2.0
d80d6585aa8c1328ff9800053abe1c63
Predict Multiple
During the discussion I brought up that some methods of computing the inner product (such as np.sum()) will not work for multiple points at the same time unless you take steps to make them work.
def predict_with_np_sum(X, theta):
    """Predicts the class value for multiple points or a single point
    at the same time.
    """
    X = np.atleast_2d(X)
    return np.sum(theta * X, axis=1)
week3/lc_and_perceptron.ipynb
sameersingh/ml-discussions
apache-2.0
53d9cf49f2481ce23e05027ffcd6b926
Computing the Prediction Error
Using the predict function, we can now compute the prediction error rate, built from the per-example error $$J^j = (y^j - \hat{y}^j)$$
def pred_err(X, Y, theta):
    """Predicts the class for X and returns the error rate."""
    Yhat = predict(X, theta)
    return np.mean(Yhat != Y)

print pred_err(x_j, y_j, theta)
week3/lc_and_perceptron.ipynb
sameersingh/ml-discussions
apache-2.0
c5152860144ac79b507ee3a4d93c3a2f
Learning Update
Using the error we can now do the update step of the learning algorithm: $$\theta = \theta + \alpha * (y^j - \hat{y}^j)x^j$$
a = 0.1
y_hat_j = predict(x_j, theta)
print theta + a * (y_j - y_hat_j) * x_j
week3/lc_and_perceptron.ipynb
sameersingh/ml-discussions
apache-2.0
4bec04f617da8231e652cd9270fcd6fe
Train Method
Using everything we have coded so far, we can now create the full train method.
def train(X, Y, a=0.01, stop_tol=1e-8, max_iter=1000):
    # Start by adding a const
    Xconst = add_const(X)
    m, n = Xconst.shape

    # Initializing theta
    theta = np.array([-6., 0.5, 1.])

    # The update loops
    J_err = [np.inf]
    for i in range(1, max_iter + 1):
        for j in range(m):
            x_j, y_j = Xconst[j], Y[j]
            y_hat_j = predict(x_j, theta)
            theta += a * (y_j - y_hat_j) * x_j

        curr_err = pred_err(Xconst, Y, theta)
        J_err.append(curr_err)

        if np.abs(J_err[-2] - J_err[-1]) < stop_tol:
            print 'Reached convergence after %d iterations. Prediction error is: %.3f' % (i, J_err[-1])
            break

    return theta

theta_trained = train(X, Y)
week3/lc_and_perceptron.ipynb
sameersingh/ml-discussions
apache-2.0
351fb8eb2ce29fbea30f4f4951a5718a
Creating a Perceptron Classifier
Now let's use all the code that we wrote and create a Python class Perceptron that can plug into the mltools package. In order to do that, the Perceptron class has to inherit from mltools.base.classifier. In case you haven't looked at the actual code in mltools, now would probably be the right time.
from mltools.base import classifier
week3/lc_and_perceptron.ipynb
sameersingh/ml-discussions
apache-2.0
ccf08dfbf60ff1516f5c0cd9026a1899
In order to create an object, we'll have to add self to all the methods.
class Perceptron(classifier):
    def __init__(self, theta=None):
        self.theta = theta

    def predict(self, X):
        """Returns the class prediction for either a single point or multiple points."""
        # Adding this here so it works with the plotClassify2D method.
        Xconst = np.atleast_2d(X)

        # Making sure it has the const; if not, adding it.
        if Xconst.shape[1] == self.theta.shape[0] - 1:
            Xconst = add_const(Xconst)

        return self.sign(np.dot(Xconst, self.theta))

    def sign(self, vals):
        """A sign version that breaks 0's as +1."""
        return np.sign(vals + 1e-200)

    def pred_err(self, X, Y):
        Yhat = self.predict(X)
        return np.mean(Yhat != Y)

    def train(self, X, Y, a=0.02, stop_tol=1e-8, max_iter=1000):
        # Start by adding a const
        Xconst = add_const(X)
        m, n = Xconst.shape

        # Making sure theta is initialized.
        if self.theta is None:
            self.theta = np.random.random(n)

        # The update loops
        J_err = [np.inf]
        for i in range(1, max_iter + 1):
            for j in range(m):
                x_j, y_j = Xconst[j], Y[j]
                y_hat_j = self.predict(x_j)
                self.theta += a * (y_j - y_hat_j) * x_j

            curr_err = self.pred_err(Xconst, Y)
            J_err.append(curr_err)

            if np.abs(J_err[-2] - J_err[-1]) < stop_tol:
                print 'Reached convergence after %d iterations. Prediction error is: %.3f' % (i, J_err[-1])
                break
week3/lc_and_perceptron.ipynb
sameersingh/ml-discussions
apache-2.0
ad3c3b6a47a6cc59a8fb9b2a70c0006c
Creating a Model, Training and Plotting Predictions
First let's create the model with some initialized theta and plot the decision boundaries. For the plotting we can use the mltools plotClassify2D !!! wowowowo!!!!
model = Perceptron()
model.theta = np.array([-6., 0.5, 1])
ml.plotClassify2D(model, X, Y)
week3/lc_and_perceptron.ipynb
sameersingh/ml-discussions
apache-2.0
7446d27c469a897ba453c91d6831a265
Next, let's actually train the model and plot the new decision boundary.
model.train(X, Y)
ml.plotClassify2D(model, X, Y)
week3/lc_and_perceptron.ipynb
sameersingh/ml-discussions
apache-2.0
96024987cc3366217e99e5e837fa3512
Gilles.py is the file that contains the important functions. We will go through it to understand the main differences between the deterministic and the stochastic solution, but first let's see some examples!
%run Gilles.py
ReAct/Python/ReAct_Notebook.ipynb
manulera/ModellingCourse
gpl-3.0
83d01c1b86fba07886d4cbd8835fec71
Here we can see some examples of the use of ReAct.
%run 'Example1_oscillations.py'
PrintPythonFile('Example1_oscillations.py')
ReAct/Python/ReAct_Notebook.ipynb
manulera/ModellingCourse
gpl-3.0
96fa55c26681d0a36c611de86e21a6f6
Is this only an oscillatory effect? If we change the number of molecules of A from 100 to 1000, what do we see? How could we quantify the relevance of these oscillations with respect to the equilibrium?
%run 'Example2_Ask4Oscillations.py'
PrintPythonFile('Example2_Ask4Oscillations.py')
ReAct/Python/ReAct_Notebook.ipynb
manulera/ModellingCourse
gpl-3.0
c8cc1557ad6efda5e89c3046f2a2b276
You can copy the content of the file into a new cell and change the values; explore how the parameters affect the outcome using the cell below.
# Initial conditions
user_input = ['A', 100,
              'B', 0]

# Constants (this is not necessary, they could be filled up already in the reaction tuple)
k = (12, 8)

# Reaction template ((stoch_1,reactant_1,stoch_2,reactant_2),(stoch_1,product_1,stoch_2,product_2),k)
reactions = (
    (1, 'A'), (1, 'B'), k[0],
    (1, 'B'), (1, 'A'), k[1],
)

# dt is used for the deterministic calculation
dt = 0.0001
t = np.arange(0, 4, dt)

(solution, (tgill, valsgill, _, _), rows, mode) = ReAct(user_input, reactions, t)
Gillesplot(solution, t, tgill, valsgill, rows, mode)
plt.show()
ReAct/Python/ReAct_Notebook.ipynb
manulera/ModellingCourse
gpl-3.0
82f19f5c87536030df98abdbae157ee7
Now, let's look at a perhaps more relevant situation for biologists, the already mentioned MAP kinase cascade. <img src="Images/miniMAP.png" style="width: 200px;"/> Kinase cascades are known for amplifying the signal: a minor change in the cell, for example a transient activation of a small number of receptors, is amplified by the cascade and results in major changes in the cell state. Have a look at the example below: do we see this effect? The first graph is a bit crowded, so we can choose to plot only the species most relevant for us. The second graph shows how the Map1K is strongly amplified. Explore how the parameters (initial concentrations and kinetic constants) affect the outcome of the response in the cell below, and try to find a link with the explained role of kinase cascades.
%run 'Example3_KyneticCascade.py'
PrintPythonFile('Example3_KyneticCascade.py')
ReAct/Python/ReAct_Notebook.ipynb
manulera/ModellingCourse
gpl-3.0
3958d7747806817403f347bf1e7c9e02
Explore how the parameters (initial concentrations and kinetic constants) affect the outcome of the response in the cell below. Try to find a link with the explained role of kinase cascades.
import numpy as np
from Gilles import *
import matplotlib.pyplot as plt

# Initial conditions
user_input = ['Rec', 10,
              '1M3', 10,
              '1M3P', 0,
              '1M2', 20,
              '1M2P', 0,
              '1M1', 30,
              '1M1P', 0]

# Constants (this is not necessary, they could be filled up already in the reaction tuple)
k = (2, 0.05, 1, 0.5, 1, 0.5, 1)

# Reaction template ((stoch_1,reactant_1,stoch_2,reactant_2),(stoch_1,product_1,stoch_2,product_2),k)
reactions = (
    (1, 'Rec'), (), k[0],
    (-1, 'Rec', 1, '1M3'), (1, '1M3P'), k[1],
    (1, '1M3P'), (1, '1M3'), k[2],
    (-1, '1M3P', 1, '1M2'), (1, '1M2P'), k[3],
    (1, '1M2P'), (1, '1M2'), k[4],
    (-1, '1M2P', 1, '1M1'), (1, '1M1P'), k[5],
    (1, '1M1P'), (1, '1M1'), k[6],
)

# dt is used for the deterministic calculation
dt = 0.00001
t = np.arange(0, 10, dt)

(solution, (tgill, valsgill, _, _), rows, mode) = ReAct(user_input, reactions, t)
Gillesplot(solution, t, tgill, valsgill, rows, mode)
plt.figure()
Gillesplot(solution, t, tgill, valsgill, rows, mode, ['Rec', '1M3P', '1M2P', '1M1P'])
plt.show()
ReAct/Python/ReAct_Notebook.ipynb
manulera/ModellingCourse
gpl-3.0
93d1cf7ac7dbfba173944ccba5f57536
The predator-prey model
Also known as the Lotka–Volterra equations: <img src="Images/Lotka_volterra.svg" style="width: 150px;"/> where x is the number of prey and y is the number of predators. Before looking at the next cell, how would you write these equations as chemical reactions? What does each reaction represent?
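For reference, in case the linked image does not render here, the standard textbook form of the Lotka–Volterra system is $$\frac{dx}{dt} = \alpha x - \beta x y, \qquad \frac{dy}{dt} = \delta x y - \gamma y$$ where $\alpha$, $\beta$, $\gamma$ and $\delta$ are positive rate constants; the symbol names used in Example_PredatorPray.py may differ.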
%run 'Example_PredatorPray.py'
PrintPythonFile('Example_PredatorPray.py')
ReAct/Python/ReAct_Notebook.ipynb
manulera/ModellingCourse
gpl-3.0
8a3248ef0c147334e5a5dcbf4a386cf5
2. Create Groups (named variables that hold your replicates of each sample)
You must assign your raw files into experimental groups for analysis. These are used for downstream statistics and for selecting specific groups when filtering to subsets of files for analysis (ex. just pos or just neg). The groups are created from the common file header and the unique group names.
The convention our lab group uses for filenames is as follows:
DATE_NORTHENLABINITIALS_COLLABINITIALS_PROJ_EXP_SAMPSET_SYSTEM_COLUMN-method_SERIAL_POL_ACQ_SAMPLENUMBER_SAMPLEGROUP_REP_OPTIONAL_SEQ
Ex.: 20180105_SK_AD_ENIGMA_PseudoInt_R2ADec2017_QE119_50454_123456_POS_MSMS_001_Psyringae-R2A-30C-20hr_Rep01_NA_Seq001.raw
The common header consists of fields 0-10: DATE_NORTHENLABINITIALS_COLLABINITIALS_PROJ_EXP_SAMPSET_SYSTEM_COLUMN-method_SERIAL_POL_ACQ
The sample group name is commonly field #12 (between underscore 11 and 12, 0-indexed); see the quick check below.
Find your files
On the first line of the block below, set the 'experiment' and 'name' variables to find your files. These fields require wildcards for partial string searches. 'experiment' is the folder name within global/project/projectdirs/metatlas/raw_data that will be emailed to you when the files are uploaded to NERSC. You can also look in the raw_data directory for the NERSC user who uploaded your files; your experiment folder should be in there. 'name' is a string that will match a subset of your files within that folder.
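As a quick sanity check on the field numbering (a throwaway snippet, not part of the Metatlas workflow itself), you can split the example filename above on underscores and inspect the 0-indexed fields:

fname = '20180105_SK_AD_ENIGMA_PseudoInt_R2ADec2017_QE119_50454_123456_POS_MSMS_001_Psyringae-R2A-30C-20hr_Rep01_NA_Seq001.raw'
tokens = fname.split('.')[0].split('_')

print('_'.join(tokens[:11]))  # common header (fields 0-10)
print(tokens[9])              # polarity field: 'POS'
print(tokens[12])             # sample group field: 'Psyringae-R2A-30C-20hr'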
files = dp.get_metatlas_files(experiment='%ENTERSTRING%', name='%ENTERSTRING%', most_recent=True)
# ^ edit the text string in experiment and name fields

df = metob.to_dataframe(files)
df[['experiment', 'name', 'username', 'acquisition_time']]

len(files)
notebooks/reference/Workflow_Notebook_Metatlas_Stable_v0.1.0_20210303.ipynb
metabolite-atlas/metatlas
bsd-3-clause
edbfe8b7a057b42a8de1c2d808dc4cf6
OPTION A: Automated Group Maker
This will attempt to create groups in an automated fashion (rather than filling out a spreadsheet with a list of files and group names). If your files are all in one folder at NERSC, you can use this option. If not, use Option B below. A long group name consisting of the common header plus either a controlled-vocab value or field #12, along with a short group name (just the controlled-vocab value or field #12), will be stored in a local variable. The short group names can be used on plots.
STEP 1: View the groups. Pick an experiment folder to look for files in with the metob.retrieve function. Enter a controlled vocabulary for control files to group selected files together when the control string may be in a different field (not #12) or appear as a randomly placed substring within a field (ex. if 'InjBl' is included in your controlled vocab list, files like InjBl-MeOH and StartInjBl will group together). If your group name is not between underscores 11 and 12, you can adjust those values in the split commands below. All other (non-controlled-vocab) groups will be created from that field.
STEP 2: Create the groups variable after checking the output from STEP 1.
STEP 3:
Option A: If everything looks fine with the group names and short names, store the groups once you know you have files in the correct groups by running and checking the output of STEPS 1 and 2.
Option B (optional): If you would like to edit the groups, uncomment options B-I and B-II. Run Option B-I to export a prefilled tab infosheet. Edit the file and then run Option B-II to import the new groups and save them.
#STEP 1: View the groups
files = metob.retrieve('lcmsruns', experiment='%ENTERSTRING%', username='*')
controlled_vocab = ['QC', 'InjBl', 'ISTD']  # add _ to beginning. It will be stripped if at beginning
version_identifier = 'vs1'
exclude_files = []  # Exclude files containing a substring (list) Eg., ['peas']
file_dict = {}
groups_dict = {}
for f in files:
    if not any(map(f.name.__contains__, exclude_files)):
        k = f.name.split('.')[0]
        # get index if any controlled vocab in filename
        indices = [i for i, s in enumerate(controlled_vocab) if s.lower() in k.lower()]
        prefix = '_'.join(k.split('_')[:11])
        if len(indices) > 0:
            short_name = controlled_vocab[indices[0]].lstrip('_')
            group_name = '%s_%s_%s' % (prefix, version_identifier, short_name)
            short_name = k.split('_')[9] + '_' + short_name  # Prepending POL to short_name
        else:
            short_name = k.split('_')[12]
            group_name = '%s_%s_%s' % (prefix, version_identifier, short_name)
            short_name = k.split('_')[9] + '_' + k.split('_')[12]  # Prepending POL to short_name
        file_dict[k] = {'file': f, 'group': group_name, 'short_name': short_name}
        groups_dict[group_name] = {'items': [], 'name': group_name, 'short_name': short_name}

df = pd.DataFrame(file_dict).T
df.index.name = 'filename'
df.reset_index(inplace=True)  # ['group'].unique()
df.drop(columns=['file'], inplace=True)
for ug in groups_dict.keys():
    for file_key, file_value in file_dict.items():
        if file_value['group'] == ug:
            groups_dict[ug]['items'].append(file_value['file'])
df.head(100)

#STEP 2: create the groups variable, if the above looks OK
groups = []
for group_key, group_values in groups_dict.items():
    g = metob.Group(name=group_key, items=group_values['items'], short_name=group_values['short_name'])
    groups.append(g)
    for item in g.items:
        print(g.name, g.short_name, item.name)
    print('')

# STEP 3 Option A: store the groups variable content in the DB (currently only the long group name is stored)
metob.store(groups)

## STEP 3 Option B-I: OPTIONAL: Export groups to csv file for editing (filename, short_name, group, description)
#dp.make_prefilled_fileinfo_sheet(groups, os.path.join(output_dir, 'prefilled_fileinfo.tab'))

## STEP 3 Option B-II: Import groups from csv file after editing the prefilled_fileinfo.tab
#groups = dp.make_groups_from_fileinfo_sheet(os.path.join(output_dir, 'prefilled_fileinfo.tab'), filetype='tab', store=True)
notebooks/reference/Workflow_Notebook_Metatlas_Stable_v0.1.0_20210303.ipynb
metabolite-atlas/metatlas
bsd-3-clause
455cfc3bbdfcac704644d05494f2a718
Make data frame of short filenames and samplenames
Uncomment the below 2 blocks to make short file names and sample names.<br> This creates a dataframe and a csv file which can be edited, exported and imported.
# Make short_filename and short_samplename
files = metob.retrieve('lcmsruns', experiment='%ENTERSTRING%', username='*')
short_filename_delim_ids = [0, 2, 4, 5, 7, 9, 14]
short_samplename_delim_ids = [9, 12, 13, 14]
short_names_df = pd.DataFrame(columns=['sample_treatment', 'short_filename', 'short_samplename'])
ctr = 0
for f in files:
    short_filename = []
    short_samplename = []
    tokens = f.name.split('.')[0].split('_')
    for id in short_filename_delim_ids:
        short_filename.append(str(tokens[id]))
    for id in short_samplename_delim_ids:
        short_samplename.append(str(tokens[id]))
    short_filename = "_".join(short_filename)
    short_samplename = "_".join(short_samplename)
    short_names_df.loc[ctr, 'full_filename'] = f.name.split('.')[0]
    short_names_df.loc[ctr, 'sample_treatment'] = str(tokens[12])  # delim 12
    short_names_df.loc[ctr, 'short_filename'] = short_filename
    short_names_df.loc[ctr, 'short_samplename'] = short_samplename
    short_names_df.loc[ctr, 'last_modified'] = pd.to_datetime(f.last_modified, unit='s')
    ctr += 1
short_names_df.sort_values(by='last_modified', inplace=True)
short_names_df.drop(columns=['last_modified'], inplace=True)
short_names_df.drop_duplicates(subset=['full_filename'], keep='last', inplace=True)
short_names_df.set_index('full_filename', inplace=True)
short_names_df.to_csv(os.path.join(output_dir, 'short_names.csv'), sep=',', index=True)

# Optional: import edited short_names.csv
short_names_df = pd.read_csv(os.path.join(output_dir, 'short_names.csv'), sep=',', index_col='full_filename')
notebooks/reference/Workflow_Notebook_Metatlas_Stable_v0.1.0_20210303.ipynb
metabolite-atlas/metatlas
bsd-3-clause
078d0dbeb63c016cabb065fca5571a17
3. Select groups of files to operate on
Here, you will assign your database groups to a local variable which will be used downstream in the notebook for analyzing your data with an atlas. In the block below, fill out the fields for name, include_list and exclude_list using text strings from the group names you created in the previous step. The include/exclude lists do not need wildcards. Name is a string unique to all of your groups (ex. fields 0-11 of your filenames). Typically, you will run one polarity at a time.
polarity = 'POS'  # IMPORTANT: Please make sure you set the correct polarity for the analysis

groups = dp.select_groups_for_analysis(name='%ENTERSEARCHSTRING%',  # <- edit text search string here
                                       most_recent=True,
                                       remove_empty=True,
                                       include_list=[],
                                       exclude_list=['NEG', 'QC', 'InjBl'])  # ex. ['QC','Blank']
print("sorted groups")
groups = sorted(groups, key=operator.attrgetter('name'))
for i, a in enumerate(groups):
    print(i, a.name)

# to view metadata about your groups, run the block below
metob.to_dataframe(groups)
notebooks/reference/Workflow_Notebook_Metatlas_Stable_v0.1.0_20210303.ipynb
metabolite-atlas/metatlas
bsd-3-clause
b657e49d691e88465c3b340af2641004
4. Create new Atlas entries in the Metatlas database from a csv file
QC, IS, and EMA template atlases are available on the Google Drive.
1. Create your atlas as a csv file and check that it looks correct (has all the correct headers, no blank values in rows, and all columns of the correct data type).
2. Save it with the type of atlas (EMA, QC or IS), your initials, the experiment name, the polarity, and the version or timestamp.
3. Upload it to your NERSC project directory (the one you named above). (If it doesn't work, double check that your file permissions are set to at least rw-rw----.)
4. Run the blocks below to create the DB entries for negative and positive mode atlases.
WARNING: Don't run this block over and over again - it will create multiple new DB entries with the same atlas name.
Required atlas file headers: inchi_key,label,rt_min,rt_max,rt_peak,mz,mz_tolerance,adduct,polarity,identification_notes
Values in rows must be completed for all fields except inchi_key (leaving this blank will not allow you to perform MSMS matching below) and identification_notes.
INFO: store=True will register your atlas in the database. If you are not sure whether your atlas structure is correct, set store=False the first time you run the block to check if you get an error. If there is no error, rerun it with store=True.
NEGATIVE MODE ATLAS UPLOAD
atlasfilename = '%ENTERSTRING%'  # <- enter the exact name of your csv file without the file extension

names = dp.make_atlas_from_spreadsheet('%s%s%s' % (pathtoatlas, atlasfilename, '.csv'),  # <- DO NOT EDIT THIS LINE
                                       atlasfilename,
                                       filetype='csv',
                                       sheetname='',
                                       polarity='negative',
                                       store=True,
                                       mz_tolerance=12)
notebooks/reference/Workflow_Notebook_Metatlas_Stable_v0.1.0_20210303.ipynb
metabolite-atlas/metatlas
bsd-3-clause
49430325bca84a39115fcdfb58805039
POSITIVE MODE ATLAS UPLOAD
atlasfilename = '%ENTERSTRING%'  # <- enter the exact name of your csv file without the file extension

names = dp.make_atlas_from_spreadsheet('%s%s%s' % (pathtoatlas, atlasfilename, '.csv'),  # <- DO NOT EDIT THIS LINE
                                       atlasfilename,
                                       filetype='csv',
                                       sheetname='',
                                       polarity='positive',
                                       store=True,
                                       mz_tolerance=12)
notebooks/reference/Workflow_Notebook_Metatlas_Stable_v0.1.0_20210303.ipynb
metabolite-atlas/metatlas
bsd-3-clause
259f578197351ee8192db3f3dde35359
5. Select Atlas to use
The first block will retrieve a list of atlases matching the 'name' string that you enter. Also, you must enter your username. The next block will select one from the list using the index number. Make sure to enter the index number for the atlas you want to use for your analysis by setting it in this line: my_atlas = atlases[0]
atlases = metob.retrieve('Atlas', name='%ENTERSTRING%', username='YOUR-NERSC-USERNAME')
names = []
for i, a in enumerate(atlases):
    print(i, a.name, pd.to_datetime(a.last_modified, unit='s'))  # len(a.compound_identifications)

my_atlas = atlases[-1]
atlas_df = ma_data.make_atlas_df(my_atlas)
atlas_df['label'] = [cid.name for cid in my_atlas.compound_identifications]
print(my_atlas.name)
metob.to_dataframe([my_atlas])
# the first line of the output will show the dimensions of the atlas dataframe

# OPTIONAL: to view your atlas, run this block
print(my_atlas.name)
atlas_df
notebooks/reference/Workflow_Notebook_Metatlas_Stable_v0.1.0_20210303.ipynb
metabolite-atlas/metatlas
bsd-3-clause
8b5ae0cff57afa253bcc1069c5feb3cb
6. Get EICs and MSMS for all files in your groups, using all compounds in your atlas.
This block builds the metatlas_dataset variable. This holds your EIC data (mz, rt and intensity values within your mz and rt ranges). There are two parameters that you will need to edit: extra_time and extra_mz.
extra_time will collect mz, intensity and RT values from outside of your atlas-defined min and max rt values. For example, if your rt_min = 1.0 and rt_max = 2.0 and you set extra_time to 0.3, then your new rt range will be 0.7 to 2.3. This is helpful for checking if you have nearby peaks at the same m/z. Change the value in "extra_time = 0.0" to something like 0.5 to 1.0 for the first EMA runthrough on your files. This will take longer but collects MSMS outside your retention windows, which allows you to check the MSMS of nearby peaks before adjusting your rt bounds around the correct peak. On your final runthrough, set extra_time to 0.
extra_mz should only be used for troubleshooting, and should almost always be set to 0.0. Keep it at 0 unless you believe you have poor mass accuracy during your run. If you need to troubleshoot a low-mz compound you could potentially use this value to run it back through with a larger mz error window than what was specified in your atlas (ppm tolerance). Another way to address this issue is by changing the mz_tolerance values in your atlas. Before changing this value, you should check in with an experienced Metatlas lab member to discuss when and how to use it.
all_files = []
for my_group in groups:
    for my_file in my_group.items:
        extra_time = 0.75  # NOTE: 0.75 for the first run, 0.5 for final
        extra_mz = 0.00
        all_files.append((my_file, my_group, atlas_df, my_atlas, extra_time, extra_mz))

pool = mp.Pool(processes=min(4, len(all_files)))
t0 = time.time()
metatlas_dataset = pool.map(ma_data.get_data_for_atlas_df_and_file, all_files)
pool.close()
pool.terminate()
print(time.time() - t0)

# Make data sources tables (atlas_metadata.tab, groups_metadata.tab, groups.tab and [atlasname]_originalatlas.tab within data_sources subfolder)
ma_data.make_data_sources_tables(groups, my_atlas, output_dir)
notebooks/reference/Workflow_Notebook_Metatlas_Stable_v0.1.0_20210303.ipynb
metabolite-atlas/metatlas
bsd-3-clause
74065532f2ef9d950c940c33c457cc20
6b Optional: Filter atlas for compounds with no or low signals
Uncomment the below 3 blocks to filter the atlas. Please ensure that the correct polarity is used for the atlases.
# dp = reload(dp)
# num_data_points_passing = 5
# peak_height_passing = 4e5
# atlas_df_passing = dp.filter_atlas(atlas_df=atlas_df, input_dataset=metatlas_dataset, num_data_points_passing=num_data_points_passing, peak_height_passing=peak_height_passing)
# print("# Compounds in Atlas: "+str(len(atlas_df)))
# print("# Compounds passing filter: "+str(len(atlas_df_passing)))
notebooks/reference/Workflow_Notebook_Metatlas_Stable_v0.1.0_20210303.ipynb
metabolite-atlas/metatlas
bsd-3-clause
863f992f822fe8f56c69b3fc1b95b3c4
Create new atlas and store in database
This block creates a filtered atlas with a new name and automatically selects that atlas for processing. Make sure to use this atlas for downstream analyses. (NOTE: If you restart the kernel or come back to the analysis, you need to reselect this newly created filtered atlas for processing.)
# atlas_passing = my_atlas.name+'_filteredby-datapnts'+str(num_data_points_passing)+'-pkht'+str(peak_height_passing)
# myAtlas_passing = dp.make_atlas_from_spreadsheet(atlas_df_passing,
#                                                  atlas_passing,
#                                                  filetype='dataframe',
#                                                  sheetname='',
#                                                  polarity='positive',
#                                                  store=True,
#                                                  mz_tolerance=12)
# atlases = dp.get_metatlas_atlas(name=atlas_passing, do_print=True, most_recent=True)
# myAtlas = atlases[-1]
# atlas_df = ma_data.make_atlas_df(myAtlas)
# atlas_df['label'] = [cid.name for cid in myAtlas.compound_identifications]
# print(myAtlas.name)
# print(myAtlas.username)
# metob.to_dataframe([myAtlas])
#
# all_files = []
# for my_group in groups:
#     for my_file in my_group.items:
#         all_files.append((my_file, my_group, atlas_df, myAtlas))
# pool = mp.Pool(processes=min(4, len(all_files)))
# t0 = time.time()
# metatlas_dataset = pool.map(ma_data.get_data_for_atlas_df_and_file, all_files)
# pool.close()
# pool.terminate()
# # If your code crashes here, make sure to terminate any processes left open.
# # print(time.time() - t0)
notebooks/reference/Workflow_Notebook_Metatlas_Stable_v0.1.0_20210303.ipynb
metabolite-atlas/metatlas
bsd-3-clause
1c0e7d23c223ab3bcbbe19e2afd65fdc
One of the two blocks below builds the hits variable. This holds your MSMS spectra (from within your mz and rt ranges, and within the extra time indicated above). There are two options for generating the hits variable:
1. Block A: use when your files have MSMS. It creates the hits variable and also saves a binary (pickled) serialized hits file to the output directory.
2. Block B: only run this if your files were collected in MS1 mode.
3. If you have already run Block A and then the kernel dies, you can skip Block A and directly unpickle the binary hits file from the output directory: skip Block A, uncomment the Optional block and run it.
## BLOCK A
import warnings; warnings.simplefilter('ignore')
t0 = time.time()

hits = dp.get_msms_hits(metatlas_dataset, extra_time=True, keep_nonmatches=True, frag_mz_tolerance=0.01,
                        ref_loc='/global/project/projectdirs/metatlas/projects/spectral_libraries/msms_refs_v3.tab')

pickle.dump(hits, open(os.path.join(output_dir, polarity+'_hits.pkl'), "wb"))

print(time.time() - t0)
print('%s%s' % (len(hits), ' <- total number of MSMS spectra found in your files'))

## BLOCK B (uncomment lines below to run this. Only use when all data files are MS1)
#hits = pd.DataFrame([], columns=['database', 'id', 'file_name', 'msms_scan', u'score', u'num_matches', u'msv_query_aligned', u'msv_ref_aligned', u'name', u'adduct', u'inchi_key', u'precursor_mz', u'measured_precursor_mz'])
#hits.set_index(['database', 'id', 'file_name', 'msms_scan'], inplace=True)

# Optional: If you already have a pickled hits file and do not need to run get_msms_hits again, uncomment this block
# hits = pickle.load(open(os.path.join(output_dir, polarity+'_hits.pkl'), "rb"))
notebooks/reference/Workflow_Notebook_Metatlas_Stable_v0.1.0_20210303.ipynb
metabolite-atlas/metatlas
bsd-3-clause
278cd32d50e3f4372d62d973cdc58175
7. Adjust Retention Times.
This block creates an interactive plot. The top panel displays MSMS from within the two green RT bounds selected below (rt min and max, initially set in the atlas). When the database holds reference spectra, mirror plots are generated with the reference spectra inverted below the sample spectra. The lower panel displays the EICs overlaid for all of the files in your selected groups. You can highlight your groups with different colors. It is recommended that you do this, at least, for your extraction blank (or, if not available, use a solvent injection blank). This plot also displays radio buttons that can be interactively selected; the values will be exported in your final identifications table and in your atlas export. Use these to mark peak/MSMS quality.
How to use:
STEP 1: Set peak flag radio buttons.
Option A (custom flags): fill out the peak flags list (a list of strings), e.g. peak_flag_list = ('A','B'); some recommendations are below.
Option B (default flags): comment out the custom peak_flag_list line and uncomment the default peak_flags = "". Flags default to: keep, remove, unresolvable isomers, check.
STEP 2: Set EIC colors.
Option A (custom EIC colors): fill out the colorlist in the format colorlist = [['color1nameorhexadec','partialgroupstring1'], ['color2nameorhexadec','partialgroupstring2']]. You can add more comma-delimited colors/groups as needed. These are partial strings that match your file names (not your group names). The order they are listed in is the order they are displayed in the overlays (first is front, last is back). Named colors available in matplotlib are listed at https://matplotlib.org/3.1.0/gallery/color/named_colors.html, or use hexadecimal values such as '#000000'.
Option B (default EIC colors): comment out the custom colorlist lines and uncomment the default colorlist = "". Colors all default to black.
Use the right/left keys on your keyboard to cycle through compounds in your atlas. Use the up/down keys to cycle through MSMS spectra within the RT bounds of the lower plot. Use the horizontal rt min and rt max bars below the plots to adjust the RT bounds around your peak. If there are multiple peaks, select one at a time and then press up/down to update the MSMS available in that new RT range. If necessary, evaluate your data in an external program such as MZmine to make sure you are selecting the correct peak.
TIPS: use compound_idx = 0 in STEP 3 to change to a different compound in your atlas using the index number. If your plot does not fit in your browser window, adjust the height and width values. Use alpha to change the transparency of the lines; this is a value from 0 (transparent) to 1 (opaque). DO NOT change your RT theoretical peak (the purple line). It is locked from editing (unless you change a hidden parameter) and should only be changed in special cases. The measured retention times of your peaks will be calculated and exported in your output files; these will be compared with the RT theoreticals and used in your evidence-of-identification table.
### STEP 1: Set the peak flag radio buttons using one of the two lines below, for custom flags or default flags
import warnings; warnings.simplefilter('ignore')

peak_flag_list = ('',
                  'L1+ - 1 pk, good RT&MSMS',
                  'L1+ - known isomer overlap',
                  'L1+ - 1 pk, good RT, MSMS ok (coisolated mz/partial match/low int)',
                  'L1+ - 1 pk, good RT&MSMS from external library',
                  'L1 - 1 pk, correct RT, no MSMS or int too low for matching',
                  'L1 - 1 pk, good RT, very low intensity/poor pk shape',
                  'L2 put comp',
                  'L3 putative class',
                  'Remove - background/noise',
                  'Remove - bad EMA MSMS',
                  'Remove - bad MSMS NIST/MONA/Metlin')
msms_flags_list = ""
#peak_flag_list = ""  # this will default to ('keep','remove','unresolvable isomers','poor peak shape')

### STEP 2: Set the EIC line colors using one of the two lines below, for custom colors or default
colorlist = [['red', 'ExCtrl'],
             ['green', 'TxCtrl'],
             ['blue', 'InjBl']]
#colorlist = ""  # this will default to black

### STEP 3
a = dp.adjust_rt_for_selected_compound(metatlas_dataset, msms_hits=hits, peak_flags=peak_flag_list, msms_flags=msms_flags_list,
                                       color_me=colorlist, compound_idx=0, alpha=0.5, width=15, height=4.5)
notebooks/reference/Workflow_Notebook_Metatlas_Stable_v0.1.0_20210303.ipynb
metabolite-atlas/metatlas
bsd-3-clause
8150607140ba0bf9628c7df38d33f177
8. Create filtered atlas excluding compounds marked removed
Re-run the following before filtering the atlas:
1. Get Groups (include InjBl)
2. Get Atlas
3. Get Data
4. Get MSMS Hits
dp = reload(dp)
(atlas_kept, atlas_removed) = dp.filter_by_remove(atlas_df, metatlas_dataset)
print("# Compounds Total: " + str(len(atlas_df)))
print("# Compounds Kept: " + str(len(atlas_kept)))
print("# Compounds Removed: " + str(len(atlas_removed)))

atlasfilename = my_atlas.name + '_kept'  # <- enter the name of the atlas to be stored

names = dp.make_atlas_from_spreadsheet(atlas_kept,
                                       atlasfilename,  # <- DO NOT EDIT THIS LINE
                                       filetype='dataframe',
                                       sheetname='',
                                       polarity='positive',
                                       store=True,
                                       mz_tolerance=12)
notebooks/reference/Workflow_Notebook_Metatlas_Stable_v0.1.0_20210303.ipynb
metabolite-atlas/metatlas
bsd-3-clause
7273100a579edaa3ff9175f67abba55d
Re-run the following before filtering atlas:
1. Restart kernel
2. Get Groups
3. Get Atlas (look for the *_kept atlas)
4. Get Data
5. Get MSMS Hits

9. Export results files
Export Atlas to a Spreadsheet
The peak flags that you set and selected from the rt adjuster radio buttons will be saved in a column called id_notes.
atlas_identifications = dp.export_atlas_to_spreadsheet(my_atlas, os.path.join(output_dir, '%s_%s%s.csv' % (polarity, my_atlas.name, "export")))
print(my_atlas.name)
notebooks/reference/Workflow_Notebook_Metatlas_Stable_v0.1.0_20210303.ipynb
metabolite-atlas/metatlas
bsd-3-clause
d64aa8457bdd084a6f0e5b0a9e10ffcb
Export MSMS match scores, stats sheets, and final identification table
This block creates a number of files:
- compound_scores.csv
- stats_table.tab
- filtered and unfiltered peak heights, areas, msms scores, mz centroid, mz ppm error, num of fragment matches, rt delta, rt peak
- a final identification sheet that is formatted for use as a supplemental table for manuscript submission. You will need to manually complete some columns. Please discuss with Ben, Katherine, Daniel or Suzie before using it for the first time.
The kwargs below set the filtering cutoffs for the parameters indicated.
kwargs = {'min_intensity': 1e4,  # strict = 1e5, loose = 1e3
          'rt_tolerance': .5,  # >= shift of median RT across all files for given compound to reference
          'mz_tolerance': 20,  # strict = 5, loose = 25; >= ppm of median mz across all files for given compound relative to reference
          'min_msms_score': .6, 'allow_no_msms': True,  # strict = 0.6, loose = 0.3; <= highest compound dot-product score across all files for given compound relative to reference
          'min_num_frag_matches': 1, 'min_relative_frag_intensity': .001}  # strict = 3 and 0.1, loose = 1 and 0.01; number of matching mzs when calculating max_msms_score and ratio of second highest to first highest intensity of matching sample mzs

scores_df = fa.make_scores_df(metatlas_dataset, hits)
scores_df['passing'] = fa.test_scores_df(scores_df, **kwargs)

pass_atlas_df, fail_atlas_df, pass_dataset, fail_dataset = fa.filter_atlas_and_dataset(scores_df, atlas_df, metatlas_dataset, column='passing')

fa.make_stats_table(input_dataset=metatlas_dataset, msms_hits=hits, output_loc=output_dir, min_peak_height=1e5, use_labels=True,
                    min_msms_score=0.01, min_num_frag_matches=1, include_lcmsruns=[], exclude_lcmsruns=['QC'], polarity=polarity)

scores_df.to_csv(os.path.join(output_dir, 'stats_tables', polarity+'_compound_scores.csv'))
notebooks/reference/Workflow_Notebook_Metatlas_Stable_v0.1.0_20210303.ipynb
metabolite-atlas/metatlas
bsd-3-clause
089908ab6c441e2faa1f5b34bf7caa54
Export EIC chromatograms as individual pdfs for each compound
There are three options for formatting your EIC output using the "group =" line below:
- 'page' will print each sample group on a new page of a pdf file
- 'index' will label each group with a letter
- None will print all of the groups on one page with very small subplot labels
The Y axis scale can be shared across all files using share_y = True, or set to the max within each file using share_y = False.
To use short names for plots, short_names_df should be provided as input. Additionally, the header column to be used for short names should be provided as follows (short_names_df=short_names_df, short_names_header='short_samplename'). Header options are sample_treatment, short_filename, short_samplename. These are optional parameters.
group = 'index'  # 'page' or 'index' or None
save = True
share_y = True

dp.make_chromatograms(input_dataset=metatlas_dataset, include_lcmsruns=[], exclude_lcmsruns=['InjBl', 'QC', 'Blank', 'blank'],
                      group=group, share_y=share_y, save=save, output_loc=output_dir,
                      short_names_df=short_names_df, short_names_header='short_samplename', polarity=polarity)
notebooks/reference/Workflow_Notebook_Metatlas_Stable_v0.1.0_20210303.ipynb
metabolite-atlas/metatlas
bsd-3-clause
37b04e5fd9474e85099d33bbf15d47f9
Export MSMS mirror plots as individual pdfs for each compound
use_labels = True will use the compound names you provided in your atlas; if you set it to False, the compounds will be named with the first synonym available from PubChem, which could be a common name, IUPAC name, CAS number, vendor part number, etc. The include and exclude lists will match partial strings in filenames; do not use wildcards. If short_names_df is provided as input, short_samplename is used for plots.
dp.make_identification_figure_v2(input_dataset=metatlas_dataset, msms_hits=hits, use_labels=True,
                                 include_lcmsruns=[], exclude_lcmsruns=['InjBl', 'QC', 'Blank', 'blank'],
                                 output_loc=output_dir, short_names_df=short_names_df, polarity=polarity)
notebooks/reference/Workflow_Notebook_Metatlas_Stable_v0.1.0_20210303.ipynb
metabolite-atlas/metatlas
bsd-3-clause
a6be63f3709506f4087a5923bff51235
Table of Contents
1.- About Optimization
2.- Time Profiling
3.- Memory Profiling
4.- Application: K-means Clustering Algorithm
<div id='about' />
1.- About Optimization
"The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil (or at least most of it) in programming." Donald Knuth.
Optimizing code prematurely is generally considered a bad practice. Code optimization should only be conducted when it's really needed, and we should know exactly where we need to optimize our code. Typically the majority of the execution time is spent in a relatively small part of the code. Optimization should never be done without preliminary profiling.
<div id='time' />
2.- Time Profiling
2.1- Time Benchmarking: timeit
The %timeit magic and the %%timeit cell magic allow you to quickly evaluate (benchmark) the time taken by one or several Python statements. For the full list of options run %timeit?. Some useful options:
- n: Execute the given statement <N> times in a loop. If this value is not given, a fitting value is chosen.
- r: Repeat the loop iteration <R> times and take the best result. Default: 3.
- t: Use time.time to measure the time, which is the default on Unix. This function measures wall time.
- c: Use time.clock to measure the time, which is the default on Windows and measures wall time. On Unix, resource.getrusage is used instead and returns the CPU user time.
- p: Use a precision of <P> digits to display the timing result. Default: 3.
- q: Quiet, do not print the result.
- o: Return a TimeitResult that can be stored in a variable to inspect the result in more detail.
We are going to estimate the time taken to calculate the sum of the inverse squares of all positive integer numbers up to a given n. Let's first define n:
n = 100000
06_profiling/06_profiling.ipynb
mavillan/SciProg
gpl-3.0
ac6db06de8acc6b55a124d176798d796
Let's time this computation in pure Python (using a list comprehension):
t1 = %timeit -o -n 100 sum([1. / i**2 for i in range(1, n)])
06_profiling/06_profiling.ipynb
mavillan/SciProg
gpl-3.0
5a3e29763b76a9619932ca0766ff857e
Now, let's use the %%timeit cell magic to time the same computation written on two lines:
%%timeit s = 0.
for i in range(1, n):
    s += 1. / i**2
06_profiling/06_profiling.ipynb
mavillan/SciProg
gpl-3.0
acebe557de659a3c92ef04800de46dd3
Finally, let's time the NumPy version of this computation:
t2 = %timeit -o -n 100 np.sum(1./np.arange(1., n) ** 2)
06_profiling/06_profiling.ipynb
mavillan/SciProg
gpl-3.0
c8a5ae1d12194417d625df4a2ca40ddf
The object returned by timeit contains information about the time measurements:
print("Type:")
print(type(t1))
print("\nTime of all runs:")
print(t1.all_runs)
print("\nBest measured time:")
print(t1.best)
print("\nWorst measured time:")
print(t1.worst)
print("\nCompilation time:")
print(t1.compile_time)
06_profiling/06_profiling.ipynb
mavillan/SciProg
gpl-3.0
ce9b1e032bd18a6ed4685685937e54d7
And we can compare the performance improvement with the quotient between the best measured times:
print("Performance improvement:")
print(t1.best / t2.best)
06_profiling/06_profiling.ipynb
mavillan/SciProg
gpl-3.0
cfc73ec4a1ced18ffa3d1db15e2192f7
2.2- Function Profiling: cProfile The %timeit magic command is often helpful, yet a bit limited when you need detailed information about what takes most of the execution time in your code. This magic command is meant for benchmarking rather than profiling. Python includes a profiler named cProfile that breaks down the execution time into the contributions of all called functions. IPython provides convenient ways to leverage this tool in an interactive session, through the %prun and %%prun magics. To introduce its usage we will use a known example: Random walks. Let's create a function generating random +1 and -1 values in an array:
def step(*shape):
    # Create a random n-vector with +1 or -1 values.
    return 2 * (np.random.random_sample(shape) < .5) - 1
06_profiling/06_profiling.ipynb
mavillan/SciProg
gpl-3.0
0ca1fcfbac82700fecfa5e11b8a719b9
Now, let's write the simulation code in a cell starting with %%prun in order to profile the entire simulation. The various options allow us to save the report in a file and to sort the first 10 results by cumulative time. Python's profiler creates a detailed report of the execution time of our code, function by function. For each function, we get the total number of calls, the total and cumulative times, and their per-call counterparts (division by ncalls). Note that: * The total time represents how long the interpreter stays in a given function, excluding the time spent in calls to subfunctions. * The cumulative time is similar but includes the time spent in calls to subfunctions.
a = np.array([1,2,3,4,5,6]) np.cumsum(a) %%prun -s cumulative -q -l 15 -T prun0 n = 10000 iterations = 500 x = np.cumsum(step(iterations, n), axis=0) bins = np.arange(-30, 30, 1) y = np.vstack([np.histogram(x[i,:], bins)[0] for i in range(iterations)])
06_profiling/06_profiling.ipynb
mavillan/SciProg
gpl-3.0
2938c18c3d2e38d26e7712a511458574
In the example, -s allows us to sort the report by a particular column, -q to suppress the pager output, -l to limit the number of lines displayed or to filter the results by function name, and -T to save the report in a text file. With the -D option, the raw profiling data can instead be dumped to a binary file; this database-like object contains all information about the profiling and can be analyzed through Python's pstats module (see the sketch below). For more info about the arguments run %prun?. The profiling report has been saved in a text file named prun0. Let's display it:
print(open('prun0', 'r').read()) def plot_helper(y, i, n): plt.figure(figsize=(10,7)) plt.plot(np.arange(-30,29), y[i], 'ro-') plt.title("Distribution of {0} simultaneous random walks at iteration {1}".format(n,i)) plt.show() interact(plot_helper, y=fixed(y), i=(0,500), n=fixed(10000))
06_profiling/06_profiling.ipynb
mavillan/SciProg
gpl-3.0
9cf7060b3261743bdcf9f65229766745
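As mentioned above, the raw statistics can be dumped with the -D option and analyzed with pstats. A minimal sketch, assuming a dump was saved with something like %prun -D prun0.dat ... (the file name here is hypothetical):

```python
import pstats

# Load the hypothetical binary dump produced by %prun -D prun0.dat ...
p = pstats.Stats('prun0.dat')
# Sort by cumulative time and show the 10 most expensive functions.
p.sort_stats('cumulative').print_stats(10)
```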
2.3- Line Profiling: line_profiler Python's native cProfile module and the corresponding %prun magic break down the execution time of code function by function. Sometimes, we may need an even more fine-grained analysis of code performance with a line-by-line report. To profile code line by line, we need an external Python module named line_profiler. To install it run one of these: * conda install line_profiler * pip install line_profiler Once installed, import the line_profiler IPython extension module that comes with the package:
%load_ext line_profiler
06_profiling/06_profiling.ipynb
mavillan/SciProg
gpl-3.0
447599bdf518bc6e2eda458f7a2cf045
This IPython extension module provides a %lprun magic command to profile a Python function line-by-line. Note: It works best when the function is defined in a file and not in the interactive namespace or in the notebook. Therefore, here we write our code in a Python script using the %%writefile cell magic:
%%writefile simulation.py import numpy as np def step(*shape): # Create a random n-vector with +1 or -1 values. return (2 * (np.random.random_sample(shape) < .5) - 1) def simulate(iterations, n=10000): s = step(iterations, n) x = np.cumsum(s, axis=0) bins = np.arange(-30, 30, 1) y = np.vstack([np.histogram(x[i,:], bins)[0] for i in range(iterations)]) return y
06_profiling/06_profiling.ipynb
mavillan/SciProg
gpl-3.0
6413a4f0523e4f626b24a09b65342005
Now, let's import this script into the interactive namespace so that we can execute and profile our code. The functions to be profiled need to be explicitly specified in the %lprun magic command. We also save the report in a file, lprof0:
import simulation %lprun -T lprof0 -f simulation.simulate simulation.simulate(500)
06_profiling/06_profiling.ipynb
mavillan/SciProg
gpl-3.0
d67435f59df037f662393bde57fe703e
Let's display the report:
print(open('lprof0', 'r').read())
06_profiling/06_profiling.ipynb
mavillan/SciProg
gpl-3.0
7f297c6552981bc8c2c3e8006dcc352a
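The same kind of report can also be produced without the magic, through line_profiler's Python API. A sketch, reusing the simulation module imported above:

```python
from line_profiler import LineProfiler

# Register the function to profile, run it once, and print the line-by-line report.
lp = LineProfiler(simulation.simulate)
lp.runcall(simulation.simulate, 500)
lp.print_stats()
```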
To see all the possible arguments run %lprun?. <div id='memory' /> 3.- Memory Profiling The methods described in the previous sections were about CPU time profiling. However, memory is also a critical factor. Writing memory-optimized code is not trivial and can really make your program faster. This is particularly important when dealing with large NumPy arrays. To profile memory usage we need an external module named memory_profiler. To install it, run one of these: * conda install memory_profiler * pip install memory_profiler Assuming that the simulation code has been loaded as shown above, we load the memory profiler IPython extension:
%load_ext memory_profiler
06_profiling/06_profiling.ipynb
mavillan/SciProg
gpl-3.0
959b8193575d080a30565bf89a399577
The memory_profiler package checks the memory usage of the interpreter at every line. The increment column allows us to spot those places in the code where large amounts of memory are allocated. Now, let's run the code under the control of the memory profiler:
%mprun -T mprof0 -f simulation.simulate simulation.simulate(1500)
06_profiling/06_profiling.ipynb
mavillan/SciProg
gpl-3.0
042afd0fa8abcde4e0e6a045b731971e
Let's show the results:
print(open('mprof0', 'r').read())
06_profiling/06_profiling.ipynb
mavillan/SciProg
gpl-3.0
08c2c9bbc143c3fba1a9d86ebbdfcccc
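The same measurement can also be taken programmatically with memory_profiler's memory_usage helper. A minimal sketch, reusing the simulation module imported above:

```python
from memory_profiler import memory_usage

# Sample the process memory every 0.1 s while simulate(1500) runs.
mem = memory_usage((simulation.simulate, (1500,)), interval=0.1)
print('Memory increment: {:.1f} MiB'.format(max(mem) - min(mem)))
```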
The memory_profiler IPython extension also comes with a %memit magic command that lets us benchmark the memory used by a single Python statement. Here is a simple example:
%memit np.random.randn(2000, 10000)
06_profiling/06_profiling.ipynb
mavillan/SciProg
gpl-3.0
ab7c294d6b68121f16da2567fc1aefa9
<div id='Application' /> 4.- Application: K-Means Clustering Algorithm This is an algorithm that finds structure in unlabeled data, i.e., it is an unsupervised learning algorithm. It is very simple and works as follows: 1.- Initialize $k$ cluster centroids. 2.- Repeat the following: 2.1.- For each point, compute which centroid is nearest to it. 2.2.- For each centroid, move its location to the mean location of the points assigned to it. Let's first generate a set of random 2D points:
points = np.vstack(((np.random.randn(150, 2) * 0.75 + np.array([1, 0])), (np.random.randn(50, 2) * 0.25 + np.array([-0.5, 0.5])), (np.random.randn(50, 2) * 0.5 + np.array([-0.5, -0.5])))) points.shape plt.figure(figsize=(7,7)) plt.scatter(points[:, 0], points[:, 1]) plt.grid() plt.show() def initialize_centroids(points, k): """returns k centroids from the initial points""" centroids = points.copy() np.random.shuffle(centroids) return centroids[:k]
06_profiling/06_profiling.ipynb
mavillan/SciProg
gpl-3.0
7171a42b770630cbc4c769caf4bf15a5
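As a side note, the same clustering can be obtained with scikit-learn's KMeans (a sketch, assuming scikit-learn is installed). We will keep using our own implementation below, since the whole point here is to have something to profile.

```python
from sklearn.cluster import KMeans

# Fit k-means with k=3 on the same 2D points and inspect the resulting centroids.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(points)
print(km.cluster_centers_)
```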
And let's visualize the chosen (initial) centroids:
centroids = initialize_centroids(points, 3) plt.figure(figsize=(7,7)) plt.scatter(points[:, 0], points[:, 1]) plt.scatter(centroids[:, 0], centroids[:, 1], c='r', s=100) plt.grid() plt.show()
06_profiling/06_profiling.ipynb
mavillan/SciProg
gpl-3.0
2ae0a17a40027e58e3519500e971a70b
The following function computes the closest centroid for each point in the dataset:
def closest_centroid(points,centroids): """returns an array containing the index to the nearest centroid for each point""" # computation of distance matrix m = points.shape[0] n = centroids.shape[0] D = np.zeros((m,n)) for i in range(m): for j in range(n): D[i,j] = np.sqrt( np.sum( (points[i]-centroids[j])**2 ) ) return np.argmin(D, axis=1) closest = closest_centroid(points,centroids)
06_profiling/06_profiling.ipynb
mavillan/SciProg
gpl-3.0
9e5a390285c047760266de14098d1bb4
And the next function moves/updates each centroid to the mean position of its assigned cluster of points:
def move_centroids(points, closest, centroids): """returns the new centroids assigned from the points closest to them""" return np.array([points[closest==k].mean(axis=0) for k in range(centroids.shape[0])]) move_centroids(points, closest, centroids) plt.subplot(121) plt.scatter(points[:, 0], points[:, 1]) plt.scatter(centroids[:, 0], centroids[:, 1], c='r', s=100) centroids = move_centroids(points, closest, centroids) plt.subplot(122) plt.scatter(points[:, 0], points[:, 1]) plt.scatter(centroids[:, 0], centroids[:, 1], c='r', s=100) plt.show() def main_loop(points, centroids, n_iter, tol=1e-8): for i in range(n_iter): closest = closest_centroid(points, centroids) _centroids = move_centroids(points, closest, centroids) if np.sum((_centroids-centroids)**2, axis=1).max() < tol: centroids = _centroids break centroids = _centroids return centroids
06_profiling/06_profiling.ipynb
mavillan/SciProg
gpl-3.0
1e7db8fb94d5ef37878aff6589551b4a
Now let's profile the execution of this function and its sub-function calls. We use a set of $10000$ points now:
points = np.vstack(((np.random.randn(5000, 2) * 0.75 + np.array([1, 0])), (np.random.randn(2500, 2) * 0.25 + np.array([-0.5, 0.5])), (np.random.randn(2500, 2) * 0.5 + np.array([-0.5, -0.5])))) %%prun -s cumulative -q -l 15 -T prun1 main_loop(points, centroids, 1000) print(open('prun1', 'r').read())
06_profiling/06_profiling.ipynb
mavillan/SciProg
gpl-3.0
98ef74064238e28573779c09c94ad451
Clearly the problem is the closest_centroid function! Now that we have isolated the problem, we do a line profile of this single function:
%lprun -T lprof2 -f closest_centroid closest_centroid(points, centroids) print(open('lprof2', 'r').read())
06_profiling/06_profiling.ipynb
mavillan/SciProg
gpl-3.0
97a6d74c5530dbdc04fe104f8a5dd8e9
As you might suspect, the problem is that NumPy arrays are not meant to be iterated over in pure Python; we have to implement this algorithm in a vectorized way (or make it faster with Numba/Cython; a Numba sketch follows below). The next is a re-implementation of the algorithm using native NumPy functions:
def closest_centroid(points, centroids): """returns an array containing the index to the nearest centroid for each point""" px = points[:,0].reshape((-1,1)) py = points[:,1].reshape((-1,1)) Dx = px - centroids[:,0].reshape((1,-1)) Dy = py - centroids[:,1].reshape((1,-1)) # distance matrix D = np.sqrt(Dx**2+Dy**2) return np.argmin(D, axis=1)
06_profiling/06_profiling.ipynb
mavillan/SciProg
gpl-3.0
e3a00843cb7a06d84d88be1cc11c35d4
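And, as a sketch of the Numba alternative mentioned above (assuming the numba package is installed; shown for illustration only, the vectorized NumPy version is what we profile next):

```python
import numpy as np
from numba import njit

@njit
def closest_centroid_numba(points, centroids):
    """Same double loop as the original version, compiled with Numba."""
    m, n = points.shape[0], centroids.shape[0]
    out = np.empty(m, dtype=np.int64)
    for i in range(m):
        best = 0
        best_d = np.inf
        for j in range(n):
            d = 0.0
            for k in range(points.shape[1]):
                diff = points[i, k] - centroids[j, k]
                d += diff * diff
            if d < best_d:
                best_d = d
                best = j
        out[i] = best
    return out
```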
Let's profile again:
%%prun -s cumulative -q -l 15 -T prun2 main_loop(points, centroids, 1000) print(open('prun2', 'r').read())
06_profiling/06_profiling.ipynb
mavillan/SciProg
gpl-3.0
79a3e7692f653ae81d649c65cf81584a
Init
import os import glob import pyfasta import numpy as np import pandas as pd from collections import Counter import matplotlib.pyplot as plt import scipy.stats as ss from fitter import Fitter from functools import partial %matplotlib inline %load_ext rpy2.ipython %%R library(dplyr) library(tidyr) library(ggplot2) if not os.path.isdir(workDir): os.makedirs(workDir) if not os.path.isdir(rnammerDir): os.makedirs(rnammerDir)
ipynb/bac_genome/SSU_genes_per_ng_DNA.ipynb
nick-youngblut/SIPSim
mit
18f562baf39f40e68c796216bfe267f4
Size distribution of bacterial genomes
p = os.path.join(genomeDir, '*.fasta') genomeFiles = glob.glob(p) print 'Number of genome files: {}'.format(len(genomeFiles))
ipynb/bac_genome/SSU_genes_per_ng_DNA.ipynb
nick-youngblut/SIPSim
mit
bbb3b7286a94676aa573f1b02ad177e6
Total length of each genome
total_seq_len = lambda x: sum([len(y) for y in x.values()]) def total_genome_lens(genome_files): genome_lens = {} for fasta in genome_files: name = os.path.split(fasta)[-1] name = os.path.splitext(name)[0] pyf = pyfasta.Fasta(fasta) genome_lens[name] = [total_seq_len(pyf)] return genome_lens genome_lens = total_genome_lens(genomeFiles) df_genome_len = pd.DataFrame(genome_lens).transpose() df_genome_len fig = plt.figure() ax = fig.add_subplot(111) ax.hist(df_genome_len.ix[:,0], bins=20)
ipynb/bac_genome/SSU_genes_per_ng_DNA.ipynb
nick-youngblut/SIPSim
mit
f11f2bd619aaeced7b323f92f94d766c
Fitting distribution
fo = Fitter(df_genome_len.ix[:,0]) fo.fit() fo.summary() genome_len_best_fit = fo.fitted_param['rayleigh'] genome_len_best_fit # test of distribution x = ss.rayleigh.rvs(*genome_len_best_fit, size=10000) fig = plt.figure() ax = plt.subplot(111) ax.hist(x, bins=50) fig.show()
ipynb/bac_genome/SSU_genes_per_ng_DNA.ipynb
nick-youngblut/SIPSim
mit
860ac8d504256ebac83ee77801fb6368
Distribution of 16S gene copies per genome rnammer run
%%bash -s "$genomeDir" "$rnammerDir" find $1 -name "*fasta" | \ perl -pe 's/.+\/|\.fasta//g' | \ xargs -n 1 -I % -P 30 bash -c \ "rnammer -S bac -m ssu -gff $2/%_rrn.gff -f $2/%_rrn.fna -xml $2/%_rrn.xml < $1/%.fasta" ## Summarizing the results !cd $rnammerDir; \ egrep -v "^#" *.gff | \ grep "16s_rRNA" | \ perl -pe 's/:/\t/' > ssu_summary.txt inFile = os.path.join(rnammerDir, 'ssu_summary.txt') inFH = open(inFile, 'rb') df_ssu = pd.read_csv(inFH, sep='\t', header=None) df_ssu.head() fig = plt.figure() ax = plt.subplot(111) ax.hist(df_ssu.ix[:,6], bins=50) fig.show() # filtering by gene length of >= 1000 bp df_ssu_f = df_ssu.loc[df_ssu[6] >= 1000] df_ssu_f.head() # counting number of 16S genes per genome ssu_count = Counter(df_ssu_f[1]) ssu_max = max(ssu_count.values()) # plotting distribution fig = plt.figure() ax = plt.subplot(111) ax.hist(ssu_count.values(), bins=ssu_max) fig.show()
ipynb/bac_genome/SSU_genes_per_ng_DNA.ipynb
nick-youngblut/SIPSim
mit
6d95faea57ad9034fd7aba987a7ef9e8
Fitting distribution
fo = Fitter(ssu_count.values()) fo.fit() fo.summary() ssu_ray_fit = fo.fitted_param['rayleigh'] ssu_ray_fit # test of distribution x = ss.rayleigh.rvs(*ssu_ray_fit, size=10000) fig = plt.figure() ax = plt.subplot(111) ax.hist(x, bins=50) fig.show() ssu_beta_fit = fo.fitted_param['beta'] ssu_beta_fit # test of distribution x = ss.beta.rvs(*ssu_beta_fit, size=10000) fig = plt.figure() ax = plt.subplot(111) ax.hist(x, bins=50) fig.show()
ipynb/bac_genome/SSU_genes_per_ng_DNA.ipynb
nick-youngblut/SIPSim
mit
3a7f8a58d1e7225f9d1d8b60f2b613cd
Notes Using rayleigh distribution Monte Carlo estimation of 16S gene copies per ng of DNA M.W. of dsDNA = (# nucleotides x 607.4) + 157.9
# example of calculations gradient_DNA_conc = 1e-9 # g of DNA avogadro = 6.022e23 # molecules/mole genome_len = 4000000 mw_genome = genome_len * 607.4 + 157.9 n_genomes = gradient_DNA_conc / mw_genome * avogadro ssu_copy_per_genome = 4 n_genomes * ssu_copy_per_genome def SSU_copies_in_ng_DNA(DNA_conc, genome_len, ssu_copy_per_genome): DNA_conc__g = DNA_conc * 1e-9 # ng --> g of DNA avogadros = 6.022e23 # molecules/mole mw_genome = genome_len * 607.4 + 157.9 n_genomes = DNA_conc__g / mw_genome * avogadros ssu_copies = n_genomes * ssu_copy_per_genome return ssu_copies # run SSU_copies_in_ng_DNA(1, 4000000, 4) def SSU_copies_MC(DNA_conc, genome_len_dist, ssu_copy_dist, n=100000): n_copy_dist = [] for i in range(n): genome_len = genome_len_dist(size=1)[0] ssu_copy_per_genome = ssu_copy_dist(size=1)[0] n_copies = SSU_copies_in_ng_DNA(DNA_conc, genome_len, ssu_copy_per_genome) n_copy_dist.append(n_copies) return n_copy_dist # distribution functions genome_len_dist = partial(ss.rayleigh.rvs, *genome_len_best_fit) ssu_copy_dist = partial(ss.rayleigh.rvs, *ssu_ray_fit) # monte carlo estimation of ssu copies in a gradient gradient_dna_conc__ng = 5000 n_copy_dist = SSU_copies_MC(gradient_dna_conc__ng, genome_len_dist, ssu_copy_dist, n=10000) fig = plt.figure() ax = plt.subplot(111) ax.hist(n_copy_dist, bins=50) fig.show() median_copy = int(np.median(n_copy_dist)) std_copy = int(np.std(n_copy_dist)) print 'Number of SSU copies in {} ng of DNA: {} +/- {}'.format(gradient_dna_conc__ng, median_copy, std_copy) def median_confidence_interval(data, confidence=0.95): a = 1.0*np.array(data) n = len(a) m, se = np.median(a), ss.sem(a) h = se * ss.t._ppf((1+confidence)/2., n-1) return m, m-h, m+h mci = median_confidence_interval(n_copy_dist) mci = map(int, mci) # lci,hci = ss.norm.interval(0.05, loc=np.mean(n_copy_dist), scale=np.std(n_copy_dist)) # copy_median = np.median(n_copy_dist) # mci = [copy_median, copy_median - lci, copy_median + hci] print 'Number of SSU copies in {} ng of DNA: {:,d} (low:{:,d}, high:{:,d})'.format(gradient_dna_conc__ng, *mci)
ipynb/bac_genome/SSU_genes_per_ng_DNA.ipynb
nick-youngblut/SIPSim
mit
1b22d762a6e1a8a13308bfb36110a0f0
Exercise: Now suppose you draw an M&M from bag2 and it's blue. What does that mean? Run the update to see what happens.
# Solution goes here
code/chap02.ipynb
NathanYee/ThinkBayes2
gpl-2.0
9a4a0c75bede2870362559d942fe7f4c
Exercises Exercise: This one is from one of my favorite books, David MacKay's "Information Theory, Inference, and Learning Algorithms": Elvis Presley had a twin brother who died at birth. What is the probability that Elvis was an identical twin?" To answer this one, you need some background information: According to the Wikipedia article on twins: "Twins are estimated to be approximately 1.9% of the world population, with monozygotic twins making up 0.2% of the total, and 8% of all twins."
# Solution goes here # Solution goes here
code/chap02.ipynb
NathanYee/ThinkBayes2
gpl-2.0
785e00c158fb3ec2b34b745b2111d419
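One way to set this problem up with plain arithmetic (a sketch only, not the book's Pmf-based approach; the 8% prior comes from the quoted statistic, and the likelihoods use the fact that identical twins are always the same sex while fraternal twins are the same sex about half the time):

```python
# Prior: among twins, about 8% are identical (monozygotic).
p_identical, p_fraternal = 0.08, 0.92

# Evidence: Elvis's twin was a brother (a same-sex twin).
like_identical = 1.0   # identical twins are always same-sex
like_fraternal = 0.5   # fraternal twins are same-sex roughly half the time

posterior = p_identical * like_identical / (
    p_identical * like_identical + p_fraternal * like_fraternal)
print(posterior)  # roughly 0.15
```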
The sex and race columns contain potentially interesting information on how gun deaths in the US vary by gender and race. Exploring both of these columns can be done with a similar dictionary counting technique to what we did earlier.
sex_counts = {} race_counts = {} for each in data: sex = each[5] if sex in sex_counts: sex_counts[sex] += 1 else: sex_counts[sex] = 1 for each in data: race = each[7] if race in race_counts: race_counts[race] += 1 else: race_counts[race] = 1 print(race_counts) print(sex_counts)
1. Python (Intermediate) Exploring Gun Deaths in the US/Basics.ipynb
Fetisoff/Portfolio
apache-2.0
dcf11632bb46150a55cbc23c49586c2f
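As a side note, the standard library's collections.Counter offers a more concise way to do the same tallying (a sketch using the same column indices as above):

```python
from collections import Counter

# Equivalent counts, one pass per column.
sex_counts_alt = Counter(row[5] for row in data)
race_counts_alt = Counter(row[7] for row in data)
print(race_counts_alt)
print(sex_counts_alt)
```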
However, our analysis only gives us the total number of gun deaths by race in the US. Unless we know the proportion of each race in the US, we won't be able to meaningfully compare those numbers. What I want to get is the rate of gun deaths per 100,000 people of each race.
f = open ('census.csv', 'r') census = list(csv.reader(f)) census mapping = { 'Asian/Pacific Islander': int(census[1][14]) + int(census[1][15]), 'Black': int(census[1][12]), 'Native American/Native Alaskan': int(census[1][13]), 'Hispanic': int(census[1][11]), 'White': int(census[1][10]) } race_per_hundredk = {} for key, value in race_counts.items(): result = race_counts[key] / mapping[key] * 100000 race_per_hundredk[key] = result race_per_hundredk #We can filter our results, and restrict them to the Homicide intent intents = [each[3] for each in data] races = [each[7] for each in data] homicide_race_counts = {} for i, each in enumerate(races): if intents[i] == 'Homicide': if each not in homicide_race_counts: homicide_race_counts[each] = 1 else: homicide_race_counts[each] += 1 homicide_race_counts homicide_race_per_hundredk = {} for key, value in homicide_race_counts.items(): result = homicide_race_counts[key] / mapping[key] * 100000 homicide_race_per_hundredk[key] = result homicide_race_per_hundredk
1. Python (Intermediate) Exploring Gun Deaths in the US/Basics.ipynb
Fetisoff/Portfolio
apache-2.0
149380266ffc89f92cc35b66567c6332
Finding I have found that some racial categories in the US have a higher gun-related homicide rate than others. For example, at least as evidenced by these statistics, the gun-related homicide rate for the Black category is roughly 10 times that of the White category and roughly 4 times that of the Hispanic category. Is there any link between month and homicide rate in the US? Let's figure that out!
month_homicide_rate = {} months = [int(each[2]) for each in data] for i, each in enumerate(months): if intents[i] == 'Homicide': if each not in month_homicide_rate: month_homicide_rate[each] = 1 else: month_homicide_rate[each] += 1 month_homicide_rate def months_diff(input_dict): max_value = 0 max_key = 0 min_value = input_dict[1] min_key = 0 for key, value in input_dict.items(): if value > max_value: max_value = value max_key = key if value < min_value: min_value = value min_key = key gap = round((max_value / min_value), 2) print ('max month is',max_key,'has',max_value,'and min month is',min_key,'has',min_value,'. The gap between min and max months is',gap,'!') months_diff(month_homicide_rate)
1. Python (Intermediate) Exploring Gun Deaths in the US/Basics.ipynb
Fetisoff/Portfolio
apache-2.0
c18e8d0f6ea4ff53240921d13e45904d
VA Top 15 violations by total revenue (revenue and total)
dc_df = df[(df.rp_plate_state.isin(['VA']))] dc_fines = dc_df.groupby(['violation_code']).fine.sum().reset_index('violation_code') fine_codes_15 = dc_fines.sort_values(by='fine', ascending=False)[:15] top_codes = dc_df[dc_df.violation_code.isin(fine_codes_15.violation_code)] top_violation_by_state = top_codes.groupby(['violation_description']).fine.sum() ax = top_violation_by_state.plot.barh() ax.xaxis.set_major_formatter(plt.FormatStrFormatter('%.0f')) plt.draw() top_violation_by_state = top_codes.groupby(['violation_description']).counter.sum() ax = top_violation_by_state.plot.barh() ax.xaxis.set_major_formatter(plt.FormatStrFormatter('%.0f')) plt.draw()
notebooks/Top 15 Violations by Revenue And Total for VA.ipynb
ndanielsen/dc_parking_violations_data
mit
19ead5d59b3b36917508604f18212e46
VA Top 15 violations by total tickets (revenue and total)
dc_df = df[(df.rp_plate_state.isin(['VA']))] dc_fines = dc_df.groupby(['violation_code']).counter.sum().reset_index('violation_code') fine_codes_15 = dc_fines.sort_values(by='counter', ascending=False)[:15] top_codes = dc_df[dc_df.violation_code.isin(fine_codes_15.violation_code)] top_violation_by_state = top_codes.groupby(['violation_description']).fine.sum() ax = top_violation_by_state.plot.barh() ax.xaxis.set_major_formatter(plt.FormatStrFormatter('%.0f')) plt.draw() top_violation_by_state = top_codes.groupby(['violation_description']).counter.sum() ax = top_violation_by_state.plot.barh() ax.xaxis.set_major_formatter(plt.FormatStrFormatter('%.0f')) plt.draw()
notebooks/Top 15 Violations by Revenue And Total for VA.ipynb
ndanielsen/dc_parking_violations_data
mit
e6817c1df613fc831e128b92224a64ba
Convolution and Max Pooling Layer Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling: * Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor. * Apply a convolution to x_tensor using weight and conv_strides. * We recommend you use same padding, but you're welcome to use any padding. * Add bias * Add a nonlinear activation to the convolution. * Apply Max Pooling using pool_ksize and pool_strides. * We recommend you use same padding, but you're welcome to use any padding. Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer. You're free to use any TensorFlow package for all the other layers.
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides): """ Apply convolution then max pooling to x_tensor :param x_tensor: TensorFlow Tensor :param conv_num_outputs: Number of outputs for the convolutional layer :param conv_ksize: kernel size 2-D Tuple for convolution :param conv_strides: Stride 2-D Tuple for convolution :param pool_ksize: kernel size 2-D Tuple for pool :param pool_strides: Stride 2-D Tuple for pool : return: A tensor that represents convolution and max pooling of x_tensor """ # TODO: Implement Function print('Conv_ksize: ', conv_ksize, ' Conv_strides: ', conv_strides, ' Conv output depth:', conv_num_outputs, \ x_tensor.get_shape().as_list(), ' Pool ksize: ', pool_ksize, ' Pool strides: ', pool_strides) #Convolution and max pool Parameters input_depth = x_tensor.get_shape().as_list()[3] output_depth = conv_num_outputs weight = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], input_depth, output_depth], mean=0.0, stddev=0.1)) biases = tf.Variable(tf.truncated_normal([output_depth])) strides = [1, conv_strides[0], conv_strides[1], 1] pool_strides = [1, pool_strides[0], pool_strides[1], 1] #Convolution & Max pool conv2d_1 = tf.nn.conv2d(x_tensor, weight, strides, padding='SAME') conv2d_1 = tf.nn.bias_add(conv2d_1, biases) conv2d_1 = tf.nn.relu(conv2d_1) conv2d_1 = tf.nn.max_pool(conv2d_1, [1, pool_ksize[0], pool_ksize[1], 1], pool_strides, padding='SAME') return conv2d_1 """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_con_pool(conv2d_maxpool)
image-classification/.ipynb_checkpoints/dlnd_image_classification-checkpoint.ipynb
elenduuche/deep-learning
mit
b676145868e2ba37c8ecba1f1b0746f7
Fully-Connected Layer Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). You can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer.
def fully_conn(x_tensor, num_outputs): """ Apply a fully connected layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second dimension is num_outputs. """ # TODO: Implement Function weight_rows = x_tensor.get_shape().as_list()[1] weight = tf.Variable(tf.truncated_normal([weight_rows, num_outputs], mean=0.0, stddev=0.1)) biases = tf.Variable(tf.truncated_normal([num_outputs])) fc1 = tf.add(tf.matmul(x_tensor, weight), biases) fc1 = tf.nn.relu(fc1) return fc1 """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_fully_conn(fully_conn)
image-classification/.ipynb_checkpoints/dlnd_image_classification-checkpoint.ipynb
elenduuche/deep-learning
mit
a830f1f51ecdb23787165d0f8e52ba5c
Output Layer Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). You can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer. Note: Activation, softmax, or cross entropy shouldn't be applied to this.
def output(x_tensor, num_outputs): """ Apply a output layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second dimension is num_outputs. """ # TODO: Implement Function weight = tf.Variable(tf.truncated_normal([x_tensor.get_shape().as_list()[1], num_outputs], mean=0.0, stddev=0.1)) biases = tf.Variable(tf.zeros([num_outputs])) out = tf.add(tf.matmul(x_tensor, weight), biases) return out """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_output(output)
image-classification/.ipynb_checkpoints/dlnd_image_classification-checkpoint.ipynb
elenduuche/deep-learning
mit
f015cf61e811fba9afd4980993d818ac
Create Convolutional Model Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model: Apply 1, 2, or 3 Convolution and Max Pool layers Apply a Flatten Layer Apply 1, 2, or 3 Fully Connected Layers Apply an Output Layer Return the output Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
def conv_net(x, keep_prob): """ Create a convolutional neural network model : x: Placeholder tensor that holds image data. : keep_prob: Placeholder tensor that hold dropout keep probability. : return: Tensor that represents logits """ # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers # Play around with different number of outputs, kernel size and stride # Function Definition from Above: # conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides) conv2d_1 = conv2d_maxpool(x, 10, (5, 5), (1, 1), (2, 2), (2, 2)) conv2d_2 = conv2d_maxpool(conv2d_1, 32, (5, 5), (1, 1), (2, 2), (2, 2)) conv2d_3 = conv2d_maxpool(conv2d_2, 64, (5, 5), (1, 1), (2, 2), (2, 2)) # TODO: Apply a Flatten Layer # Function Definition from Above: # flatten(x_tensor) flattened_tensor = flatten(conv2d_3) # TODO: Apply 1, 2, or 3 Fully Connected Layers # Play around with different number of outputs # Function Definition from Above: # fully_conn(x_tensor, num_outputs) fc1 = fully_conn(flattened_tensor, 64) fc1 = tf.nn.dropout(fc1, keep_prob) fc2 = fully_conn(fc1, 32) fc2 = tf.nn.dropout(fc2, keep_prob) # TODO: Apply an Output Layer # Set this to the number of classes # Function Definition from Above: # output(x_tensor, num_outputs) logits = output(fc2, 10) # TODO: return output return logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ ############################## ## Build the Neural Network ## ############################## # Remove previous weights, bias, inputs, etc.. tf.reset_default_graph() # Inputs x = neural_net_image_input((32, 32, 3)) y = neural_net_label_input(10) keep_prob = neural_net_keep_prob_input() # Model logits = conv_net(x, keep_prob) # Name logits Tensor, so that is can be loaded from disk after training logits = tf.identity(logits, name='logits') # Loss and Optimizer cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y)) optimizer = tf.train.AdamOptimizer().minimize(cost) # Accuracy correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy') tests.test_conv_net(conv_net)
image-classification/.ipynb_checkpoints/dlnd_image_classification-checkpoint.ipynb
elenduuche/deep-learning
mit
19cc5daa6d910598aa07c452e7a7f834
Reading the data
def loadContributions(file, withsexe=False): contributions = pd.read_json(path_or_buf=file, orient="columns") rows = []; rindex = []; for i in range(0, contributions.shape[0]): row = {}; row['id'] = contributions['id'][i] rindex.append(contributions['id'][i]) if (withsexe): if (contributions['sexe'][i] == 'Homme'): row['sexe'] = 0 else: row['sexe'] = 1 for question in contributions['questions'][i]: if (question.get('Reponse')) and (question['texte'][0:5] != 'Savez') and (question['titreQuestion'][-2:] != '10'): row[question['titreQuestion']+' : '+question['texte']] = 1 for criteres in question.get('Reponse'): # print(criteres['critere'].keys()) row[question['titreQuestion']+'. (Réponse) '+question['texte']+' -> '+str(criteres['critere'].get('texte'))] = 1 rows.append(row) df = pd.DataFrame(data=rows) df.fillna(0, inplace=True) return df df = loadContributions('../data/EGALITE2.brut.json', True) df.fillna(0, inplace=True) df.index = df['id'] #df.to_csv('consultation_an.csv', format='%d') #df.columns = ['Q_' + str(col+1) for col in range(len(df.columns) - 2)] + ['id' , 'sexe'] df.head()
exploitation/analyse_quanti_theme2.ipynb
regardscitoyens/consultation_an
agpl-3.0
9ff73a1d87745c4ad62057b0c20e3fbf
Permutation t-test on source data with spatio-temporal clustering This example tests if the evoked response is significantly different between two conditions across subjects. Here just for demonstration purposes we simulate data from multiple subjects using one subject's data. The multiple comparisons problem is addressed with a cluster-level permutation test across space and time.
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr> # Eric Larson <larson.eric.d@gmail.com> # License: BSD-3-Clause import os.path as op import numpy as np from numpy.random import randn from scipy import stats as stats import mne from mne.epochs import equalize_epoch_counts from mne.stats import (spatio_temporal_cluster_1samp_test, summarize_clusters_stc) from mne.minimum_norm import apply_inverse, read_inverse_operator from mne.datasets import sample print(__doc__)
0.24/_downloads/ca1574468d033ed7a4e04f129164b25b/20_cluster_1samp_spatiotemporal.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
3fb6c486a73649a23c55d819beb41934
Transform to common cortical space Normally you would read in estimates across several subjects and morph them to the same cortical space (e.g. fsaverage). For example purposes, we will simulate this by just having each "subject" have the same response (just noisy in source space) here. <div class="alert alert-info"><h4>Note</h4><p>Note that for 7 subjects with a two-sided statistical test, the minimum significance under a permutation test is only p = 1/(2 ** 6) = 0.015, which is large.</p></div>
n_vertices_sample, n_times = condition1.data.shape n_subjects = 6 print('Simulating data for %d subjects.' % n_subjects) # Let's make sure our results replicate, so set the seed. np.random.seed(0) X = randn(n_vertices_sample, n_times, n_subjects, 2) * 10 X[:, :, :, 0] += condition1.data[:, :, np.newaxis] X[:, :, :, 1] += condition2.data[:, :, np.newaxis]
0.24/_downloads/ca1574468d033ed7a4e04f129164b25b/20_cluster_1samp_spatiotemporal.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
9461b31c9f8ed62b63f7abd428ddb87e
Preparation Selection of pass-by and section Selection of the pass-by. In total we have the following passby IDs:
print('passby IDs:', list(passby.keys()))
DSP/auswertungLS.ipynb
e-sr/SDWirkungNi
cc0-1.0
21f5b0b0361bb335b2768ed8d86bf680
Selection of a section with a light barrier: Q1, Q4
E = passby['14']['Q4'] # print('Signal ID(with corresponding .mat file):', E['ID']) LSignals = {'LS':E['signals']['LS']}
DSP/auswertungLS.ipynb
e-sr/SDWirkungNi
cc0-1.0
bdb193923a46ccda939d2c26f4913b4e
Detection of the passage times (tPeaks) of each bogie When the light barrier (LS) is obscured by a wheel, a peak appears in the signal. This allows the passage times of each bogie to be estimated. The function detect_weel_times implements this calculation.
tPeaks = detect_weel_times(LSignals['LS'], decimation = 8 )
DSP/auswertungLS.ipynb
e-sr/SDWirkungNi
cc0-1.0
d33a02247680def4f180536cf0b59de6
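Under the hood, this kind of wheel detection can be sketched with scipy.signal.find_peaks. The snippet below is a rough illustration only: it assumes plain NumPy arrays for the time axis and the light-barrier signal, and the height/separation thresholds are hypothetical; the actual detect_weel_times implementation may differ (e.g. in its thresholds and decimation).

```python
import numpy as np
from scipy.signal import find_peaks

def wheel_passage_times(t, ls, min_height=0.5, min_separation_s=0.05):
    """Return the times at which the light-barrier signal peaks (one peak per wheel)."""
    dt = t[1] - t[0]                               # assume a uniform time grid
    distance = max(1, int(min_separation_s / dt))  # minimum spacing between peaks, in samples
    peaks, _ = find_peaks(ls, height=min_height, distance=distance)
    return t[peaks]
```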
The result can be seen in the next figure:
f,ax = plt.subplots() LSignals['LS'].plot(ax=ax) for tp in tPeaks: ax.axvline(tp,color='red',alpha=0.5) ax.set_xbound(tPeaks.min()-0.1, tPeaks.max()+0.1)
DSP/auswertungLS.ipynb
e-sr/SDWirkungNi
cc0-1.0
b93cddd48d86c0d2fa2d9fcefefff318
Mean pass-by speed and its change The estimation is done in two steps and is implemented in the train_speed function: from tPeaks, the speed of each bogie can be estimated using the axle spacing within the bogie. Then, by means of a regression (a robust regression, so that outliers are weighted less), the mean pass-by speed and the change of the pass-by speed can be estimated. A plot of the results is shown below.
_,_,_ = train_speed(tPeaks, axleDistance=2, plot=True)
DSP/auswertungLS.ipynb
e-sr/SDWirkungNi
cc0-1.0
3fca34a6bf2bbb180035f491b1b12a50
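A rough sketch of this two-step estimate follows. It is illustrative only: the helper below and its assumption of two axles per bogie are hypothetical, and train_speed may use a different robust estimator than the Theil-Sen fit shown here.

```python
import numpy as np
from scipy.stats import theilslopes

def speed_estimate(tPeaks, axleDistance=2.0):
    """Mean pass-by speed and its change, from per-axle passage times."""
    t = np.sort(np.asarray(tPeaks)).reshape(-1, 2)  # assume 2 axles per bogie
    t_mid = t.mean(axis=1)                          # passage time of each bogie
    v = axleDistance / (t[:, 1] - t[:, 0])          # speed of each bogie in m/s
    # Robust linear fit of speed vs. time: the slope is the speed change,
    # the fitted value at the mean time is taken as the mean speed.
    slope, intercept, _, _ = theilslopes(v, t_mid)
    v_mean = intercept + slope * t_mid.mean()
    return v_mean, slope
```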