Plot the PM2.5 concentration, a measure of particulate air pollution.
%matplotlib inline sql=""" SELECT county_name, mrfei, pm25_concentration, percent as percent_poverty, geometry FROM geo as counties LEFT JOIN hf_total ON hf_total.gvid = counties.gvid LEFT JOIN aq_total ON aq_total.gvid = counties.gvid LEFT JOIN pr_total ON pr_total.gvid = counties.gvid; """ w.geoframe(sql).plot(column='pm25_concentration')
test/bundle_tests/build.example.com/classification/Using SQL JOINS.ipynb
CivicKnowledge/ambry
bsd-2-clause
07fb5b4797b70415b74a537dc1a2c07d
Create Dotstar object You can pass several arguments to the Dotstar class constructor to change the behavior of the LED class. ds = dotstar.Dotstar(led_count=72, bus=0, init_data=0, init_brightness=0) Parameters: led_count = some_number_of_leds Change the number of LEDs in your strip. Note that this counts the raw number of individual LEDs, not how many strips/devices you have. Make sure this is set so all the LEDs are used. bus = 0 Change the SPI bus. If you do not specify one, it will be initialized on bus 0, which is the default for the Minnowboard. init_data = some_brightness_value + some_hue Change the initial value of the LED strip. By default all the LEDs are initialized to the first color pushed. If you plan on having all the LEDs start off dark, don't set anything here. init_brightness = some_brightness Change the initial brightness of the LEDs. Valid brightness settings range from 0 to 10, representing the intensity of the LEDs from 0% to 100%. If you want the LEDs to start off dark, set this to 0 at the start. Here is a typical initialization, with all 72 LEDs (or 2 Adafruit Dotstar LED strips connected together) turned off:
ds = dotstar.Dotstar(led_count=72*3,init_brightness=0)
Dotstar-LED.ipynb
MinnowBoard/fishbowl-notebooks
mit
70042b0e1ccd902c5b9a39801a11dff3
Class Methods Now we can make use of the functions in the class to set the colors and intensity of each LED. The class works by populating a deque with the LED values you want, and then pushing all the data at once to the LED strip. The following methods provide the most basic functionality: Dotstar.set(which_LED, brightness_level, red_hue, blue_hue, green_hue) This function adds the LED to activate to the queue. The brightness and hue options are on a scale of 0 to 256, and the LED selection runs from 0 up to the configured led_count. Dotstar.draw() This function draws the created deque to the LED strip. It also clears the current deque, allowing you to populate another one. Example Run this section to create a sequence of 5 red LEDs that move along the length of the strip. It looks like the LED array on KITT from Knight Rider.
while True: for current_led in range (4, ds.led_count-4): ds.set(current_led-4, 0, 0, 0, 0) ds.set(current_led-2, 10, 100, 0, 0) ds.set(current_led-1, 50, 200, 0, 0) ds.set(current_led, 50, 250, 0, 0) ds.set(current_led+1, 50, 200, 0, 0) ds.set(current_led+2, 50, 150, 0, 0) ds.set(current_led+4, 0, 0, 0, 0) ds.draw() for current_led in range(ds.led_count-5, 4, -1): ds.set(current_led-3,10,100,0,0) ds.set(current_led-2,10,150,0,0) ds.set(current_led-1,50,200,0,0) ds.set(current_led,50,250,0,0) ds.set(current_led+1,50,200,0,0) ds.set(current_led+2,50,150,0,0) ds.set(current_led+4,0,0,0,0) ds.draw()
Dotstar-LED.ipynb
MinnowBoard/fishbowl-notebooks
mit
baaae5bbe0646e17263510eedd17026e
Next-word prediction task Part 1: Data preparation 1.1. Loading data Load and split the text of our story
def load_data(filename): with open(filename) as f: data = f.readlines() data = [x.strip().lower() for x in data] data = [data[i].split() for i in range(len(data))] data = np.array(data) data = np.reshape(data, [-1, ]) print(data) return data #Run the cell train_file ='data/story.txt' train_data = load_data(train_file) print("Loaded training data...") print(len(train_data))
word_prediction_lstm/TP3-notebook.ipynb
fablln/Deep-Learning
mit
7e429e5a621b5e6c2662dc21c970d02e
1.2. Symbols encoding The LSTM inputs can only be numbers. A way to convert words (symbols or any items) to numbers is to assign a unique integer to each word. This process is often based on frequency of occurrence for efficient coding purposes. Here, we define a function to build an indexed word dictionary (word->number). The "build_vocabulary" function builds both: Dictionary : used for encoding words to numbers for the LSTM inputs Reverse dictionary : used for decoding the outputs of the LSTM into words (and punctuation). For example, in the story above, we have 113 individual words. The "build_vocabulary" function builds a dictionary with the following entries ['the': 0], [',': 1], ['company': 85],...
def build_vocabulary(words): count = collections.Counter(words).most_common() dic= dict() for word, _ in count: dic[word] = len(dic) reverse_dic= dict(zip(dic.values(), dic.keys())) return dic, reverse_dic
word_prediction_lstm/TP3-notebook.ipynb
fablln/Deep-Learning
mit
4c7c3defcf9e88a4f734c3a1cc72777a
Run the cell below to display the vocabulary
dictionary, reverse_dictionary = build_vocabulary(train_data) vocabulary_size= len(dictionary) print "Dictionary size (Vocabulary size) = ", vocabulary_size print("\n") print("Dictionary : \n") print(dictionary) print("\n") print("Reverted Dictionary : \n" ) print(reverse_dictionary)
word_prediction_lstm/TP3-notebook.ipynb
fablln/Deep-Learning
mit
213218aa7e7606d8583381378ec55c4e
Part 2 : LSTM Model in TensorFlow Since you have defined how the data will be modeled, you can now develop an LSTM model to predict the word following a sequence of 3 words. 2.1. Model definition Define a 2-layer LSTM model. For this, use the following classes from the tensorflow.contrib library: rnn.BasicLSTMCell(number of hidden units) rnn.static_rnn(rnn_cell, data, dtype=tf.float32) rnn.MultiRNNCell([cell_1, cell_2, ...]) You may need some tensorflow functions (https://www.tensorflow.org/api_docs/python/tf/) : - tf.split - tf.reshape - ...
def lstm_model(x, w, b, n_input, n_hidden): # reshape to [1, n_input] x = tf.reshape(x, [-1, n_input]) # Generate a n_input-element sequence of inputs # (eg. [had] [a] [general] -> [20] [6] [33]) x = tf.split(x,n_input,1) # 1-layer LSTM with n_hidden units. rnn_cell = rnn.BasicLSTMCell(n_hidden) #improvement #rnn_cell = rnn.MultiRNNCell([rnn.BasicLSTMCell(n_hidden),rnn.BasicLSTMCell(n_hidden)]) #rnn_cell = rnn.MultiRNNCell([rnn.BasicLSTMCell(n_hidden),rnn.BasicLSTMCell(n_hidden),rnn.BasicLSTMCell(n_hidden)]) # generate prediction outputs, states = rnn.static_rnn(rnn_cell, x, dtype=tf.float32) # there are n_input outputs but # we only want the last output return tf.matmul(outputs[-1], w['out']) + b['out']
word_prediction_lstm/TP3-notebook.ipynb
fablln/Deep-Learning
mit
8bede161be91dd1ca4473e8534bc6c1a
Training Parameters and constants
# Training Parameters learning_rate = 0.001 epochs = 50000 display_step = 1000 n_input = 3 #For each LSTM cell that you initialise, supply a value for the hidden dimension, number of units in LSTM cell n_hidden = 64 # tf Graph input x = tf.placeholder("float", [None, n_input, 1]) y = tf.placeholder("float", [None, vocabulary_size]) # LSTM weights and biases weights = { 'out': tf.Variable(tf.random_normal([n_hidden, vocabulary_size]))} biases = {'out': tf.Variable(tf.random_normal([vocabulary_size])) } #build the model pred = lstm_model(x, weights, biases,n_input,n_hidden)
word_prediction_lstm/TP3-notebook.ipynb
fablln/Deep-Learning
mit
7aa937a6d97d1b079e574ad727f1595e
Define the Loss/Cost and optimizer
# Loss and optimizer cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y)) #cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1)) #cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(tf.clip_by_value(pred,-1.0,1.0)), reduction_indices=1)) optimizer = tf.train.RMSPropOptimizer(learning_rate).minimize(cost) # Model evaluation correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
word_prediction_lstm/TP3-notebook.ipynb
fablln/Deep-Learning
mit
8cece8e39da2d12e1852e0cf7a54b59c
Comment: We decided to apply the softmax and compute the cost in a single step. This way we can use the method softmax_cross_entropy_with_logits, which is more numerically stable in corner cases than applying the softmax and then computing the cross-entropy separately; a small sketch illustrating this is shown below. After that, we give you the test function.
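The following minimal NumPy sketch (an illustration only, not the TensorFlow implementation) shows the stability issue on made-up extreme logits: the naive softmax-then-log computation overflows, while a fused log-sum-exp formulation does not.

```python
import numpy as np

# Made-up extreme logits: a corner case chosen for illustration.
logits = np.array([1000.0, 0.0, -1000.0])
label = np.array([1.0, 0.0, 0.0])          # one-hot target

# Naive two-step version: softmax, then cross-entropy.
# np.exp(1000) overflows to inf, so the result is nan.
p_naive = np.exp(logits) / np.sum(np.exp(logits))
ce_naive = -np.sum(label * np.log(p_naive))

# Fused version using the log-sum-exp trick, which a combined
# softmax + cross-entropy op can apply internally.
m = np.max(logits)
log_softmax = logits - (m + np.log(np.sum(np.exp(logits - m))))
ce_stable = -np.sum(label * log_softmax)

print(ce_naive)   # nan
print(ce_stable)  # 0.0
```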
#run the cell def test(sentence, session, verbose=False): sentence = sentence.strip() words = sentence.split(' ') if len(words) != n_input: print("sentence length should be equel to", n_input, "!") try: symbols_inputs = [dictionary[str(words[i - n_input])] for i in range(n_input)] keys = np.reshape(np.array(symbols_inputs), [-1, n_input, 1]) onehot_pred = session.run(pred, feed_dict={x: keys}) onehot_pred_index = int(tf.argmax(onehot_pred, 1).eval()) words.append(reverse_dictionary[onehot_pred_index]) sentence = " ".join(words) if verbose: print(sentence) return reverse_dictionary[onehot_pred_index] except: print " ".join(["Word", words[i - n_input], "not in dictionary"])
word_prediction_lstm/TP3-notebook.ipynb
fablln/Deep-Learning
mit
6218bbe999ad89b5d3ed9f5470c6be76
Part 3 : LSTM Training During training, at each epoch, 3 words are taken from the training data and encoded to integers to form the input vector. The training label is a one-hot vector encoding the word that comes after the 3 input words. Display the loss and the training accuracy every 1000 iterations. Save the model at the end of training in the lstm_model folder.
# Initializing the variables init = tf.global_variables_initializer() saver = tf.train.Saver() start_time = time() # Launch the graph with tf.Session() as session: session.run(init) step = 0 offset = random.randint(0,n_input+1) end_offset = n_input + 1 acc_total = 0 loss_total = 0 writer.add_graph(session.graph) while step < epochs: # Generate a minibatch. Add some randomness on selection process. if offset > (len(train_data)-end_offset): offset = random.randint(0, n_input+1) symbols_in_keys = [ [dictionary[ str(train_data[i])]] for i in range(offset, offset+n_input) ] symbols_in_keys = np.reshape(np.array(symbols_in_keys), [-1, n_input, 1]) symbols_out_onehot = np.zeros([len(dictionary)], dtype=float) symbols_out_onehot[dictionary[str(train_data[offset+n_input])]] = 1.0 symbols_out_onehot = np.reshape(symbols_out_onehot,[1,-1]) _, acc, loss, onehot_pred = session.run([optimizer, accuracy, cost, pred], \ feed_dict={x: symbols_in_keys, y: symbols_out_onehot}) loss_total += loss acc_total += acc if (step+1) % display_step == 0: print("Iter= " + str(step+1) + ", Average Loss= " + \ "{:.6f}".format(loss_total/display_step) + ", Average Accuracy= " + \ "{:.2f}%".format(100*acc_total/display_step)) acc_total = 0 loss_total = 0 symbols_in = [train_data[i] for i in range(offset, offset + n_input)] symbols_out = train_data[offset + n_input] symbols_out_pred = reverse_dictionary[int(tf.argmax(onehot_pred, 1).eval())] print("%s - [%s] vs [%s]" % (symbols_in,symbols_out,symbols_out_pred)) step += 1 offset += (n_input+1) print("Optimization Finished!") print("Elapsed time: ", time() - start_time) print("Run on command line.") print("\ttensorboard --logdir=%s" % (logs_path)) print("Point your web browser to: http://localhost:6006/") save_path = saver.save(session, "model.ckpt") print("Model saved in file: %s" % save_path)
word_prediction_lstm/TP3-notebook.ipynb
fablln/Deep-Learning
mit
fca95f80698fc2e8f5ddd5cd458d01da
Comment: We created models with different numbers of layers, and found that the best accuracy is achieved with only 2 layers; using more or fewer layers gives a lower accuracy. Part 4 : Test your model 3.1. Next word prediction Load your model (using the model_saved variable given in the training session) and test the sentences : - 'get a little' - 'nobody tried to' - Try other sentences using words from the story's vocabulary.
with tf.Session() as sess: # Initialize variables sess.run(init) # Restore model weights from previously saved model saver.restore(sess, "./model.ckpt") print(test('get a little', sess)) print(test('nobody tried to', sess))
word_prediction_lstm/TP3-notebook.ipynb
fablln/Deep-Learning
mit
e1f275887923492d501a7a54c3f48f90
Comment: Here it looks like the RNN is working; it correctly predicts the next word. We should note that in this case it is difficult to check whether the RNN is actually overfitting the training data. 3.2. More fun with the Fable Writer! You will use the RNN/LSTM model learned in the previous question to create a new story/fable. For this you will choose 3 words from the dictionary to start your story and initialize your network. Using those 3 words, the RNN will generate the next word of the story. Using the last 3 words (the newly predicted one and the last 2 from the input) you will use the network to predict the 5th word of the story, and so on until your story is 5 sentences long. End your story with a period. To implement this, you will use the test function. This is the original fable; we will compare against it to check for possible overfitting: It was rather lonely for him all day, so he thought upon a plan by which he could get a little company and some excitement. He rushed down towards the village calling out "Wolf, Wolf," and the villagers came out to meet him, and some of them stopped with him for a considerable time. This pleased the boy so much that a few days afterwards he tried the same trick, and again the villagers came to his help. But shortly after this a Wolf actually did come out from the forest, and began to worry the sheep, and the boy of course cried out "Wolf, Wolf," still louder than before. But this time the villagers, who had been fooled twice before, thought the boy was again deceiving them, and nobody stirred to come to his help. So the Wolf made a good meal off the boy's flock, and when the boy complained, the wise man of the village said: "A liar will not be believed, even when he speaks the truth."
#Your implementation goes here with tf.Session() as sess: # Initialize variables sess.run(init) # Restore model weights from previously saved model saver.restore(sess, "./model.ckpt") #a sentence is concluded when we find a dot. fable = [random.choice(dictionary.keys()) for _ in range(3)] n_sentences = fable.count('.') offset = 0 while n_sentences < 5: next_word = test(' '.join(fable[offset:offset+3]), sess) fable.append(next_word) if next_word == '.': n_sentences += 1 offset+=1 print(' '.join(fable))
word_prediction_lstm/TP3-notebook.ipynb
fablln/Deep-Learning
mit
55ce792a71d56602bccbc6d5524dc253
Comment: This is interesting: the sentences make some sort of sense, but once we reach a period, the same sentence is repeated many times. This is probably due to overfitting and should be examined more deeply. The repeated sentence is different from the original one, but it is still always the same. We think this is because a period always starts the same sentence. Maybe we could add more layers and see what happens.
def load_data(filename): with open(filename) as f: data = f.readlines() data = [x.strip().lower() for x in data] data = [data[i].split() for i in range(len(data))] data = np.array(data) data = np.reshape(data, [-1, ]) return data train_file ='data/story.txt' train_data = load_data(train_file) def build_vocabulary(words): count = collections.Counter(words).most_common() dic= dict() for word, _ in count: dic[word] = len(dic) reverse_dic= dict(zip(dic.values(), dic.keys())) return dic, reverse_dic dictionary, reverse_dictionary = build_vocabulary(train_data) vocabulary_size= len(dictionary) import numpy as np import collections # used to build the dictionary import random import time from time import time import pickle # may be used to save your model import matplotlib.pyplot as plt #Import Tensorflow and rnn import tensorflow as tf from tensorflow.contrib import rnn def create_train_model(n_input = 3, n_layers = 2,verbose = False): tf.reset_default_graph() # Target log path logs_path = 'lstm_words' writer = tf.summary.FileWriter(logs_path) def lstm_model(x, w, b, n_input, n_hidden,n_layers): # reshape to [1, n_input] x = tf.reshape(x, [-1, n_input]) # Generate a n_input-element sequence of inputs # (eg. [had] [a] [general] -> [20] [6] [33]) x = tf.split(x,n_input,1) rnn_layers = [rnn.BasicLSTMCell(n_hidden) for _ in range(n_layers)] rnn_cell = rnn.MultiRNNCell(rnn_layers) # generate prediction outputs, states = rnn.static_rnn(rnn_cell, x, dtype=tf.float32) # there are n_input outputs but # we only want the last output return tf.matmul(outputs[-1], w['out']) + b['out'] # Training Parameters learning_rate = 0.001 epochs = 50000 display_step = 1000 #For each LSTM cell that you initialise, supply a value for the hidden dimension, number of units in LSTM cell n_hidden = 64 # tf Graph input x = tf.placeholder("float", [None, n_input, 1]) y = tf.placeholder("float", [None, vocabulary_size]) # LSTM weights and biases weights = { 'out': tf.Variable(tf.random_normal([n_hidden, vocabulary_size]))} biases = {'out': tf.Variable(tf.random_normal([vocabulary_size])) } #build the model pred = lstm_model(x, weights, biases,n_input,n_hidden,n_layers) # Loss and optimizer cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y)) #cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1)) #cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(tf.clip_by_value(pred,-1.0,1.0)), reduction_indices=1)) optimizer = tf.train.RMSPropOptimizer(learning_rate).minimize(cost) # Model evaluation correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) # Initializing the variables init = tf.global_variables_initializer() saver = tf.train.Saver() start_time = time() # Launch the graph with tf.Session() as session: session.run(init) step = 0 offset = random.randint(0,n_input+1) end_offset = n_input + 1 acc_total = 0 loss_total = 0 writer.add_graph(session.graph) while step < epochs: # Generate a minibatch. Add some randomness on selection process. 
if offset > (len(train_data)-end_offset): offset = random.randint(0, n_input+1) symbols_in_keys = [ [dictionary[ str(train_data[i])]] for i in range(offset, offset+n_input) ] symbols_in_keys = np.reshape(np.array(symbols_in_keys), [-1, n_input, 1]) symbols_out_onehot = np.zeros([len(dictionary)], dtype=float) symbols_out_onehot[dictionary[str(train_data[offset+n_input])]] = 1.0 symbols_out_onehot = np.reshape(symbols_out_onehot,[1,-1]) _, acc, loss, onehot_pred = session.run([optimizer, accuracy, cost, pred], \ feed_dict={x: symbols_in_keys, y: symbols_out_onehot}) loss_total += loss acc_total += acc if (step+1) % display_step == 0: if verbose or step+1 == epochs: print("Iter= " + str(step+1) + ", Average Loss= " + \ "{:.6f}".format(loss_total/display_step) + ", Average Accuracy= " + \ "{:.2f}%".format(100*acc_total/display_step)) acc_total = 0 loss_total = 0 symbols_in = [train_data[i] for i in range(offset, offset + n_input)] symbols_out = train_data[offset + n_input] symbols_out_pred = reverse_dictionary[int(tf.argmax(onehot_pred, 1).eval())] if verbose: print("%s - [%s] vs [%s]" % (symbols_in,symbols_out,symbols_out_pred)) step += 1 offset += (n_input+1) print("Optimization Finished!") print("Elapsed time: ", time() - start_time) print("Run on command line.") print("\ttensorboard --logdir=%s" % (logs_path)) print("Point your web browser to: http://localhost:6006/") save_path = saver.save(session, "model.ckpt") print("Model saved in file: %s" % save_path) #run the cell def test(sentence, session, verbose=False): sentence = sentence.strip() words = sentence.split(' ') if len(words) != n_input: print("sentence length should be equel to", n_input, "!") try: symbols_inputs = [dictionary[str(words[i - n_input])] for i in range(n_input)] keys = np.reshape(np.array(symbols_inputs), [-1, n_input, 1]) onehot_pred = session.run(pred, feed_dict={x: keys}) onehot_pred_index = int(tf.argmax(onehot_pred, 1).eval()) words.append(reverse_dictionary[onehot_pred_index]) sentence = " ".join(words) if verbose: print(sentence) return reverse_dictionary[onehot_pred_index] except: print " ".join(["Word", words[i - n_input], "not in dictionary"]) #a sentence is concluded when we find a dot. fable = [random.choice(dictionary.keys()) for _ in range(n_input)] #print(dictionary) #print(fable) n_sentences = fable.count('.') offset = 0 while n_sentences < 5 and len(fable) < 200: next_word = test(' '.join(fable[offset:offset+n_input]), session) fable.append(next_word) if next_word == '.': n_sentences += 1 offset+=1 print(' '.join(fable))
word_prediction_lstm/TP3-notebook.ipynb
fablln/Deep-Learning
mit
11b6b73328ec896b732dc9fd98710adf
3.3. Play with the number of inputs The number of inputs in our example is 3; see what happens when you use other values (1 and 5). n_input = 1
create_train_model(n_input = 1, n_layers = 1) create_train_model(n_input = 1, n_layers = 2) create_train_model(n_input = 1, n_layers = 3)
word_prediction_lstm/TP3-notebook.ipynb
fablln/Deep-Learning
mit
7129476122dc268590c1cf84fbb384a4
Comment: Here we see that when the input size is 1 we obtain a bad model regardless of the number of layers; this is because we are basically predicting a word based only on the preceding word, which is not enough to create a sentence that makes any sense. Looking at the prediction accuracy, it is very low. n_input = 3
create_train_model(n_input = 3, n_layers = 1) create_train_model(n_input = 3, n_layers = 2) create_train_model(n_input = 3, n_layers = 3)
word_prediction_lstm/TP3-notebook.ipynb
fablln/Deep-Learning
mit
a6358def6168584636ec37ca97f8a6ca
Comment: Here we see some sentences that make sense, but there is a tendency to repeat sentences from the training fable. This is interesting, because during training the individual triples were chosen randomly and not sequentially. Somehow, the net learned the training fable. n_input = 5
create_train_model(n_input = 5, n_layers = 1) create_train_model(n_input = 5, n_layers = 2) create_train_model(n_input = 5, n_layers = 3)
word_prediction_lstm/TP3-notebook.ipynb
fablln/Deep-Learning
mit
5aadff3e6a4f7deaf7896fba7167b161
Measures of central tendency identify values that lie at the center of a sample and help statisticians summarize their data. The most common measures of central tendency are the mean, median, and mode. Although you should be familiar with these values, they are defined as: MEAN = sum(sample) / len(sample) MEDIAN = middle value of sorted(sample) (for an even-length sample, the average of the two middle values) MODE: element(s) with the highest frequency
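As a quick illustration of these definitions, here is a tiny worked example on a made-up sample (the actual analysis below uses the DoseNet cpm data):

```python
from collections import Counter

sample = [2, 3, 3, 5, 7, 10]            # made-up sample, for illustration only

mean = sum(sample) / len(sample)        # (2+3+3+5+7+10)/6 = 5.0

s = sorted(sample)
n = len(s)
# even-length sample: the median is the average of the two middle values
median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2   # (3+5)/2 = 4.0

counts = Counter(sample)
top = max(counts.values())
mode = [value for value, c in counts.items() if c == top]          # [3]

print(mean, median, mode)               # 5.0 4.0 [3]
```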
mean_cpm1 = sum(cpm)/len(cpm) print('mean CPM from its definition is: %s' %mean_cpm1) mean_cpm2 = np.mean(cpm) print('mean CPM from built-in function is: %s' %mean_cpm2) if len(cpm)%2 == 0: median_cpm1 = sorted(cpm)[int(len(cpm)/2)] else: median_cpm1 = (sorted(cpm)[int((len(cpm)+1)/2)]+sorted(cpm)[int((len(cpm)-1)/2)]) / 2 print('median CPM from its definition is: %s' %median_cpm1) median_cpm2 = np.median(cpm) print('median CPM from built-in function is: %s' %median_cpm2) from collections import Counter counter = Counter(cpm) _,val = counter.most_common(1)[0] mode_cpm1 = [i for i, target in counter.items() if target == val] print('mode(s) CPM from its definition is: %s' %mode_cpm1) import statistics # note: this function fails if there are two statistical modes mode_cpm2 = statistics.mode(cpm) print('mode(s) CPM from built-in function is: %s' %mode_cpm2) fig, ax = plt.subplots() ax.plot(timedata,cpm,alpha=0.3) # alpha modifier adds transparency, I add this so the CPM plot doesn't overpower the mean, median, and mode ax.plot([timedata[0],timedata[-1]], [mean_cpm1,mean_cpm1], label='mean CPM') ax.plot([timedata[0],timedata[-1]], [median_cpm1,median_cpm1], 'r:', label='median CPM') ax.plot([timedata[0],timedata[-1]], [mode_cpm1,mode_cpm1], 'c--', label='mode CPM',alpha=0.5) plt.legend(loc='best') plt.ylim(ymax = 5, ymin = .5) ax.xaxis.set_major_locator(mdates.MonthLocator()) ax.xaxis.set_major_formatter(mdates.DateFormatter('%b-%Y')) ax.xaxis.set_minor_locator(mdates.DayLocator()) plt.xticks(rotation=15) plt.title('DoseNet Data: Etcheverry Roof\nCPM vs. Time with mean, mode, and median') plt.ylabel('CPM') plt.xlabel('Date') fig, ax = plt.subplots() y,x, _ = plt.hist(cpm,bins=30, alpha=0.3, label='CPM distribution') ax.plot([mean_cpm1,mean_cpm1], [0,y.max()],label='mean CPM') ax.plot([median_cpm1, median_cpm1], [0,y.max()], 'r:', label='median CPM') ax.plot([mode_cpm1,mode_cpm1], [0,y.max()], 'c--', label='mode CPM') plt.legend(loc='best') plt.title('DoseNet Data: Etcheverry Roof\nCPM Histogram with mean, mode, and median') plt.ylabel('Frequency') plt.xlabel('CPM')
Programming Lesson Modules/Module 8- Measures of Central Tendency.ipynb
bearing/dosenet-analysis
mit
7f8f74fb609c7115170cc3cd45d6f551
Define An Address The following address is that of a Walgreens, as an example.
address = RefuseQueryAddress( house_number=2727, direction='S', street_name='27th', street_type='st')
notebooks/SimpleQuery.ipynb
tomislacker/python-mke-trash-pickup
unlicense
5108a0ad32b388bc5ff27dd8dfba63aa
Execute The Query Call the RefuseQuery class to fetch, parse, and return the status of future refuse pickups.
pickup = RefuseQuery.Execute(address)
notebooks/SimpleQuery.ipynb
tomislacker/python-mke-trash-pickup
unlicense
fbaf812d9908387c8729894979c4b610
Assess Results Look at the response object to determine what route the address is on, when the next garbage pickup is, and when the next recycle pickup will likely be.
print(repr(pickup))
notebooks/SimpleQuery.ipynb
tomislacker/python-mke-trash-pickup
unlicense
a2f78eb110e3bada9316726952741ac3
Example 1: interactplot
# Generate a random dataset with strong simple effects and an interaction n = 80 rs = np.random.RandomState(11) x1 = rs.randn(n) x2 = x1 / 5 + rs.randn(n) b0, b1, b2, b3 = .5, .25, -1, 2 y = b0 + b1 * x1 + b2 * x2 + b3 * x1 * x2 + rs.randn(n) df = pd.DataFrame(np.c_[x1, x2, y], columns=["x1", "x2", "y"]) # Show a scatterplot of the predictors with the estimated model surface sns.interactplot("x1", "x2", "y", df);
vmfiles/IPNB/Examples/b Graphics/30 Seaborn.ipynb
studentofdata/qcew
bsd-3-clause
fadb0db74a6c223dbff8e61b190afe29
Example 2: Correlation matrix heatmap
sns.set(context="paper", font="monospace") # Load the datset of correlations between cortical brain networks df = sns.load_dataset("brain_networks", header=[0, 1, 2], index_col=0) corrmat = df.corr() # Set up the matplotlib figure f, ax = plt.subplots( figsize=(12, 9) ) # Draw the heatmap using seaborn sns.heatmap(corrmat, vmax=.8, square=True) # Use matplotlib directly to emphasize known networks networks = corrmat.columns.get_level_values("network") for i, network in enumerate(networks): if i and network != networks[i - 1]: ax.axhline(len(networks) - i, c="w") ax.axvline(i, c="w") f.tight_layout()
vmfiles/IPNB/Examples/b Graphics/30 Seaborn.ipynb
studentofdata/qcew
bsd-3-clause
3078abc360c19906850930b95719f73f
Example 3: Linear regression with marginal distributions
sns.set(style="darkgrid", color_codes=True) tips = sns.load_dataset("tips") g = sns.jointplot("total_bill", "tip", data=tips, kind="reg", xlim=(0, 60), ylim=(0, 12), color="r", size=7)
vmfiles/IPNB/Examples/b Graphics/30 Seaborn.ipynb
studentofdata/qcew
bsd-3-clause
323870dac8a7f3511b199e26eaf5a9cb
Interactivity We repeat the above example, but now using mpld3 to provide pan & zoom interactivity. Note that this may not work if graphics have already been initialized.
# Seaborn + interactivity through mpld3 import mpld3 sns.set( style="darkgrid", color_codes=True ) tips = sns.load_dataset("tips") sns.jointplot( "total_bill", "tip", data=tips, kind="reg", xlim=(0, 60), ylim=(0, 12), color="r", size=7 ) mpld3.display()
vmfiles/IPNB/Examples/b Graphics/30 Seaborn.ipynb
studentofdata/qcew
bsd-3-clause
1fcc5adf54373ee5434b851166e1e492
Implement Preprocess Functions Normalize In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
def normalize(x): """ Normalize a list of sample image data in the range of 0 to 1 used min-max normalization : x: List of image data. The image shape is (32, 32, 3) : return: Numpy array of normalize data """ max_value = 255 min_value = 0 return (x - min_value) / (max_value - min_value) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_normalize(normalize)
image-classification/dlnd_image_classification.ipynb
postBG/DL_project
mit
94a01c616bb5f78df3b4a56b0d1b5aed
One-hot encode Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function. Hint: Don't reinvent the wheel.
from sklearn import preprocessing lb=preprocessing.LabelBinarizer() lb.fit(range(10)) def one_hot_encode(x): """ One hot encode a list of sample labels. Return a one-hot encoded vector for each label. : x: List of sample Labels : return: Numpy array of one-hot encoded labels """ return lb.transform(x) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_one_hot_encode(one_hot_encode)
image-classification/dlnd_image_classification.ipynb
postBG/DL_project
mit
cce237de070bfa302f02365f0b2a1584
Build the network For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project. Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction of layers, so it's easy to pick up. However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d. Let's begin! Input The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions * Implement neural_net_image_input * Return a TF Placeholder * Set the shape using image_shape with batch size set to None. * Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder. * Implement neural_net_label_input * Return a TF Placeholder * Set the shape using n_classes with batch size set to None. * Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder. * Implement neural_net_keep_prob_input * Return a TF Placeholder for dropout keep probability. * Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder. These names will be used at the end of the project to load your saved model. Note: None for shapes in TensorFlow allows for a dynamic size.
import tensorflow as tf def neural_net_image_input(image_shape): """ Return a Tensor for a bach of image input : image_shape: Shape of the images : return: Tensor for image input. """ shape = [x for x in image_shape] shape.insert(0, None) return tf.placeholder(tf.float32, shape=shape, name="x") def neural_net_label_input(n_classes): """ Return a Tensor for a batch of label input : n_classes: Number of classes : return: Tensor for label input. """ return tf.placeholder(tf.float32, shape=[None, n_classes], name="y") def neural_net_keep_prob_input(): """ Return a Tensor for keep probability : return: Tensor for keep probability. """ return tf.placeholder(tf.float32, name='keep_prob') """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tf.reset_default_graph() tests.test_nn_image_inputs(neural_net_image_input) tests.test_nn_label_inputs(neural_net_label_input) tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
image-classification/dlnd_image_classification.ipynb
postBG/DL_project
mit
8355faf284f01147472a540523a4f224
Convolution and Max Pooling Layer Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling: * Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor. * Apply a convolution to x_tensor using weight and conv_strides. * We recommend you use same padding, but you're welcome to use any padding. * Add bias * Add a nonlinear activation to the convolution. * Apply Max Pooling using pool_ksize and pool_strides. * We recommend you use same padding, but you're welcome to use any padding. Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides): """ Apply convolution then max pooling to x_tensor :param x_tensor: TensorFlow Tensor :param conv_num_outputs: Number of outputs for the convolutional layer :param conv_ksize: kernal size 2-D Tuple for the convolutional layer :param conv_strides: Stride 2-D Tuple for convolution :param pool_ksize: kernal size 2-D Tuple for pool :param pool_strides: Stride 2-D Tuple for pool : return: A tensor that represents convolution and max pooling of x_tensor """ x_tensor_shape = x_tensor.get_shape().as_list() weights = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], x_tensor_shape[-1], conv_num_outputs], stddev=0.05)) bias = tf.Variable(tf.truncated_normal([conv_num_outputs], stddev=0.05)) conv_layer = tf.nn.conv2d(x_tensor, weights, strides=[1, conv_strides[0], conv_strides[1], 1], padding='SAME') conv_layer = tf.nn.bias_add(conv_layer, bias=bias) conv_layer = tf.nn.relu(conv_layer) conv_layer = tf.nn.max_pool(conv_layer, ksize=[1, pool_ksize[0], pool_ksize[1], 1], strides=[1, pool_strides[0], pool_strides[1], 1], padding='SAME') return conv_layer """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_con_pool(conv2d_maxpool)
image-classification/dlnd_image_classification.ipynb
postBG/DL_project
mit
7eabb13ec564f133cc3112c6da302ccd
Fully-Connected Layer Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
def fully_conn(x_tensor, num_outputs): """ Apply a fully connected layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second dimension is num_outputs. """ x_shape = x_tensor.get_shape().as_list() weights = tf.Variable(tf.truncated_normal([x_shape[1], num_outputs], stddev=0.05)) bias = tf.Variable(tf.truncated_normal([num_outputs], stddev=0.05)) return tf.nn.relu(tf.add(tf.matmul(x_tensor, weights), bias)) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_fully_conn(fully_conn)
image-classification/dlnd_image_classification.ipynb
postBG/DL_project
mit
0922ed9fafae73ae2476b3cb8056e59b
Output Layer Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. Note: Activation, softmax, or cross entropy should not be applied to this.
def output(x_tensor, num_outputs): """ Apply a output layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second dimension is num_outputs. """ x_shape = x_tensor.get_shape().as_list() weights = tf.Variable(tf.truncated_normal([x_shape[1], num_outputs], stddev=0.05)) bias = tf.Variable(tf.truncated_normal([num_outputs], stddev=0.05)) return tf.add(tf.matmul(x_tensor, weights), bias) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_output(output)
image-classification/dlnd_image_classification.ipynb
postBG/DL_project
mit
c039d75cdfdcdc06a54c451b33342d60
Create Convolutional Model Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model: Apply 1, 2, or 3 Convolution and Max Pool layers Apply a Flatten Layer Apply 1, 2, or 3 Fully Connected Layers Apply an Output Layer Return the output Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
def conv_net(x, keep_prob): """ Create a convolutional neural network model : x: Placeholder tensor that holds image data. : keep_prob: Placeholder tensor that hold dropout keep probability. : return: Tensor that represents logits """ conv_output_depth = { 'layer1': 32, 'layer2': 64, 'layer3': 128 } conv_ksize = (3, 3) conv_strides = (1, 1) pool_ksize = (2, 2) pool_strides = (2, 2) # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers # Play around with different number of outputs, kernel size and stride # Function Definition from Above: # conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides) conv_layer1 = conv2d_maxpool(x, conv_output_depth['layer1'], conv_ksize, conv_strides, pool_ksize, pool_strides) conv_layer2 = conv2d_maxpool(conv_layer1, conv_output_depth['layer2'], conv_ksize, conv_strides, pool_ksize, pool_strides) conv_layer3 = conv2d_maxpool(conv_layer2, conv_output_depth['layer3'], conv_ksize, conv_strides, pool_ksize, pool_strides) # TODO: Apply a Flatten Layer # Function Definition from Above: # flatten(x_tensor) flattened_layer = flatten(conv_layer3) # TODO: Apply 1, 2, or 3 Fully Connected Layers # Play around with different number of outputs # Function Definition from Above: # fully_conn(x_tensor, num_outputs) fc_layer1 = fully_conn(flattened_layer, num_outputs=512) fc_layer1 = tf.nn.dropout(fc_layer1, keep_prob=keep_prob) fc_layer2 = fully_conn(fc_layer1, num_outputs=256) fc_layer2 = tf.nn.dropout(fc_layer2, keep_prob=keep_prob) fc_layer3 = fully_conn(fc_layer2, num_outputs=128) fc_layer3 = tf.nn.dropout(fc_layer3, keep_prob=keep_prob) # TODO: Apply an Output Layer # Set this to the number of classes # Function Definition from Above: # output(x_tensor, num_outputs) logits = output(fc_layer3, 10) # TODO: return output return logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ ############################## ## Build the Neural Network ## ############################## # Remove previous weights, bias, inputs, etc.. tf.reset_default_graph() # Inputs x = neural_net_image_input((32, 32, 3)) y = neural_net_label_input(10) keep_prob = neural_net_keep_prob_input() # Model logits = conv_net(x, keep_prob) # Name logits Tensor, so that is can be loaded from disk after training logits = tf.identity(logits, name='logits') # Loss and Optimizer cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y)) optimizer = tf.train.AdamOptimizer().minimize(cost) # Accuracy correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy') tests.test_conv_net(conv_net)
image-classification/dlnd_image_classification.ipynb
postBG/DL_project
mit
4e0d050b66882ccc6f38534041ed2d62
Show Stats Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
def print_stats(session, feature_batch, label_batch, cost, accuracy): """ Print information about loss and validation accuracy : session: Current TensorFlow session : feature_batch: Batch of Numpy image data : label_batch: Batch of Numpy label data : cost: TensorFlow cost function : accuracy: TensorFlow accuracy function """ loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0}) valid_accuracy = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0}) print('Traning Loss: {:>10.4f} Accuracy: {:.6f}'.format(loss, valid_accuracy))
image-classification/dlnd_image_classification.ipynb
postBG/DL_project
mit
097db5607f2c26158af309f3fbe73a3b
Generate a circuit that computes this function. To implement the logical operations we use standard verilog gates, which are available in mantle.verilog.gates.
import magma as m import mantle class VerilatorExample(m.Circuit): io = m.IO(a=m.In(m.Bit), b=m.In(m.Bit), c=m.In(m.Bit), d=m.Out(m.Bit)) io.d <= f(io.a, io.b, io.c) m.compile("build/VerilatorExample", VerilatorExample, "coreir-verilog", inline=True) %cat build/VerilatorExample.v
notebooks/advanced/verilator.ipynb
phanrahan/magmathon
mit
cd40acd757bc929f982dc572dbc22116
Next, generate a verilator test harness in C++ for the circuit. The test vectors are generated using the python function f. The verilator test bench compares the output of the simulator to those test vectors.
from itertools import product from fault import Tester tester = Tester(VerilatorExample) for a, b, c in product([0, 1], [0, 1], [0, 1]): tester.poke(VerilatorExample.a, a) tester.poke(VerilatorExample.b, b) tester.poke(VerilatorExample.c, c) tester.eval() tester.expect(VerilatorExample.d, f(a, b, c)) tester.print("done!!") tester.compile_and_run("verilator", directory="build") %cat build/VerilatorExample_driver.cpp
notebooks/advanced/verilator.ipynb
phanrahan/magmathon
mit
a7ee5908da981329c00a9c756321d7c0
Using fault, we can reuse the same tester (with the same testbench inputs/expectations) with a different backend, like the Python simulator.
tester.compile_and_run("python")
notebooks/advanced/verilator.ipynb
phanrahan/magmathon
mit
462750b6b4b4e6bbba49015b542975ca
1. Input Parameters The inputs that need to be provided to activate the radioactive option are: the list of selected radioactive isotopes, and the radioactive yield tables. The list of isotopes is declared in the yield_tables/decay_info.txt file and can be modified prior to any simulation. The radioactive yields are found (or need to be added) in the yield_tables/ folder. Each stable yield table can have its associated radioactive yield table: Massive and AGB stars Stable isotopes: table Radioactive isotopes: table_radio Type Ia supernovae Stable isotopes: sn1a_table Radioactive isotopes: sn1a_table_radio Neutron star mergers Stable isotopes: nsmerger_table Radioactive isotopes: nsmerger_table_radio Etc. Each enrichment source can be activated independently by providing its input radioactive yield table. The radioactive yield table file format needs to be identical to its stable counterpart. Warning: Radioactive isotopes will decay into stable isotopes. When using radioactive yields, please make sure that the stable yields do not already include the decayed isotopes. 2. Single Decay Channel (Default Option) If the radioactive isotopes you selected have only one decay channel, you can use the default decay option, which uses the following exponential law, $N_r(t)=N_r(t_0)\,\mathrm{exp}\left[\frac{-(t-t_0)}{\tau}\right],$ $\tau=\frac{T_{1/2}}{\mathrm{ln}(2)},$ where $t_0$ is the reference time at which the number of radioactive isotopes was equal to $N_r(t_0)$. $T_{1/2}$ is the half-life of the isotope, which needs to be specified in yield_tables/decay_info.txt. The decayed product will be added to the corresponding stable isotope, as defined in yield_tables/decay_info.txt. Example with Al-26 Below, a SYGMA simulation is run with no star formation to better isolate the decay process. Here we choose Al-26 as an example, which decays into Mg-26.
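As a quick sanity check of the decay law above, here is a small standalone sketch (independent of NuPyCEE) that evaluates $N_r(t)$ for 1 M$_\odot$ of Al-26; the half-life value used below ($\sim 7.17\times10^5$ yr) is an assumed value for illustration, while the value actually used by the simulation comes from yield_tables/decay_info.txt.

```python
import numpy as np

# Single-channel decay law: N_r(t) = N_r(t0) * exp(-(t - t0) / tau),
# with tau = T_half / ln(2).
T_half = 7.17e5                     # assumed Al-26 half-life [yr], for illustration
tau = T_half / np.log(2.0)

def n_radio(n0, dt):
    """Radioactive mass left after a time interval dt [yr], starting from n0."""
    return n0 * np.exp(-dt / tau)

print(n_radio(1.0, T_half))         # ~0.5 Msun of Al-26 remains after one half-life
print(1.0 - n_radio(1.0, T_half))   # ~0.5 Msun has decayed into Mg-26
```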
# Number of timesteps in the simulaton. # See https://github.com/NuGrid/NuPyCEE/blob/master/DOC/Capabilities/Timesteps_size_management.ipynb special_timesteps = -1 nb_dt = 100 tend = 2.0e6 dt = tend / float(nb_dt) # No star formation. no_sf = True # Dummy neutron star merger yields to activate the radioactive option. nsmerger_table_radio = 'yield_tables/extra_table_radio_dummy.txt' # Add 1 Msun of radioactive Al-26 in the gas. # The indexes of this array reflect the order seen in the yield_tables/decay_file.txt file # Index 0, 1, 2 --> Al-26, K-40, U-238 ism_ini_radio = [1.0, 0.0, 0.0]
DOC/Capabilities/Including_radioactive_isotopes.ipynb
NuGrid/NuPyCEE
bsd-3-clause
d2e476df2843e3caaacfcd54e1d2ea55
Run SYGMA
# Run SYGMA (or in this case, the decay process) s = sygma.sygma(iniZ=0.02, no_sf=no_sf, ism_ini_radio=ism_ini_radio,\ special_timesteps=special_timesteps, tend=tend, dt=dt,\ decay_file='yield_tables/decay_file.txt', nsmerger_table_radio=nsmerger_table_radio) # Get the Al-26 (radioactive) and Mg-26 (stable) indexes in the gas arrays i_Al_26 = s.radio_iso.index('Al-26') i_Mg_26 = s.history.isotopes.index('Mg-26') # Extract the evolution of these isotopes as a function of time Al_26 = np.zeros(s.nb_timesteps+1) Mg_26 = np.zeros(s.nb_timesteps+1) for i_t in range(s.nb_timesteps+1): Al_26[i_t] = s.ymgal_radio[i_t][i_Al_26] Mg_26[i_t] = s.ymgal[i_t][i_Mg_26]
DOC/Capabilities/Including_radioactive_isotopes.ipynb
NuGrid/NuPyCEE
bsd-3-clause
c510d35b75a16f684f8afb74f1e9cfd9
Plot results
# Plot the evolution of Al-26 and Mg-26 %matplotlib nbagg plt.figure(figsize=(8,4.5)) plt.plot( np.array(s.history.age)/1e6, Al_26, '--b', label='Al-26' ) plt.plot( np.array(s.history.age)/1e6, Mg_26, '-r', label='Mg-26' ) plt.plot([0,2.0], [0.5,0.5], ':k') plt.plot([0.717,0.717], [0,1], ':k') # Labels and fontsizes plt.xlabel('Time [Myr]', fontsize=16) plt.ylabel('Mass of isotope [M$_\odot$]', fontsize=16) plt.legend(fontsize=14, loc='center left', bbox_to_anchor=(1, 0.5)) plt.subplots_adjust(top=0.96) plt.subplots_adjust(bottom=0.15) plt.subplots_adjust(right=0.75) matplotlib.rcParams.update({'font.size': 14.0})
DOC/Capabilities/Including_radioactive_isotopes.ipynb
NuGrid/NuPyCEE
bsd-3-clause
21fbe078e5b8d9052399311e76099647
3. Multiple Decay Channels If the radioactive isotopes you selected have more than one decay channel, you need to use the provided decay module. This option can be activated by adding use_decay_module=True in the list of parameters when creating an instance of SYGMA and OMEGA. When using the decay module, the yield_tables/decay_file.txt file still needs to be provided as an input to define which radioactive isotopes are selected for the calculation. Example with K-40 Below we still run a SYGMA simulation with no star formation to better isolate the decay process. A fraction of K-40 decays into Ca-40, and another fraction decays into Ar-40. Run SYGMA
# Add 1 Msun of radioactive K-40 in the gas. # The indexes of this array reflect the order seen in the yield_tables/decay_file.txt file # Index 0, 1, 2 --> Al-26, K-40, U-238 ism_ini_radio = [0.0, 1.0, 0.0] # Number of timesteps in the simulaton. # See https://github.com/NuGrid/NuPyCEE/blob/master/DOC/Capabilities/Timesteps_size_management.ipynb special_timesteps = -1 nb_dt = 100 tend = 5.0e9 dt = tend / float(nb_dt) # Run SYGMA (or in this case, the decay process) # with the decay module s = sygma.sygma(iniZ=0.0, sfr=sfr, starbursts=starbursts, ism_ini_radio=ism_ini_radio,\ special_timesteps=special_timesteps, tend=tend, dt=dt,\ decay_file='yield_tables/decay_file.txt', nsmerger_table_radio=nsmerger_table_radio,\ use_decay_module=True, radio_refinement=1) # Get the K-40 (radioactive) and Ca-40 and Ar-40 (stable) indexes in the gas arrays i_K_40 = s.radio_iso.index('K-40') i_Ca_40 = s.history.isotopes.index('Ca-40') i_Ar_40 = s.history.isotopes.index('Ar-40') # Extract the evolution of these isotopes as a function of time K_40 = np.zeros(s.nb_timesteps+1) Ca_40 = np.zeros(s.nb_timesteps+1) Ar_40 = np.zeros(s.nb_timesteps+1) for i_t in range(s.nb_timesteps+1): K_40[i_t] = s.ymgal_radio[i_t][i_K_40] Ca_40[i_t] = s.ymgal[i_t][i_Ca_40] Ar_40[i_t] = s.ymgal[i_t][i_Ar_40] # Plot the evolution of Al-26 and Mg-26 %matplotlib nbagg plt.figure(figsize=(8,4.5)) plt.plot( np.array(s.history.age)/1e6, K_40, '--b', label='K-40' ) plt.plot( np.array(s.history.age)/1e6, Ca_40, '-r', label='Ca-40' ) plt.plot( np.array(s.history.age)/1e6, Ar_40, '-g', label='Ar-40' ) # Labels and fontsizes plt.xlabel('Time [Myr]', fontsize=16) plt.ylabel('Mass of isotope [M$_\odot$]', fontsize=16) plt.legend(fontsize=14, loc='center left', bbox_to_anchor=(1, 0.5)) plt.subplots_adjust(top=0.96) plt.subplots_adjust(bottom=0.15) plt.subplots_adjust(right=0.75) matplotlib.rcParams.update({'font.size': 14.0})
DOC/Capabilities/Including_radioactive_isotopes.ipynb
NuGrid/NuPyCEE
bsd-3-clause
088a5a0f601c3c3fd7544e21ba199899
Example with U-238
# Add 1 Msun of radioactive U-238 in the gas. # The indexes of this array reflect the order seen in the yield_tables/decay_file.txt file # Index 0, 1, 2 --> Al-26, K-40, U-238 ism_ini_radio = [0.0, 0.0, 1.0] # Number of timesteps in the simulaton. # See https://github.com/NuGrid/NuPyCEE/blob/master/DOC/Capabilities/Timesteps_size_management.ipynb special_timesteps = -1 nb_dt = 100 tend = 5.0e9 dt = tend / float(nb_dt) # Run SYGMA (or in this case, the decay process) # with the decay module s = sygma.sygma(iniZ=0.0, sfr=sfr, starbursts=starbursts, ism_ini_radio=ism_ini_radio,\ special_timesteps=special_timesteps, tend=tend, dt=dt,\ decay_file='yield_tables/decay_file.txt', nsmerger_table_radio=nsmerger_table_radio,\ use_decay_module=True, radio_refinement=1)
DOC/Capabilities/Including_radioactive_isotopes.ipynb
NuGrid/NuPyCEE
bsd-3-clause
495370cb15b62c44306217bb9a951965
In the case of U-238, there are many isotopes resulting from the multiple decay channels. Those new radioactive isotopes are automatically added to the list of isotopes in NuPyCEE.
print(s.radio_iso)
DOC/Capabilities/Including_radioactive_isotopes.ipynb
NuGrid/NuPyCEE
bsd-3-clause
95476209c292a01d04ba35021b3aad9f
Basics of MapReduce
from IPython.display import HTML HTML('<iframe width="798" height="449" src="https://www.youtube.com/embed/gI4HN0JhPmo" frameborder="0" allowfullscreen></iframe>')
1-uIDS-courseNotes/l5-MapReduce.ipynb
tanle8/Data-Science
mit
4fc42f578105f627f09095b3ddf3d5c8
Quiz: Couting Words Serially ```Python import logging import sys import string from util import logfile logging.basicConfig(filename=logfile, format='%(message)s', level=logging.INFO, filemode='w') def word_count(): # For this exercise, write a program that serially counts the number of occurrences # of each word in the book Alice in Wonderland. # # The text of Alice in Wonderland will be fed into your program line-by-line. # Your program needs to take each line and do the following: # 1) Tokenize the line into string tokens by whitespace # Example: "Hello, World!" should be converted into "Hello," and "World!" # (This part has been done for you.) # # 2) Remove all punctuation # Example: "Hello," and "World!" should be converted into "Hello" and "World" # # 3) Make all letters lowercase # Example: "Hello" and "World" should be converted to "hello" and "world" # # Store the the number of times that a word appears in Alice in Wonderland # in the word_counts dictionary, and then print (don't return) that dictionary # # In this exercise, print statements will be considered your final output. Because # of this, printing a debug statement will cause the grader to break. Instead, # you can use the logging module which we've configured for you. # # For example: # logging.info("My debugging message") # # The logging module can be used to give you more control over your # debugging or other messages than you can get by printing them. Messages # logged via the logger we configured will be saved to a # file. If you click "Test Run", then you will see the contents of that file # once your program has finished running. # # The logging module also has other capabilities; see # https://docs.python.org/2/library/logging.html # for more information. # Create an empty dictionary to store word/frequency pair as key/value word_counts = {} for line in sys.stdin: # 2) Remove all punctuation # Example: "Hello," and "World!" should be converted into "Hello" and "World" # 3) Make all letters lowercase # Example: "Hello" and "World" should be converted to "hello" and "world" data = line.strip().split(" ") # Your code here # With each word in the list, remove any punctuation and turn it into lowercase. # Check if the word appears or not, if yes, +1 to key value otherwise assigns its # value to 1. for i in data: key = i.translate(string.maketrans("",""), string.punctuation).lower() if key in word_counts.keys(): word_counts[key] += 1 else: word_counts[key] = 1 print word_counts word_count() ``` Counting Words in MapReduce
from IPython.display import HTML HTML('<iframe width="798" height="449" src="https://www.youtube.com/embed/onseMon9zqA" frameborder="0" allowfullscreen></iframe>') from IPython.display import HTML HTML('<iframe width="798" height="449" src="https://www.youtube.com/embed/_q6098sNqpo" frameborder="0" allowfullscreen></iframe>')
1-uIDS-courseNotes/l5-MapReduce.ipynb
tanle8/Data-Science
mit
1b760674fd31730a293abf9ed5141b91
Mapper
from IPython.display import HTML HTML('<iframe width="798" height="449" src="https://www.youtube.com/embed/mPYxFC7DI28" frameborder="0" allowfullscreen></iframe>')
1-uIDS-courseNotes/l5-MapReduce.ipynb
tanle8/Data-Science
mit
2787fd0c5c514ef47a4579eb02a51973
Reducer
from IPython.display import HTML HTML('<iframe width="798" height="449" src="https://www.youtube.com/embed/bkhuEG0D2HM" frameborder="0" allowfullscreen></iframe>')
1-uIDS-courseNotes/l5-MapReduce.ipynb
tanle8/Data-Science
mit
bf095c953230fe0e20013a107a725346
Quiz: Mapper And Reducer With Aadhaar Data aadhaar_genereated_mapper.py ```Python import sys import string import logging from util import mapper_logfile logging.basicConfig(filename=mapper_logfile, format='%(message)s', level=logging.INFO, filemode='w') def mapper(): #Also make sure to fill out the reducer code before clicking "Test Run" or "Submit". #Each line will be a comma-separated list of values. The #header row WILL be included. Tokenize each row using the #commas, and emit (i.e. print) a key-value pair containing the #district (not state) and Aadhaar generated, separated by a tab. #Skip rows without the correct number of tokens and also skip #the header row. #You can see a copy of the the input Aadhaar data #in the link below: #https://www.dropbox.com/s/vn8t4uulbsfmalo/aadhaar_data.csv #Since you are printing the output of your program, printing a debug #statement will interfere with the operation of the grader. Instead, #use the logging module, which we've configured to log to a file printed #when you click "Test Run". For example: #logging.info("My debugging message") # #Note that, unlike print, logging.info will take only a single argument. #So logging.info("my message") will work, but logging.info("my","message") will not. for line in sys.stdin: #your code here # tokenize the line of data data = line.strip().split(",") if len(data) != 12 or data[0] == 'Registrar': continue print "{0}\t{1}".format(data[3],data[8]) mapper() ``` aadhaar_genereated_reducer.py ```Python import sys import logging from util import reducer_logfile logging.basicConfig(filename=reducer_logfile, format='%(message)s', level=logging.INFO, filemode='w') def reducer(): #Also make sure to fill out the mapper code before clicking "Test Run" or "Submit". #Each line will be a key-value pair separated by a tab character. #Print out each key once, along with the total number of Aadhaar #generated, separated by a tab. Make sure each key-value pair is #formatted correctly! Here's a sample final key-value pair: 'Gujarat\t5.0' #Since you are printing the output of your program, printing a debug #statement will interfere with the operation of the grader. Instead, #use the logging module, which we've configured to log to a file printed #when you click "Test Run". For example: #logging.info("My debugging message") #Note that, unlike print, logging.info will take only a single argument. #So logging.info("my message") will work, but logging.info("my","message") will not. # Initialize values aadhaar_generated = 0 old_key = None for line in sys.stdin: # your code here data = line.strip().split("\t") if len(data) != 2: continue this_key, count = data if old_key and old_key != this_key: print "{0}\t{1}".format(old_key, aadhaar_generated) aadhaar_generated = 0 old_key = this_key aadhaar_generated += float(count) if old_key != None: print "{0}\t{1}".format(old_key, aadhaar_generated) reducer() ``` MapReduce Ecosystem MapReduce programming model Hadoop is a very common open source implementation of MapReduce. Hadoop couples the map reduce programming model with a distributed file system. In order to more easily allow programmers to complete complicated tasks using the processing power of Hadoop, there are many infrastructures out there that either built on top of Hadoop or allow data access via Hadoop. Two of the most common are Hive and Pig. 
But there are a bunch of them out there, for example: Mahout for machine learning, Giraph for graph analysis, and Cassandra, a hybrid of a key-value and a column-oriented database. Hive was initially developed by Facebook, and one of its biggest selling points is that it allows running MapReduce jobs through a SQL-like querying language, called the Hive Query Language. Pig was originally developed at Yahoo! and excels in some areas Hive does not. Pig jobs are written in a procedural language called Pig Latin, which wins developers a bunch of things: the ability to be more explicit about the execution of the data processing (which is not possible in a declarative, SQL-like language), and the ability to split your data pipeline.
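Since Hadoop Streaming mappers and reducers simply read from stdin and write to stdout, the two scripts above can be sanity-checked locally by chaining them with a sort step in between, which emulates Hadoop's shuffle phase. The sketch below is only illustrative: the names mapper.py, reducer.py and aadhaar_data.csv are placeholders for wherever the scripts and a small data sample are saved.

```python
import subprocess

# Hedged local test of the streaming mapper/reducer above; all file names are placeholders.
# The shell pipeline emulates Hadoop's map -> shuffle/sort -> reduce flow on a small sample.
pipeline = "cat aadhaar_data.csv | python mapper.py | sort | python reducer.py"
result = subprocess.run(pipeline, shell=True, capture_output=True, text=True)
print(result.stdout)
```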
# Recap from IPython.display import HTML HTML('<iframe width="798" height="449" src="https://www.youtube.com/embed/Pl68U2iGtyI" frameborder="0" allowfullscreen></iframe>')
1-uIDS-courseNotes/l5-MapReduce.ipynb
tanle8/Data-Science
mit
4e64bbb6a9f2d751ec24b1a75f32e726
Expected Output: <table> <tr> <td > **W1** </td> <td > [[ 1.63535156 -0.62320365 -0.53718766] [-1.07799357 0.85639907 -2.29470142]] </td> </tr> <tr> <td > **b1** </td> <td > [[ 1.74604067] [-0.75184921]] </td> </tr> <tr> <td > **W2** </td> <td > [[ 0.32171798 -0.25467393 1.46902454] [-2.05617317 -0.31554548 -0.3756023 ] [ 1.1404819 -1.09976462 -0.1612551 ]] </td> </tr> <tr> <td > **b2** </td> <td > [[-0.88020257] [ 0.02561572] [ 0.57539477]] </td> </tr> </table> A variant of this is Stochastic Gradient Descent (SGD), which is equivalent to mini-batch gradient descent where each mini-batch has just 1 example. The update rule that you have just implemented does not change. What changes is that you would be computing gradients on just one training example at a time, rather than on the whole training set. The code examples below illustrate the difference between stochastic gradient descent and (batch) gradient descent. (Batch) Gradient Descent: ``` python X = data_input Y = labels parameters = initialize_parameters(layers_dims) for i in range(0, num_iterations): # Forward propagation a, caches = forward_propagation(X, parameters) # Compute cost. cost = compute_cost(a, Y) # Backward propagation. grads = backward_propagation(a, caches, parameters) # Update parameters. parameters = update_parameters(parameters, grads) ``` Stochastic Gradient Descent: python X = data_input Y = labels parameters = initialize_parameters(layers_dims) for i in range(0, num_iterations): for j in range(0, m): # Forward propagation a, caches = forward_propagation(X[:,j], parameters) # Compute cost cost = compute_cost(a, Y[:,j]) # Backward propagation grads = backward_propagation(a, caches, parameters) # Update parameters. parameters = update_parameters(parameters, grads) In Stochastic Gradient Descent, you use only 1 training example before updating the gradients. When the training set is large, SGD can be faster. But the parameters will "oscillate" toward the minimum rather than converge smoothly. Here is an illustration of this: <img src="images/kiank_sgd.png" style="width:750px;height:250px;"> <caption><center> <u> <font color='purple'> Figure 1 </u><font color='purple'> : SGD vs GD<br> "+" denotes a minimum of the cost. SGD leads to many oscillations to reach convergence. But each step is a lot faster to compute for SGD than for GD, as it uses only one training example (vs. the whole batch for GD). </center></caption> Note also that implementing SGD requires 3 for-loops in total: 1. Over the number of iterations 2. Over the $m$ training examples 3. Over the layers (to update all parameters, from $(W^{[1]},b^{[1]})$ to $(W^{[L]},b^{[L]})$) In practice, you'll often get faster results if you do not use neither the whole training set, nor only one training example, to perform each update. Mini-batch gradient descent uses an intermediate number of examples for each step. With mini-batch gradient descent, you loop over the mini-batches instead of looping over individual training examples. <img src="images/kiank_minibatch.png" style="width:750px;height:250px;"> <caption><center> <u> <font color='purple'> Figure 2 </u>: <font color='purple'> SGD vs Mini-Batch GD<br> "+" denotes a minimum of the cost. Using mini-batches in your optimization algorithm often leads to faster optimization. </center></caption> <font color='blue'> What you should remember: - The difference between gradient descent, mini-batch gradient descent and stochastic gradient descent is the number of examples you use to perform one update step. 
- You have to tune a learning rate hyperparameter $\alpha$. - With a well-tuned mini-batch size, it usually outperforms either gradient descent or stochastic gradient descent (particularly when the training set is large). 2 - Mini-Batch Gradient descent Let's learn how to build mini-batches from the training set (X, Y). There are two steps: - Shuffle: Create a shuffled version of the training set (X, Y) as shown below. Each column of X and Y represents a training example. Note that the random shuffling is done synchronously between X and Y, such that after the shuffling the $i^{th}$ column of X is the example corresponding to the $i^{th}$ label in Y. The shuffling step ensures that examples will be split randomly into different mini-batches. <img src="images/kiank_shuffle.png" style="width:550px;height:300px;"> Partition: Partition the shuffled (X, Y) into mini-batches of size mini_batch_size (here 64). Note that the number of training examples is not always divisible by mini_batch_size. The last mini-batch might be smaller, but you don't need to worry about this. When the final mini-batch is smaller than the full mini_batch_size, it will look like this: <img src="images/kiank_partition.png" style="width:550px;height:300px;"> Exercise: Implement random_mini_batches. We coded the shuffling part for you. To help you with the partitioning step, we give you the following code that selects the indexes for the $1^{st}$ and $2^{nd}$ mini-batches: python first_mini_batch_X = shuffled_X[:, 0 : mini_batch_size] second_mini_batch_X = shuffled_X[:, mini_batch_size : 2 * mini_batch_size] ... Note that the last mini-batch might end up smaller than mini_batch_size=64. Let $\lfloor s \rfloor$ represent $s$ rounded down to the nearest integer (this is math.floor(s) in Python). If the total number of examples is not a multiple of mini_batch_size=64 then there will be $\lfloor \frac{m}{mini_batch_size}\rfloor$ mini-batches with a full 64 examples, and the number of examples in the final mini-batch will be $(m - mini_batch_size \times \lfloor \frac{m}{mini_batch_size}\rfloor)$.
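Before the graded cell below, here is a quick numeric check of the partition arithmetic described above; the numbers are made up for illustration and are not part of the assignment.

```python
import math

# Illustrative numbers only: with m = 148 examples and mini-batches of 64,
# there are floor(148/64) = 2 full mini-batches and a final one of 148 - 2*64 = 20.
m, mini_batch_size = 148, 64
num_complete = math.floor(m / mini_batch_size)
last_size = m - mini_batch_size * num_complete
print(num_complete, last_size)   # -> 2 20
```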
# GRADED FUNCTION: random_mini_batches def random_mini_batches(X, Y, mini_batch_size = 64, seed = 0): """ Creates a list of random minibatches from (X, Y) Arguments: X -- input data, of shape (input size, number of examples) Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples) mini_batch_size -- size of the mini-batches, integer Returns: mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y) """ np.random.seed(seed) # To make your "random" minibatches the same as ours m = X.shape[1] # number of training examples mini_batches = [] # Step 1: Shuffle (X, Y) permutation = list(np.random.permutation(m)) shuffled_X = X[:, permutation] shuffled_Y = Y[:, permutation].reshape((1,m)) # Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case. num_complete_minibatches = math.floor(m/mini_batch_size) # number of mini batches of size mini_batch_size in your partitionning for k in range(0, num_complete_minibatches): ### START CODE HERE ### (approx. 2 lines) mini_batch_X = shuffled_X[:,k * mini_batch_size:(k + 1) * mini_batch_size] mini_batch_Y = shuffled_Y[:,k * mini_batch_size:(k + 1) * mini_batch_size] ### END CODE HERE ### mini_batch = (mini_batch_X, mini_batch_Y) mini_batches.append(mini_batch) # Handling the end case (last mini-batch < mini_batch_size) if m % mini_batch_size != 0: #end = m - mini_batch_size * math.floor(m / mini_batch_size) ### START CODE HERE ### (approx. 2 lines) mini_batch_X = shuffled_X[:,num_complete_minibatches * mini_batch_size:] mini_batch_Y = shuffled_Y[:,num_complete_minibatches * mini_batch_size:] ### END CODE HERE ### mini_batch = (mini_batch_X, mini_batch_Y) mini_batches.append(mini_batch) return mini_batches X_assess, Y_assess, mini_batch_size = random_mini_batches_test_case() mini_batches = random_mini_batches(X_assess, Y_assess, mini_batch_size) print ("shape of the 1st mini_batch_X: " + str(mini_batches[0][0].shape)) print ("shape of the 2nd mini_batch_X: " + str(mini_batches[1][0].shape)) print ("shape of the 3rd mini_batch_X: " + str(mini_batches[2][0].shape)) print ("shape of the 1st mini_batch_Y: " + str(mini_batches[0][1].shape)) print ("shape of the 2nd mini_batch_Y: " + str(mini_batches[1][1].shape)) print ("shape of the 3rd mini_batch_Y: " + str(mini_batches[2][1].shape)) print ("mini batch sanity check: " + str(mini_batches[0][0][0][0:3]))
deep-learnining-specialization/2. improving deep neural networks/week2/Optimization methods.ipynb
diegocavalca/Studies
cc0-1.0
ff99efe5d6f27ead9deed56d9076a140
Expected Output: <table style="width:40%"> <tr> <td > **v["dW1"]** </td> <td > [[ 0. 0. 0.] [ 0. 0. 0.]] </td> </tr> <tr> <td > **v["db1"]** </td> <td > [[ 0.] [ 0.]] </td> </tr> <tr> <td > **v["dW2"]** </td> <td > [[ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.]] </td> </tr> <tr> <td > **v["db2"]** </td> <td > [[ 0.] [ 0.] [ 0.]] </td> </tr> <tr> <td > **s["dW1"]** </td> <td > [[ 0. 0. 0.] [ 0. 0. 0.]] </td> </tr> <tr> <td > **s["db1"]** </td> <td > [[ 0.] [ 0.]] </td> </tr> <tr> <td > **s["dW2"]** </td> <td > [[ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.]] </td> </tr> <tr> <td > **s["db2"]** </td> <td > [[ 0.] [ 0.] [ 0.]] </td> </tr> </table> Exercise: Now, implement the parameters update with Adam. Recall the general update rule is, for $l = 1, ..., L$: $$\begin{cases} v_{W^{[l]}} = \beta_1 v_{W^{[l]}} + (1 - \beta_1) \frac{\partial J }{ \partial W^{[l]} } \\ v^{corrected}_{W^{[l]}} = \frac{v_{W^{[l]}}}{1 - (\beta_1)^t} \\ s_{W^{[l]}} = \beta_2 s_{W^{[l]}} + (1 - \beta_2) (\frac{\partial J }{\partial W^{[l]} })^2 \\ s^{corrected}_{W^{[l]}} = \frac{s_{W^{[l]}}}{1 - (\beta_2)^t} \\ W^{[l]} = W^{[l]} - \alpha \frac{v^{corrected}_{W^{[l]}}}{\sqrt{s^{corrected}_{W^{[l]}}}+\varepsilon} \end{cases}$$ Note that the iterator l starts at 0 in the for loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. You need to shift l to l+1 when coding.
# GRADED FUNCTION: update_parameters_with_adam def update_parameters_with_adam(parameters, grads, v, s, t, learning_rate = 0.01, beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8): """ Update parameters using Adam Arguments: parameters -- python dictionary containing your parameters: parameters['W' + str(l)] = Wl parameters['b' + str(l)] = bl grads -- python dictionary containing your gradients for each parameters: grads['dW' + str(l)] = dWl grads['db' + str(l)] = dbl v -- Adam variable, moving average of the first gradient, python dictionary s -- Adam variable, moving average of the squared gradient, python dictionary learning_rate -- the learning rate, scalar. beta1 -- Exponential decay hyperparameter for the first moment estimates beta2 -- Exponential decay hyperparameter for the second moment estimates epsilon -- hyperparameter preventing division by zero in Adam updates Returns: parameters -- python dictionary containing your updated parameters v -- Adam variable, moving average of the first gradient, python dictionary s -- Adam variable, moving average of the squared gradient, python dictionary """ L = len(parameters) // 2 # number of layers in the neural networks v_corrected = {} # Initializing first moment estimate, python dictionary s_corrected = {} # Initializing second moment estimate, python dictionary # Perform Adam update on all parameters for l in range(L): # Moving average of the gradients. Inputs: "v, grads, beta1". Output: "v". ### START CODE HERE ### (approx. 2 lines) v["dW" + str(l+1)] = beta1 * v["dW" + str(l+1)] + (1 - beta1) * grads["dW" + str(l+1)] v["db" + str(l+1)] = beta1 * v["db" + str(l+1)] + (1 - beta1) * grads["db" + str(l+1)] ### END CODE HERE ### # Compute bias-corrected first moment estimate. Inputs: "v, beta1, t". Output: "v_corrected". ### START CODE HERE ### (approx. 2 lines) v_corrected["dW" + str(l+1)] = v["dW" + str(l+1)]/(1 - np.power(beta1, t)) v_corrected["db" + str(l+1)] = v["db" + str(l+1)]/(1 - np.power(beta1, t)) ### END CODE HERE ### # Moving average of the squared gradients. Inputs: "s, grads, beta2". Output: "s". ### START CODE HERE ### (approx. 2 lines) s["dW" + str(l+1)] = beta2 * s["dW" + str(l+1)] + (1 - beta2) * np.square(grads["dW" + str(l+1)]) s["db" + str(l+1)] = beta2 * s["db" + str(l+1)] + (1 - beta2) * np.square(grads["db" + str(l+1)]) ### END CODE HERE ### # Compute bias-corrected second raw moment estimate. Inputs: "s, beta2, t". Output: "s_corrected". ### START CODE HERE ### (approx. 2 lines) s_corrected["dW" + str(l+1)] = s["dW" + str(l+1)]/(1 - np.power(beta2, t)) s_corrected["db" + str(l+1)] = s["db" + str(l+1)]/(1 - np.power(beta2, t)) ### END CODE HERE ### # Update parameters. Inputs: "parameters, learning_rate, v_corrected, s_corrected, epsilon". Output: "parameters". ### START CODE HERE ### (approx. 
2 lines) parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * v_corrected["dW" + str(l+1)] / np.sqrt(s_corrected["dW" + str(l+1)] + epsilon) parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * v_corrected["db" + str(l+1)] / np.sqrt(s_corrected["db" + str(l+1)] + epsilon) ### END CODE HERE ### return parameters, v, s parameters, grads, v, s = update_parameters_with_adam_test_case() parameters, v, s = update_parameters_with_adam(parameters, grads, v, s, t = 2) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) print("v[\"dW1\"] = " + str(v["dW1"])) print("v[\"db1\"] = " + str(v["db1"])) print("v[\"dW2\"] = " + str(v["dW2"])) print("v[\"db2\"] = " + str(v["db2"])) print("s[\"dW1\"] = " + str(s["dW1"])) print("s[\"db1\"] = " + str(s["db1"])) print("s[\"dW2\"] = " + str(s["dW2"])) print("s[\"db2\"] = " + str(s["db2"]))
deep-learnining-specialization/2. improving deep neural networks/week2/Optimization methods.ipynb
diegocavalca/Studies
cc0-1.0
290dcbb997b2274e42ca9126abff8ae1
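As a side note on the bias-correction terms used in the Adam update above: early in training the raw moving averages are biased toward zero, and dividing by $(1 - \beta_1^t)$ removes that bias. The tiny scalar illustration below is not part of the assignment.

```python
# With beta1 = 0.9 and a constant gradient of 1.0, the raw moving average v starts
# far below the true gradient; the bias-corrected value recovers 1.0 from the first step.
beta1, g = 0.9, 1.0
v = 0.0
for t in range(1, 4):
    v = beta1 * v + (1 - beta1) * g
    v_corrected = v / (1 - beta1 ** t)
    print(t, round(v, 3), round(v_corrected, 3))   # v_corrected is 1.0 at every t
```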
Filter for stoichiometric compounds only:
def is_stoichiometric(composition): return np.all(np.mod(list(composition.values()), 1) == 0) stoichiometric_compositions = [c for c in compositions if is_stoichiometric(c)] print("Number of stoichiometric compositions: {}".format(len(stoichiometric_compositions))) ternaries = set(c.formula for c in stoichiometric_compositions) print("Number of unique stoichiometric compositions: {}".format(len(ternaries))) data_stoichiometric = [x for x in data if is_stoichiometric(Composition(x[2]))] from collections import Counter struct_type_freq = Counter(x[3] for x in data_stoichiometric if x[3] != '') plt.loglog(range(1, len(struct_type_freq)+1), sorted(struct_type_freq.values(), reverse = True), 'o') plt.xlabel("Structure Type") plt.ylabel("Structure Type Frequency") plt.title("Distribution of Frequencies of Structure Types") sorted(struct_type_freq.items(), key = lambda x: x[1], reverse = True) uniq_phases = set() for row in data_stoichiometric: spacegroup, formula, struct_type = row[1:4] phase = (spacegroup, Composition(formula).formula, struct_type) uniq_phases.add(phase) uniq_struct_type_freq = Counter(x[2] for x in uniq_phases if x[2] != '') uniq_struct_type_freq_sorted = sorted(uniq_struct_type_freq.items(), key = lambda x: x[1], reverse = True) plt.loglog(range(1, len(uniq_struct_type_freq_sorted)+1), [x[1] for x in uniq_struct_type_freq_sorted], 'o') plt.xlabel("Structure Type") plt.ylabel("Structure Type Frequency") plt.title("Distribution of Frequencies of Structure Types") uniq_struct_type_freq_sorted for struct_type,freq in uniq_struct_type_freq_sorted[:10]: print("{} : {}".format(struct_type, freq)) fffs = [p[1] for p in uniq_phases if p[2] == struct_type] fmt = " ".join(["{:14}"]*5) print(fmt.format(*fffs[0:5])) print(fmt.format(*fffs[5:10])) print(fmt.format(*fffs[10:15])) print(fmt.format(*fffs[15:20]))
notebooks/old_ICSD_Notebooks/Understanding ICSD data.ipynb
3juholee/materialproject_ml
mit
dadaddc73b9cf2f2d1916b29319d0154
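As a quick sanity check of the is_stoichiometric helper above, it can be fed hand-built compositions. The two formulas below are illustrative only (not taken from the ICSD data) and assume Composition is the pymatgen class already used in the cell above.

```python
# Illustrative compositions only; expected behaviour, assuming pymatgen parses these formulas.
print(is_stoichiometric(Composition("Fe2O3")))     # integer amounts -> expected True
print(is_stoichiometric(Composition("Fe1.95O3")))  # fractional occupancy -> expected False
```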
3. Calculate the basic descriptive statistics on the data
df.mean() df.median() #range df["Exposure"].max() - df["Exposure"].min() #range df["Mortality"].max() - df["Mortality"].min() df.std() df.corr()
class7/donow/benzaquen_mercy_donow_7.ipynb
ledeprogram/algorithms
gpl-3.0
6f4373f9fbcce28e3081b94bfe14e006
4. Find a reasonable threshold to say exposure is high and recode the data
#IQR IQR= df['Exposure'].quantile(q=0.75)- df['Exposure'].quantile(q=0.25)
class7/donow/benzaquen_mercy_donow_7.ipynb
ledeprogram/algorithms
gpl-3.0
5fa2a0e527d29ee840e3efac81e63136
UAL = (IQR * 1.5) + Q3 and LAL = Q1 - (IQR * 1.5). Anything outside of UAL and LAL is an outlier.
Q1= df['Exposure'].quantile(q=0.25) #1st Quartile Q1 Q2= df['Exposure'].quantile(q=0.5) #2nd Quartile (Median) Q3= df['Exposure'].quantile(q=0.75) #3rd Quartile UAL= (IQR * 1.5) +Q3 UAL LAL= Q1- (IQR * 1.5) LAL
class7/donow/benzaquen_mercy_donow_7.ipynb
ledeprogram/algorithms
gpl-3.0
2fa0348906a8e21aca5aace1b53148fa
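To actually recode the data as step 4 asks, one hedged option is to flag observations above the upper fence as high exposure. The column name HighExposure is illustrative only, and UAL is assumed to come from the previous cell.

```python
# Hedged sketch: mark exposure values above the upper fence (UAL) as "high".
# 'HighExposure' is an illustrative column name, not from the original notebook.
df['HighExposure'] = (df['Exposure'] > UAL).astype(int)
df[['Exposure', 'HighExposure']].head()
```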
This notebook reviews some of the Python modules that make it possible to work with data structures in an easy and efficient manner. We will review Numpy arrays and matrices, and some of the common operations which are needed when working with these data structures in Machine Learning. 1. Create numpy arrays of different types The following code fragment defines variable x as a list of 4 integers; you can check that by printing the type of any element of x. Use the python command map() to create a new list with the same elements as x, but where each element of the list is a float. Note that, since in Python 3 map() returns an iterable object, you need to call function list() to populate the list.
x = [5, 4, 3, 4] print(type(x[0])) # Create a list of floats containing the same elements as in x # x_f = list(map(<FILL IN>)) x_f = list(map(float, x)) test_arrayequal(x, x_f, 'Elements of both lists are not the same') if ((type(x[-2])==int) & (type(x_f[-2])==float)): print('Test passed') else: print('Type conversion incorrect')
P2.Numpy/old/numpy_professor.ipynb
ML4DS/ML4all
mit
6f370cbefe1581c8f64b5f93557e8b24
Numpy arrays can be defined directly using methods such as np.arange(), np.ones(), np.zeros(), as well as random number generators. Alternatively, you can easily generate them from python lists (or lists of lists) containing elements of numeric type. You can easily check the shape of any numpy vector with the property .shape, and reshape it with the method reshape(). Note the difference between 1-D and N-D numpy arrays (ndarrays). You should also be aware of the existence of another numpy data type: Numpy matrices (http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.matrix.html) are inherently 2-D structures where operators * and ** have the meaning of matrix multiplication and matrix power. In the code below, you can check the types and shapes of different numpy arrays. Complete also the exercise where you are asked to convert a unidimensional array into a vector of size $4\times2$.
# Numpy arrays can be created from numeric lists or using different numpy methods y = np.arange(8)+1 x = np.array(x_f) # Check the different data types involved print('Variable x_f is of type', type(x_f)) print('Variable x is of type ', type(x)) print('Variable y is of type', type(y)) # Print the shapes of the numpy arrays print('Variable y has dimension', y.shape) print('Variable x has dimension', x.shape) #Complete the following exercises # Convert x into a variable x_matrix, of type `numpy.matrixlib.defmatrix.matrix` using command # np.matrix(). The resulting matrix should be of dimensions 4x1 # x_matrix = <FILL IN> x_matrix = np.matrix(x).T # Convert x into a variable x_array, of type `ndarray`, and shape (4,1) # x_array = <FILL IN> x_array = x[:,np.newaxis] # Reshape array y into a numpy array of shape (4,2) using command np.reshape() # y = <FILL IN> y = y.reshape((4,2)) test_strequal(str(type(x_matrix)), "<class 'numpy.matrixlib.defmatrix.matrix'>", 'x_matrix is not defined as a matrix') test_hashedequal(x_matrix.tostring(), '1215ced5d82501bf03e04b30f16c45a4bdcb8838', 'Incorrect variable x_matrix') test_strequal(str(type(x_array)), "<class 'numpy.ndarray'>", 'x_array is not defined as numpy ndarray') test_hashedequal(x_array.tostring(), '1215ced5d82501bf03e04b30f16c45a4bdcb8838', 'Incorrect variable x_array') test_strequal(str(type(y)), "<class 'numpy.ndarray'>", 'y is not defined as a numpy ndarray') test_hashedequal(y.tostring(), '0b61a85386775357e0710800497771a34fdc8ae5', 'Incorrect variable y')
P2.Numpy/old/numpy_professor.ipynb
ML4DS/ML4all
mit
93b761962c49e0460dbe2e416881b89d
2. Products and powers of numpy arrays and matrices * and ** when used with Numpy arrays implement elementwise product and exponentiation, whereas * and ** when used with Numpy matrices implement matrix product and matrix power. Method np.dot() implements matrix multiplication, and can be used both with numpy arrays and matrices. So you have to be careful about the types you are using for each variable.
# Try to run the following command on variable x_matrix, and check what happens print(x_array**2) print('Remember that the shape of x_array is', x_array.shape) print('Remember that the shape of y is', y.shape) # Complete the following exercises. You can print the partial results to visualize them # Multiply the 2-D array `y` by 2 # y_by2 = <FILL IN> y_by2 = y * 2 # Multiply each of the columns in `y` by the column vector x_array # z_4_2 = <FILL IN> z_4_2 = x_array * y # Obtain the matrix product of the transpose of x_array and y # x_by_y = <FILL IN> x_by_y = x_array.T.dot(y) # Repeat the previous calculation, this time using x_matrix (of type numpy matrix) instead of x_array # Note that in this case you do not need to use method dot() # x_by_y2 = <FILL IN> x_by_y2 = x_matrix.T * y # Multiply vector x_array by its transpose to obtain a 4 x 4 matrix #x_4_4 = <FILL IN> x_4_4 = x_array.dot(x_array.T) # Multiply the transpose of vector x_array by vector x_array. The result is the squared-norm of the vector #x_norm2 = <FILL IN> x_norm2 = x_array.T.dot(x_array) test_hashedequal(y_by2.tostring(),'1b54af8620657d5b8da424ca6be8d58b6627bf9a','Incorrect result for variable y_by2') test_hashedequal(z_4_2.tostring(),'0727ed01af0aa4175316d3916fd1c8fe2eb98f27','Incorrect result for variable z_4_2') test_hashedequal(x_by_y.tostring(),'b33f700fec2b6bd66e76260d31948ce07b8c15d3','Incorrect result for variable x_by_y') test_hashedequal(x_by_y2.tostring(),'b33f700fec2b6bd66e76260d31948ce07b8c15d3','Incorrect result for variable x_by_y2') test_hashedequal(x_4_4.tostring(),'832c97cc2d69298287838350b0bae66deec58b03','Incorrect result for variable x_4_4') test_hashedequal(x_norm2.tostring(),'33b80b953557002511474aa340441d5b0728bbaf','Incorrect result for variable x_norm2')
P2.Numpy/old/numpy_professor.ipynb
ML4DS/ML4all
mit
cbe359296767db6545d3f7478936d4b0
Other numpy methods where you can specify the axis along which a certain operation should be carried out are: np.median(), np.std(), np.var(), np.percentile(), np.sort(), np.argsort(). If the axis argument is not provided, the array is flattened before carrying out the corresponding operation. 4. Concatenating matrices and vectors Provided that the necessary dimensions fit, horizontal and vertical stacking of matrices can be carried out with methods np.hstack() and np.vstack(). Complete the exercises in the next code cell to practice with matrix concatenation; a brief illustration of the axis behaviour described above comes first.
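The snippet below is a small illustration of the axis behaviour described above; the array A is made up for the example.

```python
import numpy as np

A = np.arange(6).reshape(2, 3)        # [[0 1 2], [3 4 5]], made up for illustration
print(np.median(A))                   # no axis: the array is flattened first -> 2.5
print(np.median(A, axis=0))           # median of each column -> [1.5 2.5 3.5]
print(np.median(A, axis=1))           # median of each row -> [1. 4.]
print(np.sort(A, axis=None))          # axis=None also flattens -> [0 1 2 3 4 5]
```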
# Previous check that you are working with the right matrices test_hashedequal(z_4_2.tostring(),'0727ed01af0aa4175316d3916fd1c8fe2eb98f27','Incorrect result for variable z_4_2') test_hashedequal(x_array.tostring(), '1215ced5d82501bf03e04b30f16c45a4bdcb8838', 'Incorrect variable x_array') # Vertically stack matrix z_4_2 with itself # ex1_res = <FILL IN> ex1_res = np.vstack((z_4_2,z_4_2)) # Horizontally stack matrix z_4_2 and vector x_array # ex2_res = <FILL IN> ex2_res = np.hstack((z_4_2,x_array)) # Horizontally stack a column vector of ones with the result of the first exercise (variable ex1_res) # X = <FILL IN> X = np.hstack((np.ones((8,1)),ex1_res)) test_hashedequal(ex1_res.tostring(),'e740ea91c885cdae95499eaf53ec6f1429943d9c','Wrong value for variable ex1_res') test_hashedequal(ex2_res.tostring(),'d5f18a630b2380fcae912f449b2a87766528e0f2','Wrong value for variable ex2_res') test_hashedequal(X.tostring(),'bdf94b49c2b7c6ae71a916beb647236918ead39f','Wrong value for variable X')
P2.Numpy/old/numpy_professor.ipynb
ML4DS/ML4all
mit
06fa2e4be8bf7a9e261a32a4322f0c48
5. Slicing Particular elements of numpy arrays (both unidimensional and multidimensional) can be accessed using standard python slicing. When working with multidimensional arrays, slicing can be carried out along the different dimensions at once
# Keep last row of matrix X # X_sub1 = <FILL IN> X_sub1 = X[-1,] # Keep first column of the three first rows of X # X_sub2 = <FILL IN> X_sub2 = X[:3,0] # Keep first two columns of the three first rows of X # X_sub3 = <FILL IN> X_sub3 = X[:3,:2] # Invert the order of the rows of X # X_sub4 = <FILL IN> X_sub4 = X[::-1,:] test_hashedequal(X_sub1.tostring(),'51fb613567c9ef5fc33e7190c60ff37e0cd56706','Wrong value for variable X_sub1') test_hashedequal(X_sub2.tostring(),'12a72e95677fc01de6b7bfb7f62d772d0bdb5b87','Wrong value for variable X_sub2') test_hashedequal(X_sub3.tostring(),'f45247c6c31f9bcccfcb2a8dec9d288ea41e6acc','Wrong value for variable X_sub3') test_hashedequal(X_sub4.tostring(),'1fd985c087ba518c6d040799e49a967e4b1d433a','Wrong value for variable X_sub4')
P2.Numpy/old/numpy_professor.ipynb
ML4DS/ML4all
mit
180f67507e4c0a0692d17de6e351904f
7.1. Non-linear transformations Create a new matrix Z, where additional features are created by carrying out the following non-linear transformations: $${\bf Z} = \left[ \begin{array}{ccccc} 1 & x_1^{(1)} & x_2^{(1)} & \log\left(x_1^{(1)}\right) & \log\left(x_2^{(1)}\right)\\ 1 & x_1^{(2)} & x_2^{(2)} & \log\left(x_1^{(2)}\right) & \log\left(x_2^{(2)}\right) \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 1 & x_1^{(8)} & x_2^{(8)} & \log\left(x_1^{(8)}\right) & \log\left(x_2^{(8)}\right)\end{array}\right] = \left[ \begin{array}{ccccc} 1 & z_1^{(1)} & z_2^{(1)} & z_3^{(1)} & z_4^{(1)}\\ 1 & z_1^{(2)} & z_2^{(2)} & z_3^{(2)} & z_4^{(2)} \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 1 & z_1^{(8)} & z_2^{(8)} & z_3^{(8)} & z_4^{(8)} \end{array}\right]$$ In other words, we are calculating the logarithmic values of the two original variables. From now on, any function involving linear transformations of the variables in Z will in fact be a non-linear function of the original variables.
# Obtain matrix Z using concatenation functions # Z = np.hstack(<FILL IN>) Z = np.hstack((X,np.log(X[:,1:]))) test_hashedequal(Z.tostring(),'737dee4c168c5ce8fc53a5ec5cad43b5a53c7656','Incorrect matrix Z')
P2.Numpy/old/numpy_professor.ipynb
ML4DS/ML4all
mit
a1f2a137921570853911d0b16f0e6a65
Repeat the previous exercise, this time using the map() method together with function log_transform(). This function needs to be defined in such a way that guarantees that variable Z_map is the same as the previously computed variable Z.
def log_transform(x): # return <FILL IN> return np.hstack((x,np.log(x[1]),np.log(x[2]))) Z_map = np.array(list(map(log_transform,X))) test_hashedequal(Z_map.tostring(),'737dee4c168c5ce8fc53a5ec5cad43b5a53c7656','Incorrect matrix Z')
P2.Numpy/old/numpy_professor.ipynb
ML4DS/ML4all
mit
fd8a6285f71700397751b2c63249d5a8
Repeat the previous exercise once more. This time, define a lambda function for the task.
# Z_lambda = np.array(list(map(lambda x: <FILL IN>,X))) Z_lambda = np.array(list(map(lambda x: np.hstack((x,np.log(x[1]),np.log(x[2]))),X))) test_hashedequal(Z_lambda.tostring(),'737dee4c168c5ce8fc53a5ec5cad43b5a53c7656','Incorrect matrix Z')
P2.Numpy/old/numpy_professor.ipynb
ML4DS/ML4all
mit
6ffdccff2f7b34492c3d493769d06c12
7.2. Polynomial transformations Similarly to the previous exercise, now we are interested in obtaining another matrix that will be used to evaluate a polynomial model. In order to do so, compute matrix Z_poly as follows: $$Z_\text{poly} = \left[ \begin{array}{cccc} 1 & x_1^{(1)} & (x_1^{(1)})^2 & (x_1^{(1)})^3 \\ 1 & x_1^{(2)} & (x_1^{(2)})^2 & (x_1^{(2)})^3 \\ \vdots & \vdots & \vdots & \vdots \\ 1 & x_1^{(8)} & (x_1^{(8)})^2 & (x_1^{(8)})^3 \end{array}\right]$$ Note that, in this case, only the first variable of each pattern is used.
# Calculate variable Z_poly, using any method that you want # Z_poly = <FILL IN> Z_poly = np.array(list(map(lambda x: np.array([x[1]**k for k in range(4)]),X))) test_hashedequal(Z_poly.tostring(),'7e025512fcee1c1db317a1a30f01a0d4b5e46e67','Wrong variable Z_poly')
P2.Numpy/old/numpy_professor.ipynb
ML4DS/ML4all
mit
1d43423187238103fe1fbef624b5bc74
7.3. Model evaluation Finally, we can use previous data matrices Z and Z_poly to efficiently compute the output of the corresponding non-linear models over all the patterns in the data set. In this exercise, we consider the two following linear-in-the-parameters models to be evaluated: $$f_\text{log}({\bf x}) = w_0 + w_1 \cdot x_1 + w_2 \cdot x_2 + w_3 \cdot \log(x_1) + w_4 \cdot \log(x_2)$$ $$f_\text{poly}({\bf x}) = w_0 + w_1 \cdot x_1 + w_2 \cdot x_1^2 + w_3 \cdot x_1^3$$ Compute the output of the two models for the particular weights that are defined in the code below. Your output variables f_log and f_poly should contain the outputs of the model for all eight patterns in the data set. Note that for this task, you just need to implement appropriate matricial products among the extended data matrices, Z and Z_poly, and the provided weight vectors.
w_log = np.array([3.3, 0.5, -2.4, 3.7, -2.9]) w_poly = np.array([3.2, 4.5, -3.2, 0.7]) # f_log = <FILL IN> f_log = Z.dot(w_log) # f_poly = <FILL IN> f_poly = Z_poly.dot(w_poly) test_hashedequal(f_log.tostring(),'d5801dfbd603f6db7010b9ef80fa48e351c0b38b','Incorrect evaluation of the logarithmic model') test_hashedequal(f_poly.tostring(),'32abdcc0e32e76500947d0691cfa9917113d7019','Incorrect evaluation of the polynomial model')
P2.Numpy/old/numpy_professor.ipynb
ML4DS/ML4all
mit
3f413729afe6ffba594a589df26ef709
<p>Now let's read the two corpora and store the sentences in a single ndarray. Note that we will also have an ndarray indicating whether each text is formal or not. We start by storing the corpora in lists. We will use only 500 elements from each, for teaching purposes.</p>
import nltk x_data_nps = [] for fileid in nltk.corpus.nps_chat.fileids(): x_data_nps.extend([post.text for post in nltk.corpus.nps_chat.xml_posts(fileid)]) y_data_nps = [0] * len(x_data_nps) x_data_gut = [] for fileid in nltk.corpus.gutenberg.fileids(): x_data_gut.extend([' '.join(sent) for sent in nltk.corpus.gutenberg.sents(fileid)]) y_data_gut = [1] * len(x_data_gut) x_data_full = x_data_nps[:500] + x_data_gut[:500] print(len(x_data_full)) y_data_full = y_data_nps[:500] + y_data_gut[:500] print(len(y_data_full))
nlp_classification_pt-br.ipynb
fernandojvdasilva/nlp-python-lectures
gpl-3.0
53ba25f8df250487ad25c15f6f89ccd3
<p>Next, we turn these lists into ndarrays, so we can use them in the preprocessing steps we already know.</p>
import numpy as np x_data = np.array(x_data_full, dtype=object) #x_data = np.array(x_data_full) print(x_data.shape) y_data = np.array(y_data_full) print(y_data.shape)
nlp_classification_pt-br.ipynb
fernandojvdasilva/nlp-python-lectures
gpl-3.0
7bde92bb6e3431575cc89ac2b03ace56
<b>2. Splitting into training and test datasets</b> <p>For the research to be reliable, we need to evaluate the results on a test dataset. Therefore, we will split the data randomly, leaving 80% for training and the rest to test the results shortly.</p>
train_indexes = np.random.rand(len(x_data)) < 0.80 print(len(train_indexes)) print(train_indexes[:10]) x_data_train = x_data[train_indexes] y_data_train = y_data[train_indexes] print(len(x_data_train)) print(len(y_data_train)) x_data_test = x_data[~train_indexes] y_data_test = y_data[~train_indexes] print(len(x_data_test)) print(len(y_data_test))
nlp_classification_pt-br.ipynb
fernandojvdasilva/nlp-python-lectures
gpl-3.0
34b59d379e4cafd781e37a2bf9582523
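As an aside, an alternative to the random-mask split above is scikit-learn's train_test_split, which produces an exact, reproducible 80/20 split. This is only a hedged sketch; depending on the scikit-learn version, the function lives in sklearn.model_selection (newer releases) or sklearn.cross_validation (older ones).

```python
from sklearn.model_selection import train_test_split  # sklearn.cross_validation in older versions

# Exact 80/20 split with a fixed seed, as an alternative to the boolean-mask approach above.
x_tr, x_te, y_tr, y_te = train_test_split(x_data, y_data, test_size=0.2, random_state=42)
print(len(x_tr), len(x_te))
```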
<b>3. Training the classifier</b> <p>For tokenization, we will use the same function as in the previous tutorial:</p>
from nltk import pos_tag from nltk.corpus import stopwords from nltk.stem import WordNetLemmatizer from nltk.tokenize import word_tokenize import string from nltk.corpus import wordnet stopwords_list = stopwords.words('english') lemmatizer = WordNetLemmatizer() def my_tokenizer(doc): words = word_tokenize(doc) pos_tags = pos_tag(words) non_stopwords = [w for w in pos_tags if not w[0].lower() in stopwords_list] non_punctuation = [w for w in non_stopwords if not w[0] in string.punctuation] lemmas = [] for w in non_punctuation: if w[1].startswith('J'): pos = wordnet.ADJ elif w[1].startswith('V'): pos = wordnet.VERB elif w[1].startswith('N'): pos = wordnet.NOUN elif w[1].startswith('R'): pos = wordnet.ADV else: pos = wordnet.NOUN lemmas.append(lemmatizer.lemmatize(w[0], pos)) return lemmas
nlp_classification_pt-br.ipynb
fernandojvdasilva/nlp-python-lectures
gpl-3.0
fb5c9cefbe8ce217329a03e40a6e11d5
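A quick check of my_tokenizer on a made-up sentence helps confirm that stopwords and punctuation are dropped and the remaining words are lemmatized; the exact output depends on the NLTK taggers and WordNet data installed.

```python
# Made-up sentence; the exact lemmas depend on the installed NLTK models.
print(my_tokenizer("The cats were running faster than the dogs!"))
# typically something along the lines of ['cat', 'run', 'faster', 'dog']
```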
<p>Now we will create a <b>pipeline</b> containing the TF-IDF vectorizer, SVD for feature reduction, and a classification algorithm. But first, let's encapsulate our algorithm for choosing the number of SVD dimensions in a class that can be used with the pipeline:</p>
from sklearn.decomposition import TruncatedSVD class SVDDimSelect(object): def fit(self, X, y=None): self.svd_transformer = TruncatedSVD(n_components=X.shape[1]/2) self.svd_transformer.fit(X) cummulative_variance = 0.0 k = 0 for var in sorted(self.svd_transformer.explained_variance_ratio_)[::-1]: cummulative_variance += var if cummulative_variance >= 0.5: break else: k += 1 self.svd_transformer = TruncatedSVD(n_components=k) return self.svd_transformer.fit(X) def transform(self, X, Y=None): return self.svd_transformer.transform(X) def get_params(self, deep=True): return {}
nlp_classification_pt-br.ipynb
fernandojvdasilva/nlp-python-lectures
gpl-3.0
fcd1d23579d67d0e15603175180814b9
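As a design note, the k-selection loop inside SVDDimSelect.fit can be written more compactly with np.cumsum and np.searchsorted. The sketch below mirrors the same 50%-cumulative-explained-variance rule; it is only an illustration and is not used elsewhere in the notebook.

```python
import numpy as np

def choose_k(explained_variance_ratio, target=0.5):
    # Same rule as the loop above: count the components seen before the cumulative
    # explained variance (sorted in descending order) first reaches `target`.
    cum = np.cumsum(sorted(explained_variance_ratio, reverse=True))
    return int(np.searchsorted(cum, target))

print(choose_k([0.30, 0.25, 0.20, 0.15, 0.10]))   # cumulative 0.30, 0.55 -> k = 1
```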
<p>Finally, we can create our pipeline:</p>
from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.pipeline import Pipeline from sklearn import neighbors clf = neighbors.KNeighborsClassifier(n_neighbors=10, weights='uniform') my_pipeline = Pipeline([('tfidf', TfidfVectorizer(tokenizer=my_tokenizer)),\ ('svd', SVDDimSelect()), \ ('clf', clf)])
nlp_classification_pt-br.ipynb
fernandojvdasilva/nlp-python-lectures
gpl-3.0
66484e4c685036f72cdedf3cf9d955f3
<p>We are almost there... Now we will create a <b>RandomizedSearchCV</b> object that will perform hyperparameter selection for our classifier (i.e., parameters that are not learned during training). This step is important for obtaining the best configuration of the classification algorithm. To save training time, we will use a simple algorithm, <i>K nearest neighbors (KNN)</i>.</p>
from sklearn.grid_search import RandomizedSearchCV import scipy par = {'clf__n_neighbors': range(1, 60), 'clf__weights': ['uniform', 'distance']} hyperpar_selector = RandomizedSearchCV(my_pipeline, par, cv=3, scoring='accuracy', n_jobs=2, n_iter=20)
nlp_classification_pt-br.ipynb
fernandojvdasilva/nlp-python-lectures
gpl-3.0
b148a076dc49d5577346bba954b66c5d
<p>And now let's train our algorithm, using the pipeline with feature selection:</p>
#print(hyperpar_selector) hyperpar_selector.fit(X=x_data_train, y=y_data_train) print("Best score: %0.3f" % hyperpar_selector.best_score_) print("Best parameters set:") best_parameters = hyperpar_selector.best_estimator_.get_params() for param_name in sorted(par.keys()): print("\t%s: %r" % (param_name, best_parameters[param_name]))
nlp_classification_pt-br.ipynb
fernandojvdasilva/nlp-python-lectures
gpl-3.0
8b6d01f7dc4f542db85f1f0c00966e37
<b>4. Testing the classifier</b> <p>Now let's use the classifier on our test dataset and look at the results:</p>
from sklearn.metrics import * y_pred = hyperpar_selector.predict(x_data_test) print(accuracy_score(y_data_test, y_pred))
nlp_classification_pt-br.ipynb
fernandojvdasilva/nlp-python-lectures
gpl-3.0
4f9275d3539e2db20b4555a32ac7d223
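Accuracy alone can hide class-specific errors. Since sklearn.metrics was star-imported above, a confusion matrix and a per-class report are available directly; the target_names labels below assume 0 = informal (chat) and 1 = formal (Gutenberg), as defined when the data was built.

```python
# Extra diagnostics beyond accuracy; label order assumes 0 = informal (chat), 1 = formal (Gutenberg).
print(confusion_matrix(y_data_test, y_pred))
print(classification_report(y_data_test, y_pred, target_names=['informal', 'formal']))
```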
<b>5. Serializing the model</b><br>
import pickle string_obj = pickle.dumps(hyperpar_selector) model_file = open('model.pkl', 'wb') model_file.write(string_obj) model_file.close()
nlp_classification_pt-br.ipynb
fernandojvdasilva/nlp-python-lectures
gpl-3.0
b792e013bcc58b3dced7779487398206
<b>6. Opening and using a saved model</b><br>
model_file = open('model.pkl', 'rb') model_content = model_file.read() obj_classifier = pickle.loads(model_content) model_file.close() res = obj_classifier.predict(["what's up bro?"]) print(res) res = obj_classifier.predict(x_data_test) print(accuracy_score(y_data_test, res)) res = obj_classifier.predict(x_data_test) print(res) formal = [x_data_test[i] for i in range(len(res)) if res[i] == 1] for txt in formal: print("%s\n" % txt) informal = [x_data_test[i] for i in range(len(res)) if res[i] == 0] for txt in informal: print("%s\n" % txt) res2 = obj_classifier.predict(["Emma spared no exertions to maintain this happier flow of ideas , and hoped , by the help of backgammon , to get her father tolerably through the evening , and be attacked by no regrets but her own"]) print(res2)
nlp_classification_pt-br.ipynb
fernandojvdasilva/nlp-python-lectures
gpl-3.0
0dab3676092c743ac730280b10c46800
Data frames
#import data and then display each data frame path1 = 'data/fbi_table_20years.xlsx' df_20yr = pd.read_excel(path1, index_col=0) path2 = 'data/fbi_table_20years_edited.xlsx' df_20yr_real = pd.read_excel(path2, index_col=0) path3 = 'data/fbi_table_20years_rates.xlsx' df_20yr_rates = pd.read_excel(path3, index_col=0) path4 = 'data/CDS_Data.xlsx' df_CDC = pd.read_excel(path4, index_col=0) df_20yr df_20yr_real df_20yr_rates df_CDC
UG_F16/Kustas-Madej-CrimeRatesFinalProject.ipynb
NYUDataBootcamp/Projects
mit
1083c854849b17cd755e92926ca23836
Line Chart: Crime rate (1994-2013)
#create a line plot from crime rates data frame fig, ax = plt.subplots() df_20yr_rates.plot(ax=ax, kind='line', # line plot title='Different Crimes vs. Time\n\n', grid = True, ylim = (-50,3100), marker = 'o', use_index = True) plt.legend(loc = 'upper right') ax.set_title('Crime rates over time\n',fontsize = 16) #format title and axis labels ax.set_xlabel('Year', fontsize = 14) ax.set_ylabel('Crime Rate', fontsize = 14) ax.set_xlim(1994, 2013) #set limits for x and y axis ax.set_ylim(-50,3100) fig.set_size_inches(15, 13)
UG_F16/Kustas-Madej-CrimeRatesFinalProject.ipynb
NYUDataBootcamp/Projects
mit
9e9950386b84ca707c6dd0db528eeda4
Analysis: In the above graph, we can observe a steady decline (despite a few isolated increases) in crime rates across different categories of crime from 1994 to 2013. A number of explanations have been proposed for this trend; historian Neil Howe, for example, has suggested that the decline might come from the entrance of millennials into the potential criminal demographic. These explanations will be explored in further detail later in this project. Pie Chart: Breakdown of crime type
#find totals of each column in order to find which crime was most prevalent over the course of the past 20 years murder_total = 0 rape_total = 0 robbery_total = 0 agg_ass_total = 0 burglary_total = 0 larceny_total = 0 veh_total = 0 totals_list = [] list_total = 0 #find total number of murders for i in (df_20yr_real.index): murder_total += df_20yr_real['Murder and\nnonnegligent \nmanslaughter'][i] list_total += murder_total totals_list.append(murder_total) #find total number of rapes for i in (df_20yr_real.index): rape_total += df_20yr_real['Rape\n(legacy\ndefinition)2'][i] list_total += rape_total totals_list.append(rape_total) #find total number of robberies for i in (df_20yr_real.index): robbery_total += df_20yr_real['Robbery'][i] list_total += robbery_total totals_list.append(robbery_total) #find total number of assaults for i in (df_20yr_real.index): agg_ass_total += df_20yr_real['Aggravated \nassault'][i] list_total += agg_ass_total totals_list.append(agg_ass_total) #find total number of burglaries for i in (df_20yr_real.index): burglary_total += df_20yr_real['Burglary'][i] list_total += burglary_total totals_list.append(burglary_total) #find total number of larcenies for i in (df_20yr_real.index): larceny_total += df_20yr_real['Larceny-\ntheft'][i] list_total += larceny_total totals_list.append(larceny_total) #find total number of vehicle thefts for i in (df_20yr_real.index): veh_total += df_20yr_real['Motor \nvehicle \ntheft'][i] list_total += veh_total totals_list.append(veh_total) #plot pie chart using above data k = ['Murder and nonnegligent manslaughter', 'Rape', 'Robbery', 'Aggravated assault', 'Burglary', \ 'Larceny theft', 'Motor vehicle theft'] percent_list = [] for i in totals_list: percent = i/list_total percent_list.append(percent) #convert values to percentages arr = np.array(percent_list) percent = 100.*arr/arr.sum() labels = ['{0} : {1:1.2f}%'.format(x,y) for x,y in zip(k, percent)] colours = ['red','black', 'green', 'lightskyblue', 'yellow', 'purple', 'darkblue'] #style the pie chart patches, texts = plt.pie(totals_list, colors=colours, startangle=90) fig = plt.gcf() fig.set_size_inches(7.5, 7.5) plt.legend(patches, labels, loc="best", bbox_to_anchor=(1.02, 0.94), borderaxespad=0) plt.axis('equal') plt.title('Prevalence of Various Crimes: 1994-2013 (as percentage of total crime)\n', fontsize = 16) plt.tight_layout() plt.show()
UG_F16/Kustas-Madej-CrimeRatesFinalProject.ipynb
NYUDataBootcamp/Projects
mit
b64eb1a25513ab15f959221523dd2b3d
Analysis: Here we can see the relative prevalence of various types of crime in the United States. Larceny theft accounts for over 50% of the crime committed in the US over the relevant 20-year period, followed by burglary and motor vehicle theft at about 19% and about 10%, respectively. Rape, murder, aggravated assault, and robbery contributed about 1%, 0.14%, 8%, and 4%, respectively. Bar Graph: Yearly percent change in total crime (1994-2013)
#calculate total number of crimes per year row_total = 0 row_total_list = [] count = 0 for i in (df_20yr_real.index): for x in (df_20yr_real.columns): row_total += df_20yr_real[x][i] row_total_list.append(row_total) row_total = 0 #calculate percent change in crimes between each year and then add to new column in data frame percent_change_list = [] for k in range(0,len(row_total_list)): if k > 0: percent_change = (((row_total_list[k]/row_total_list[k-1]) - 1) * -1) * 100 if percent_change < 0: percent_change = 0.0 percent_change_list.append(percent_change) count+=1 else: percent_change_list.append(0.0) count+=1 # add the percent change column to our data frame (needed below for the bar plot) df_20yr_real['Percent Change'] = percent_change_list #plot bar graph using above percent change data fig, ax = plt.subplots() fig.set_size_inches(16, 6.5) df_20yr_real['Percent Change'].plot(kind='bar', ax=ax, legend = False, color = ['blue','purple'], alpha = 0.65, rot = 0, width = 0.9, align = 'center') plt.style.use('bmh') ax.set_xlabel('Year', fontsize = 14) ax.set_ylabel('Percent Change', fontsize = 14) #style bar graph ax.set_title('Yearly change in total crime\n', fontsize = 16) ax.set_ylim(0,7)
UG_F16/Kustas-Madej-CrimeRatesFinalProject.ipynb
NYUDataBootcamp/Projects
mit
8d54833c34579fb6acc9bbf67d31a184
Analysis: We can see from the above bar chart that there was a substantial decrease in crime during the years 1997 and 1998. This could be attributed to a number of increasingly rigorous policing tactics around the country, Bratton's Zero Tolerance policing in New York City, for example. In addition to stricter policing, which according to some sources was controversial and led to an increase in dissent and crime, there was a large influx of millennials into the criminal age demographic (approximately 12-24 years of age), the range at which they are most likely to commit or be victims of violent crime. Line Chart: High schoolers partaking in risky behaviors
#create a line plot from CDC data frame fig, ax = plt.subplots() df_CDC.plot(ax=ax, kind='line', # line plot grid = True, marker = 'o', use_index = True) plt.legend(loc = 'upper right') #format legend ax.set_title('High schoolers partaking in risky behaviors',fontsize = 16) #format title and axis labels ax.set_xlabel('Year', fontsize = 14) ax.set_ylabel('Percent of Students', fontsize = 14) fig.set_size_inches(15, 8)
UG_F16/Kustas-Madej-CrimeRatesFinalProject.ipynb
NYUDataBootcamp/Projects
mit
4deec340ee7162d3a44b5b24844d0e18
Useful functions
def logistic_lin( x, a, b ): """Calculates the standard linear logistic function (probability distribution) for x (which can be a scalar or a numpy array). """ return 1.0 / (1.0 + np.exp(-(a + b*x))) def logistic_polyn( x, params ): """Calculates the general polynomial form of the logistic function (probability distribution) for x (which can be a scalar or a numpy array). """ order = len(params) - 1 logit = params[0] for n in range(order): b = params[n + 1] logit += b * x**(n + 1) return 1.0 / (1.0 + np.exp(-logit)) def GetBarazzaData( fname ): """Retrieve bar fractions and total galaxy counts per bin for Barazza+2008 data (their Fig. 19); calculates proper binomial confidence intervals. """ dlines = [line for line in open(fname) if line[0] != '#' and len(line) > 1] x = np.array([float(line.split()[0]) for line in dlines]) f = np.array([float(line.split()[1]) for line in dlines]) n = np.array([int(line.split()[2]) for line in dlines]) n_bars = np.round(f*n) e_low_vect = [] e_high_vect = [] for i in range(len(x)): dummy,e_low,e_high = s4gutils.Binomial(n_bars[i], n[i]) e_low_vect.append(e_low) e_high_vect.append(e_high) return (x, f, np.array(e_low_vect), np.array(e_high_vect))
s4gbars_main.ipynb
perwin/s4g_barfractions
bsd-3-clause
ea340787c176a0485ec37054f2385cd5
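A minimal check of the helpers above: logistic_lin should return exactly 0.5 at x = -a/b, and logistic_polyn with a two-element parameter vector should reduce to the linear case. The parameter values below are made up for illustration.

```python
import numpy as np

# Made-up parameters; at x = -a/b = 10 the linear logistic function returns 0.5.
a, b = -10.0, 1.0
print(logistic_lin(10.0, a, b))                        # -> 0.5
print(logistic_polyn(10.0, [a, b]))                    # same model in polynomial form -> 0.5
print(logistic_lin(np.array([0.0, 10.0, 20.0]), a, b)) # works elementwise on arrays too
```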
Defining different subsamples via index vectors Lists of integers defining indices of galaxies in Parent Disc Sample which meet various criteria that define specific subsamples.
ii_barred = [i for i in range(nDisksTotal) if s4gdata.sma[i] > 0] ii_unbarred = [i for i in range(nDisksTotal) if s4gdata.sma[i] <= 0] ii_spirals = [i for i in range(nDisksTotal) if s4gdata.t_s4g[i] > -0.5] ii_barred_spirals = [i for i in ii_spirals if i in ii_barred] ii_unbarred_spirals = [i for i in ii_spirals if i in ii_unbarred] # limited sample 1: D < 25 Mpc -- 663 spirals: 373 barred, 290 unbarred ii_all_limited1 = [i for i in ii_spirals if s4gdata.dist[i] <= 25] ii_barred_limited1 = [i for i in ii_all_limited1 if i in ii_barred] ii_unbarred_limited1 = [i for i in ii_all_limited1 if i not in ii_barred] ii_SB_limited1 = [i for i in ii_all_limited1 if i in ii_barred_limited1 and s4gdata.bar_strength[i] == 1] ii_nonSB_limited1 = [i for i in ii_all_limited1 if i not in ii_SB_limited1] ii_SAB_limited1 = [i for i in ii_all_limited1 if i in ii_barred_limited1 and s4gdata.bar_strength[i] == 2] ii_nonSAB_limited1 = [i for i in ii_all_limited1 if i not in ii_SB_limited1] # S0 only (74 S0s: 27 barred, 47 unbarred) ii_all_limited1_S0 = [i for i in range(nDisksTotal) if s4gdata.dist[i] <= 25 and s4gdata.t_s4g[i] <= -0.5] ii_barred_limited1_S0 = [i for i in ii_all_limited1_S0 if i in ii_barred] ii_unbarred_limited1_S0 = [i for i in ii_all_limited1_S0 if i not in ii_barred] ii_SB_limited1_S0 = [i for i in ii_SB_limited1 if s4gdata.t_s4g[i] <= -0.5] ii_nonSB_limited1_S0 = [i for i in ii_nonSB_limited1 if s4gdata.t_s4g[i] <= -0.5] ii_SAB_limited1_S0 = [i for i in ii_SAB_limited1 if s4gdata.t_s4g[i] <= -0.5] ii_nonSAB_limited1_S0 = [i for i in ii_nonSAB_limited1 if s4gdata.t_s4g[i] <= -0.5] # limited subsample 1m: D < 25 Mpc and log Mstar >= 8.5 -- 576 spirals: 356 barred, 220 unbarred ii_all_limited1_m8_5 = [i for i in ii_all_limited1 if s4gdata.logmstar[i] >= 8.5] ii_barred_limited1_m8_5 = [i for i in ii_all_limited1_m8_5 if i in ii_barred] ii_unbarred_limited1_m8_5 = [i for i in ii_all_limited1_m8_5 if i not in ii_barred] ii_SB_limited1_m8_5 = [i for i in ii_all_limited1_m8_5 if i in ii_barred and s4gdata.bar_strength[i] == 1] ii_nonSB_limited1_m8_5 = [i for i in ii_all_limited1_m8_5 if i not in ii_SB_limited1_m8_5] ii_SAB_limited1_m8_5 = [i for i in ii_all_limited1_m8_5 if i in ii_barred and s4gdata.bar_strength[i] == 2] ii_nonSAB_limited1_m8_5 = [i for i in ii_all_limited1_m8_5 if i not in ii_SB_limited1_m8_5] # S0 only (74 S0s: 27 barred, 47 unbarred) ii_all_limited1_m8_5_S0 = [i for i in ii_all_limited1_S0 if s4gdata.logmstar[i] >= 8.5] ii_barred_limited1_m8_5_S0 = [i for i in ii_all_limited1_m8_5_S0 if i in ii_barred] ii_unbarred_limited1_m8_5_S0 = [i for i in ii_all_limited1_m8_5_S0 if i not in ii_barred] ii_SB_limited1_m8_5_S0 = [i for i in ii_all_limited1_m8_5_S0 if i in ii_barred and s4gdata.bar_strength[i] == 1] ii_nonSB_limited1_m8_5_S0 = [i for i in ii_all_limited1_m8_5_S0 if i not in ii_SB_limited1_m8_5_S0] ii_SAB_limited1_m8_5_S0 = [i for i in ii_all_limited1_m8_5_S0 if i in ii_barred and s4gdata.bar_strength[i] == 2] ii_nonSAB_limited1_m8_5_s0 = [i for i in ii_all_limited1_m8_5_S0 if i not in ii_SAB_limited1_m8_5_S0 and s4gdata.t_s4g[i]] # limited subsample 2: D < 30 Mpc -- 856 galaxies: 483 barred, 373 unbarred ii_all_limited2 = [i for i in ii_spirals if s4gdata.dist[i] <= 30] ii_barred_limited2 = [i for i in ii_all_limited2 if i in ii_barred] ii_unbarred_limited2 = [i for i in ii_all_limited2 if i not in ii_barred] ii_SB_limited2 = [i for i in ii_barred_limited2 if s4gdata.bar_strength[i] == 1] ii_nonSB_limited2 = [i for i in ii_all_limited2 if i not in ii_SB_limited2] 
ii_SAB_limited2 = [i for i in ii_barred_limited2 if s4gdata.bar_strength[i] == 2] ii_nonSAB_limited2 = [i for i in ii_all_limited2 if i not in ii_SB_limited2] # S0 only (74 S0s: 27 barred, 47 unbarred) ii_all_limited2_S0 = [i for i in range(nDisksTotal) if s4gdata.dist[i] <= 30 and s4gdata.t_s4g[i] <= -0.5] ii_barred_limited2_S0 = [i for i in ii_all_limited2_S0 if i in ii_barred] ii_unbarred_limited2_S0 = [i for i in ii_all_limited2_S0 if i not in ii_barred] ii_SB_limited2_S0 = [i for i in ii_SB_limited2 if s4gdata.t_s4g[i] <= -0.5] ii_nonSB_limited2_S0 = [i for i in ii_nonSB_limited2 if s4gdata.t_s4g[i] <= -0.5] ii_SAB_limited2_S0 = [i for i in ii_SAB_limited2 if s4gdata.t_s4g[i] <= -0.5] ii_nonSAB_limited2_S0 = [i for i in ii_nonSAB_limited2 if s4gdata.t_s4g[i] <= -0.5] # limited subsample 2m: D < 30 Mpc and log Mstar >= 9 -- 639 galaxies: 398 barred, 241 unbarred ii_all_limited2_m9 = [i for i in ii_all_limited2 if s4gdata.logmstar[i] >= 9] ii_barred_limited2_m9 = [i for i in ii_all_limited2_m9 if i in ii_barred] ii_unbarred_limited2_m9 = [i for i in ii_all_limited2_m9 if i not in ii_barred] ii_SB_limited2_m9 = [i for i in ii_all_limited2_m9 if i in ii_barred and s4gdata.bar_strength[i] == 1] ii_nonSB_limited2_m9 = [i for i in ii_all_limited2_m9 if i not in ii_SB_limited2_m9] ii_SAB_limited2_m9 = [i for i in ii_all_limited2_m9 if i in ii_barred and s4gdata.bar_strength[i] == 2] ii_nonSAB_limited2_m9 = [i for i in ii_all_limited2_m9 if i not in ii_SAB_limited2_m9] # galaxies with/without HyperLeda B-V colors ii_dist25 = [i for i in range(nDisksTotal) if s4gdata.dist[i] <= 25.0] ii_dist30 = [i for i in range(nDisksTotal) if s4gdata.dist[i] <= 30.0] ii_bmv_good = [i for i in range(nDisksTotal) if s4gdata.BmV_tc[i] > -2] ii_bmv_missing = [i for i in range(nDisksTotal) if s4gdata.BmV_tc[i] < -2] ii_d30_bmv_good = [i for i in ii_bmv_good if i in ii_dist30] ii_d30_bmv_missing = [i for i in ii_bmv_missing if i in ii_dist30] ii_d25_bmv_good = [i for i in ii_bmv_good if i in ii_dist25] ii_d25_bmv_missing = [i for i in ii_bmv_missing if i in ii_dist25]
s4gbars_main.ipynb
perwin/s4g_barfractions
bsd-3-clause
e5b9253504e7c6ab22e304fbb17be281
Generate files for logistic regression with R This code will regenerate the input files for the logistic regression analysis in R (see R notebook s4gbars_R_logistic-regression.ipynb) By default, this will save the file in the data/ subdirectory, overwriting the pre-existing files. To change the destination, redefine dataDir.
# optionally redefine dataDir to save files in a different location # dataDir = XXX outf = open(dataDir+"barpresence_vs_logmstar_for_R.txt", 'w') outf.write("# Bar presence as function of log(M_star/M_sun) for D < 25 Mpc\n") outf.write("logmstar bar\n") for i in ii_all_limited1: logmstar = s4gdata.logmstar[i] if i in ii_barred_limited1: barFlag = 1 else: barFlag = 0 outf.write("%.3f %d\n" % (logmstar, barFlag)) outf.close() # restrict things to logMstar = 8.5--11 to avoid low-mass galaxies with crazy-high Vmax weights # and tiny number of galaxies with logMstar > 11 ff = "barpresence_vs_logmstar_for_R_w25_m8.5-11.txt" outf = open(dataDir+ff, 'w') outf.write("# Bar presence as function of log(M_star/M_sun) for D < 25 Mpc, with V_max weights\n") outf.write("logmstar weight bar\n") n_tot = 0 for i in ii_all_limited1: if s4gdata.logmstar[i] >= 8.5 and s4gdata.logmstar[i] <= 11: logmstar = s4gdata.logmstar[i] weight = s4gdata.w25[i] if i in ii_barred_limited1: barFlag = 1 else: barFlag = 0 outf.write("%.3f %.3f %d\n" % (logmstar, weight, barFlag)) n_tot += 1 outf.close() print("%s: %d galaxies" % (ff, n_tot)) # SB and SAB separately # restrict things to logMstar = 8.5--11 to avoid low-mass galaxies with crazy-high Vmax weights # and tiny number of galaxies with logMstar > 11 ff = "SBpresence_vs_logmstar_for_R_w25_m8.5-11.txt" outf = open(dataDir+ff, 'w') outf.write("# Bar presence as function of log(M_star/M_sun) for D < 25 Mpc, with V_max weights\n") outf.write("logmstar weight SB\n") n_tot = 0 for i in ii_all_limited1: if s4gdata.logmstar[i] >= 8.5 and s4gdata.logmstar[i] <= 11: logmstar = s4gdata.logmstar[i] weight = s4gdata.w25[i] if i in ii_SB_limited1: barFlag = 1 else: barFlag = 0 outf.write("%.3f %.3f %d\n" % (logmstar, weight, barFlag)) n_tot += 1 outf.close() print("%s: %d galaxies" % (ff, n_tot)) ff = "SABpresence_vs_logmstar_for_R_w25_m8.5-11.txt" outf = open(dataDir+ff, 'w') outf.write("# Bar presence as function of log(M_star/M_sun) for D < 25 Mpc, with V_max weights\n") outf.write("logmstar weight SAB\n") n_tot = 0 for i in ii_all_limited1: if s4gdata.logmstar[i] >= 8.5 and s4gdata.logmstar[i] <= 11: logmstar = s4gdata.logmstar[i] weight = s4gdata.w25[i] if i in ii_SAB_limited1: barFlag = 1 else: barFlag = 0 outf.write("%.3f %.3f %d\n" % (logmstar, weight, barFlag)) n_tot += 1 outf.close() print("%s: %d galaxies" % (ff, n_tot)) # restrict things to logMstar = 8.5--11 to avoid low-mass galaxies with crazy-high Vmax weights # and tiny number of galaxies with logMstar > 11 ff = "barpresence_vs_logmstar-Re_for_R_w25.txt" outf = open(dataDir+ff, 'w') outf.write("# Bar presence as function of log(M_star/M_sun) and log(R_e) for D < 25 Mpc, with V_max weights\n") outf.write("logmstar logRe weight bar\n") n_tot = 0 for i in ii_all_limited1: if s4gdata.logmstar[i] >= 8.5 and s4gdata.logmstar[i] <= 11 and s4gdata.Re_kpc[i] > 0: logmstar = s4gdata.logmstar[i] logRe = math.log10(s4gdata.Re_kpc[i]) weight = s4gdata.w25[i] if i in ii_barred_limited1: barFlag = 1 else: barFlag = 0 outf.write("%.3f %.3f %.3f %d\n" % (logmstar, logRe, weight, barFlag)) n_tot += 1 outf.close() print("%s: %d galaxies" % (ff, n_tot)) ff = "barpresence_vs_logmstar-logfgas_for_R_w25.txt" outf = open(dataDir+ff, 'w') outf.write("# Bar presence as function of log(M_star/M_sun) and log(f_gas) for D < 25 Mpc, with V_max weights\n") outf.write("logmstar logfgas weight bar\n") n_tot = 0 for i in ii_all_limited1: if s4gdata.logmstar[i] >= 8.5 and s4gdata.logmstar[i] <= 11 and s4gdata.logfgas[i] < 3: logmstar = 
s4gdata.logmstar[i] logfgas = s4gdata.logfgas[i] weight = s4gdata.w25[i] if i in ii_barred_limited1: barFlag = 1 else: barFlag = 0 outf.write("%.3f %.3f %.3f %d\n" % (logmstar, logfgas, weight, barFlag)) n_tot += 1 outf.close() print("%s: %d galaxies" % (ff, n_tot)) ww25 = s4gdata.weight_BmVtc * s4gdata.w25 ff = "barpresence_vs_logmstar-gmr_for_R_w25.txt" outf = open(dataDir+ff, 'w') outf.write("# Bar presence as function of g-r for D < 25 Mpc and logMstar > 8.5, with B-V and V_max weights\n") outf.write("logmstar gmr weight bar\n") n_tot = 0 for i in ii_all_limited1_m8_5: if s4gdata.gmr_tc[i] >= -1: logmstar = s4gdata.logmstar[i] gmr = s4gdata.gmr_tc[i] weight = ww25[i] if i in ii_barred_limited1: barFlag = 1 else: barFlag = 0 outf.write("%.3f %.3f %.3f %d\n" % (logmstar, gmr, weight, barFlag)) n_tot += 1 outf.close() print("%s: %d galaxies" % (ff, n_tot))
s4gbars_main.ipynb
perwin/s4g_barfractions
bsd-3-clause
12b7befb0b012f42de13280ab3841b69
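The text files written above are meant to be read by the separate R notebook (s4gbars_R_logistic-regression.ipynb) for the weighted logistic fits. Purely as an illustration of what such a fit involves -- this is not part of the original pipeline, and the linear model below is simpler than the quadratic fit actually used for f(bar) vs logMstar -- the same kind of weighted logistic regression could be run in Python with statsmodels on one of these files:

import numpy as np
import statsmodels.api as sm

# hypothetical cross-check, not the paper's analysis: read one of the files written above
fname = dataDir + "barpresence_vs_logmstar_for_R_w25_m8.5-11.txt"
# skip the comment line and the column-header line
logmstar, weight, bar = np.loadtxt(fname, skiprows=2, unpack=True)

# linear logistic model P(bar) = 1 / (1 + exp(-(b0 + b1*logmstar))),
# with the V_max weights treated (approximately) as frequency weights
X = sm.add_constant(logmstar)
fit = sm.GLM(bar, X, family=sm.families.Binomial(), freq_weights=weight).fit()
print(fit.params)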
Figures

Figure 1

Left panel: Distances of galaxies in the S4G Parent Disk Sample vs stellar mass
plt.plot(s4gdata.dist, s4gdata.logmstar, 'ko', mfc='None', mec='k', ms=4)
plt.plot(s4gdata.dist[ii_barred], s4gdata.logmstar[ii_barred], 'ko', ms=3.5)
plt.axvline(25)
plt.axvline(30, ls='--')
plt.axhline(8.5)
plt.axhline(9, ls='--')
xlim(0,60)
plt.xlabel("Distance [Mpc]"); plt.ylabel(xtmstar)

if savePlots: plt.savefig(plotDir+"logMstar-vs-distance.pdf")
s4gbars_main.ipynb
perwin/s4g_barfractions
bsd-3-clause
44518c7ccd609d8edae67584d6fc1149
Right panel: $R_{25}$ vs distance for S4G spirals
# define extra subsample for plot: all spirals with log(M_star) >= 9
ii_logmstar9 = [i for i in ii_spirals if s4gdata.logmstar[i] >= 9]

plot(s4gdata.dist[ii_spirals], s4gdata.R25_kpc[ii_spirals], 'o', mfc='None', mec='0.25', ms=4)
plot(s4gdata.dist[ii_logmstar9], s4gdata.R25_kpc[ii_logmstar9], 'cD', mec='k', ms=4)
xlim(0,60)
xlabel("Distance [Mpc]"); ylabel(ytR25_kpc)

# push bottom of plot upwards so that x-axis label isn't clipped in PDF output
plt.subplots_adjust(bottom=0.14)

if savePlots: plt.savefig(plotDir+"R25-vs-distance.pdf")
s4gbars_main.ipynb
perwin/s4g_barfractions
bsd-3-clause
57fbca4abf144db2c0ba172d5e8ffe55
Figure 2

Left panel: $g - r$ vs stellar mass
# define extra subsamples for plot: galaxies with valid B-V_tc values; subsets at different distances
ii_bmv_good = [i for i in range(nDisksTotal) if s4gdata.BmV_tc[i] > -2]
iii25 = [i for i in ii_bmv_good if s4gdata.dist[i] <= 25]
iii25to30 = [i for i in ii_bmv_good if s4gdata.dist[i] > 25 and s4gdata.dist[i] <= 30]
iii_larger = [i for i in ii_bmv_good if s4gdata.dist[i] > 30]

plot(s4gdata.logmstar[iii_larger], s4gdata.gmr_tc[iii_larger], 's', mec='0.25', mfc='None', ms=4)
plot(s4gdata.logmstar[iii25to30], s4gdata.gmr_tc[iii25to30], 'mD', ms=4)
plot(s4gdata.logmstar[iii25], s4gdata.gmr_tc[iii25], 'ko', ms=5)
xlabel(xtmstar); ylabel(xtgmr)
xlim(7,11.5)
plt.subplots_adjust(bottom=0.14)

if savePlots: plt.savefig(plotDir+"gmr-vs-logmstar.pdf")
s4gbars_main.ipynb
perwin/s4g_barfractions
bsd-3-clause
f6733e80223e08521dea78d72e5aea16
Right panel: Gas mass ratio $f_{\rm gas}$ vs stellar mass
# define extra subsamples for plot: galaxies with valid H_I measurements; subsets at different distances
iii25 = [i for i in ii_spirals if s4gdata.M_HI[i] < 1.0e40 and s4gdata.dist[i] <= 25]
iii25to30 = [i for i in ii_spirals if s4gdata.M_HI[i] < 1.0e40 and s4gdata.dist[i] > 25 and s4gdata.dist[i] <= 30]
iii_larger = [i for i in ii_spirals if s4gdata.M_HI[i] < 1.0e40 and s4gdata.dist[i] > 30]

plot(s4gdata.logmstar[iii_larger], s4gdata.logfgas[iii_larger], 's', mec='0.25', mfc='None', ms=4)
plot(s4gdata.logmstar[iii25to30], s4gdata.logfgas[iii25to30], 'mD', ms=4)
plot(s4gdata.logmstar[iii25], s4gdata.logfgas[iii25], 'ko', ms=5)
xlabel(xtmstar); ylabel(xtfgas)
xlim(7,11.5)
plt.subplots_adjust(bottom=0.14)

if savePlots: plt.savefig(plotDir+"logfgas-vs-logmstar.pdf")
s4gbars_main.ipynb
perwin/s4g_barfractions
bsd-3-clause
c1bd1e4c226d34b28d552b802503bf70
Figure 4: Histogram of stellar masses in different subsamples
hist(s4gdata.logmstar, bins=np.arange(7,12,0.5), color='1.0', label="All", edgecolor='k')
hist(s4gdata.logmstar[ii_all_limited2], bins=np.arange(7,12,0.5), color='0.9', edgecolor='k', label=r"$D < 30$ Mpc")
hist(s4gdata.logmstar[ii_all_limited1], bins=np.arange(7,12,0.5), color='g', edgecolor='k', label=r"$D < 25$ Mpc")
xlabel(xtmstar);ylabel("N")
legend(fontsize=9, loc='upper left', framealpha=0.5)

if savePlots: savefig(plotDir+"logmstar_hist.pdf")
s4gbars_main.ipynb
perwin/s4g_barfractions
bsd-3-clause
3bd2ef5853912d78900302af72bf587a
Figure 5: Bar fraction as a function of stellar mass, color, and gas mass fraction

The code here is for the six individual panels of the figure.

Upper left panel: Bar frequency vs stellar mass
# load Barazza+2008 bar frequencies
logmstar_b08,fbar_b08,fbar_e_low_b08,fbar_e_high_b08 = GetBarazzaData(fbarLitDir+"fbar-vs-logmstar_barazza+2008.txt")

# load other SDSS-based bar frequencies
logmstar_na10,fbar_na10 = s4gutils.Read2ColumnProfile(fbarLitDir+"fbar-vs-logMstar_nair-abraham2010.txt")
logmstar_m12,fbar_m12 = s4gutils.Read2ColumnProfile(fbarLitDir+"fbar-vs-logmstar_masters+2012.txt")
logmstar_m14,fbar_m14 = s4gutils.Read2ColumnProfile(fbarLitDir+"fbar-vs-logmstar_melvin+2014.txt")
logmstar_g15,fbar_g15 = s4gutils.Read2ColumnProfile(fbarLitDir+"fbar-vs-logmstar_gavazzi+2015.txt")

# quadratic logistic fit (using weights) -- see R notebook s4gbars_R_logistic-regression.ipynb
# for determination of parameters
logistic_params = [-82.2446, 17.1052, -0.8801]
mm = np.arange(8.0,11.51,0.01)
logistic_fit2w = logistic_polyn(mm, logistic_params)

# plot SDSS-based bar frequencies
plt.plot(logmstar_na10, fbar_na10, '*', mfc="None",mec='c', ms=7,label='N&A 2010')
plt.plot(logmstar_m12, fbar_m12, 'D', mfc="None",mec='k', ms=7,label='Masters+2012')
plt.plot(logmstar_m14, fbar_m14, 's', mfc="0.75",mec='k', ms=5,label='Melvin+2014')
plt.plot(logmstar_g15, fbar_g15, '*', color='m', alpha=0.5, ms=7,label='Gavazzi+2015')

# plot S4G bar frequencies and quadratic logistic fit
pu.PlotFrequencyWithWeights(s4gdata.logmstar, s4gdata.w25, ii_barred_limited1, ii_unbarred_limited1, 8.0, 11.3,0.25, fmt='ro', mec='k', ms=9, noErase=True, label=ss1_bold)
plt.plot(mm, logistic_fit2w, 'r--', lw=1.5, label=s4g_txt_bold + " logistic fit")
plt.errorbar(logmstar_b08, fbar_b08, yerr=[fbar_e_low_b08,fbar_e_high_b08], fmt='bD',alpha=0.5, label='Barazza+2008')

plt.ylim(0,1)
plt.xlabel(xtmstar); plt.ylabel('Bar fraction')

# add weighted counts for S4G data
binranges = np.arange(8.0, 11.3,0.25)
i_all = ii_barred_limited1 + ii_unbarred_limited1
(n_all, bin_edges) = np.histogram(s4gdata.logmstar[i_all], binranges)
n_all_int = [round(n) for n in n_all]
for i in range(len(n_all_int)):
    x = binranges[i]
    n = n_all_int[i]
    text(x + 0.07, 0.025, "%3d" % n, fontsize=11.5, color='r')

# re-order labels in legend
ax = plt.gca()
handles,labels = ax.get_legend_handles_labels()
print(labels)
handles = [handles[5], handles[4], handles[6], handles[1], handles[2], handles[3], handles[0]]
labels = [labels[5], labels[4], labels[6], labels[1], labels[2], labels[3], labels[0]]
legend(handles, labels, loc="upper left", fontsize=10, ncol=4, framealpha=0.5)

# push bottom of plot upwards so that x-axis label isn't clipped in PDF output
plt.subplots_adjust(bottom=0.14)

if savePlots: plt.savefig(plotDir+"fbar-vs-logmstar.pdf")

print(labels)
s4gbars_main.ipynb
perwin/s4g_barfractions
bsd-3-clause
03b715bbba1af1d3c8ce4f76191d8ef3
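The helper functions logistic_polyn and logistic_lin used in this and later cells are defined elsewhere in the repository; the sketch below shows the standard logistic forms they are presumably implementing (an assumption based only on how they are called here). With the quadratic parameters quoted above, the argument -82.2446 + 17.1052*m - 0.8801*m^2 is maximized at m = 17.1052 / (2 * 0.8801) ≈ 9.72, i.e. the fitted bar fraction peaks near log(M_star/M_sun) ≈ 9.7.

import numpy as np

def logistic_lin(x, a, b):
    # presumed form: P = 1 / (1 + exp(-(a + b*x)))
    x = np.asarray(x)
    return 1.0 / (1.0 + np.exp(-(a + b*x)))

def logistic_polyn(x, params):
    # presumed form: logistic function of a polynomial,
    # params = [p0, p1, p2, ...] giving p0 + p1*x + p2*x**2 + ...
    x = np.asarray(x)
    arg = sum(p * x**n for n, p in enumerate(params))
    return 1.0 / (1.0 + np.exp(-arg))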
Upper right panel: SB and SAB frequencies vs stellar mass
pu.PlotFrequencyWithWeights(s4gdata.logmstar, s4gdata.w25, ii_SB_limited1, ii_nonSB_limited1, 8.0, 11.3, 0.25, fmt='ko',ms=8, label=r'SB (S$^{4}$G: $D \leq 25$ Mpc)')
pu.PlotFrequencyWithWeights(s4gdata.logmstar, s4gdata.w25, ii_SAB_limited1, ii_nonSAB_limited1, 8.0, 11.3, 0.25, offset=0.03, fmt='co', mec='k', ms=8, noErase=True, label=r'SAB (S$^{4}$G: $D \leq 25$ Mpc)')

plt.ylim(0,1)
plt.xlabel(xtmstar); plt.ylabel('Bar fraction')
legend(fontsize=10, loc='upper left', framealpha=0.5)

# push bottom of plot upwards so that x-axis label isn't clipped in PDF output
plt.subplots_adjust(bottom=0.14)

if savePlots: plt.savefig(plotDir+"fSB-fSAB-vs-logmstar.pdf")
s4gbars_main.ipynb
perwin/s4g_barfractions
bsd-3-clause
d17c2b942a215a82e62a3f28397b4cba
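pu.PlotFrequencyWithWeights, used throughout this section, comes from the plotting utilities imported earlier in the notebook; the stand-in below is only a guess at its core calculation (the weighted fraction of the "detected" subsample per bin), without the error bars or plotting that the real function provides.

import numpy as np

def weighted_fraction_per_bin(values, weights, ii_yes, ii_no, xmin, xmax, dx):
    # illustrative only: V_max-weighted fraction of the "yes" subsample in each bin
    edges = np.arange(xmin, xmax + dx, dx)
    centers = 0.5 * (edges[:-1] + edges[1:])
    fractions = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        w_yes = sum(weights[i] for i in ii_yes if lo <= values[i] < hi)
        w_all = w_yes + sum(weights[i] for i in ii_no if lo <= values[i] < hi)
        fractions.append(w_yes / w_all if w_all > 0 else np.nan)
    return centers, np.array(fractions)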
Middle left panel: Bar frequency vs color
gmr_b08,fbar_b08,fbar_e_low_b08,fbar_e_high_b08 = GetBarazzaData(fbarLitDir+"fbar-vs-gmr_barazza+2008.txt")
gmr_na10,fbar_na10 = s4gutils.Read2ColumnProfile(fbarLitDir+"fbar-vs-gmr_nair-abraham2010.txt")
gmr_m11,fbar_m11 = s4gutils.Read2ColumnProfile(fbarLitDir+"fbar-vs-gmr_masters+2011.txt")
gmr_m12,fbar_m12 = s4gutils.Read2ColumnProfile(fbarLitDir+"fbar-vs-gmr_masters+2012.txt")
gmr_lee12,fbar_lee12 = s4gutils.Read2ColumnProfile(fbarLitDir+"fbar-vs-gmr_lee+2012.txt")

# calculate weights: product of color and V/V_max weights
ww25 = s4gdata.weight_BmVtc * s4gdata.w25
ww30 = s4gdata.weight_BmVtc * s4gdata.w30

plt.plot(gmr_na10, fbar_na10, '*', color='c', mec='k', alpha=0.5, ms=7, label='N&A 2010')
plt.plot(gmr_m11, fbar_m11, 's', color='0.7', mec='k', label='Masters+2011')
plt.plot(gmr_m12, fbar_m12, 'D', mfc="None", mec='k', ms=7, label='Masters+2012')
plt.plot(gmr_lee12, fbar_lee12, 'v', mfc="0.9", mec='k', ms=7, label='Lee+2012')
pu.PlotFrequencyWithWeights(s4gdata.gmr_tc, ww25, ii_barred_limited1_m8_5, ii_unbarred_limited1_m8_5, -0.2,1.0,0.1, noErase=True, fmt='ro', mec='k', ms=9, label=ss1m_bold)
plt.errorbar(gmr_b08, fbar_b08, yerr=[fbar_e_low_b08,fbar_e_high_b08], fmt='bD',alpha=0.5, label='Barazza+2008')

# linear logistic regression for S4G galaxies
gmrvect = np.arange(0,1.1, 0.1)
plot(gmrvect, logistic_lin(gmrvect, 0.4544, -0.4394), 'r--', lw=1.5, label=s4g_txt_bold + " logistic fit")

plt.xlabel(xtgmr); plt.ylabel('Bar fraction')
xlim(0,1);ylim(0,1)

# add weighted counts for S4G data
binranges = np.arange(-0.2,1.0,0.1)
i_all = ii_barred_limited1_m8_5 + ii_unbarred_limited1_m8_5
(n_all, bin_edges) = np.histogram(s4gdata.gmr_tc[i_all], binranges)
n_all_int = [round(n) for n in n_all]
for i in range(2, len(n_all_int)):
    x = binranges[i]
    n = n_all_int[i]
    text(x + 0.035, 0.025, "%3d" % n, fontsize=11.5, color='r')

# re-order labels in legend
ax = plt.gca()
handles,labels = ax.get_legend_handles_labels()
print(labels)
handles = [handles[5], handles[4], handles[6], handles[0], handles[1], handles[2], handles[3]]
labels = [labels[5], labels[4], labels[6], labels[0], labels[1], labels[2], labels[3]]
legend(handles, labels, loc="upper left", fontsize=10, ncol=3, framealpha=0.5)

# push bottom of plot upwards so that x-axis label isn't clipped in PDF output
plt.subplots_adjust(bottom=0.14)

if savePlots: plt.savefig(plotDir+"fbar-vs-gmr_corrected_all.pdf")
s4gbars_main.ipynb
perwin/s4g_barfractions
bsd-3-clause
16b25b40b081f26290c1c7e9ccd70977
Middle right panel: SB and SAB frequencies vs color
pu.PlotFrequencyWithWeights(s4gdata.gmr_tc, ww25, ii_SB_limited1_m8_5, ii_nonSB_limited1_m8_5, -0.2,1,0.1, fmt='ko', ms=8, label="SB ("+ss1m+")")
pu.PlotFrequencyWithWeights(s4gdata.gmr_tc, ww25, ii_SAB_limited1_m8_5, ii_nonSAB_limited1_m8_5, -0.2,1,0.1, fmt='co', mec='k', ms=8, noErase=True, label="SAB ("+ss1m+")")

plt.ylim(0,1)
plt.xlabel(xtgmr); plt.ylabel('Bar fraction')
legend(loc="upper left", fontsize=10, framealpha=0.5)

# push bottom of plot upwards so that x-axis label isn't clipped in PDF output
plt.subplots_adjust(bottom=0.14)

if savePlots: plt.savefig(plotDir+"fSB-fSAB-vs-gmr_corrected.pdf")
s4gbars_main.ipynb
perwin/s4g_barfractions
bsd-3-clause
e686b2ebf703323377c6b473558c8609
Lower left panel: Bar frequency vs gas mass ratio
logfgas_m12,fbar_m12 = s4gutils.Read2ColumnProfile(fbarLitDir+"fbar-vs-logfgas_masters+2012.txt")
logfgas_cs17_raw,fbar_cs17 = s4gutils.Read2ColumnProfile(fbarLitDir+"fbar-vs-logfgas_cervantes_sodi2017.txt")
# correct CS17 values from log M_{HI + He}/M_{star} to log M_{HI}/M_{star}
logfgas_cs17 = logfgas_cs17_raw - 0.146

plt.clf();pu.PlotFrequencyWithWeights(s4gdata.logfgas, s4gdata.w25, ii_barred_limited1_m8_5, ii_unbarred_limited1_m8_5, -3,2,0.5, fmt='ro', mec='k', ms=9, label=ss1m_bold)
plt.plot(logfgas_m12, fbar_m12, 'D', mfc="None",mec='k', ms=7,label='Masters+2012')
plt.plot(logfgas_cs17, fbar_cs17, '*', color='0.75', mec='k', ms=8,label='Cervantes Sodi 2017')

# linear logistic regression for S4G galaxies
fgasvect = np.arange(-3, 1.01, 0.01)
plot(fgasvect, logistic_lin(fgasvect, 0.42456, 0.03684), 'r--', lw=1.5, label=s4g_txt_bold + " logistic fit")

plt.xlabel(xtfgas);plt.ylabel('Bar fraction')
plt.ylim(0,1);plt.xlim(-3,1)

# add weighted counts for S4G data
binranges = np.arange(-3,2,0.5)
i_all = ii_barred_limited1_m8_5 + ii_unbarred_limited1_m8_5
(n_all, bin_edges) = np.histogram(s4gdata.logfgas[i_all], binranges)
n_all_int = [round(n) for n in n_all]
for i in range(0, len(n_all_int) - 1):
    x = binranges[i]
    n = n_all_int[i]
    text(x + 0.2, 0.025, "%3d" % n, fontsize=11.5, color='r')

# re-order labels in legend
ax = plt.gca()
handles,labels = ax.get_legend_handles_labels()
print(labels)
handles = [handles[3], handles[2], handles[0], handles[1]]
labels = [labels[3], labels[2], labels[0], labels[1]]
legend(handles, labels, loc="upper right", fontsize=10, ncol=2, framealpha=0.5)

# push bottom of plot upwards so that x-axis label isn't clipped in PDF output
plt.subplots_adjust(bottom=0.14)

if savePlots: savefig(plotDir+"fbar-vs-fgas.pdf")
s4gbars_main.ipynb
perwin/s4g_barfractions
bsd-3-clause
ceeb8ca4fcc64f430cdd7d09d84695fd
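The 0.146 dex offset applied to the Cervantes Sodi (2017) gas fractions above presumably removes the usual helium correction factor of 1.4 (M_HI+He = 1.4 M_HI), since:

import math
# assumed origin of the 0.146 dex shift subtracted above
print(math.log10(1.4))   # 0.146...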
Lower right panel: SB and SAB frequencies vs gas mass ratio
pu.PlotFrequencyWithWeights(s4gdata.logfgas, s4gdata.w25, ii_SB_limited1_m8_5, ii_nonSB_limited1_m8_5, -3,1.5,0.5, fmt='ko', ms=8, label="SB ("+ss1m+")")
pu.PlotFrequencyWithWeights(s4gdata.logfgas, s4gdata.w25, ii_SAB_limited1_m8_5, ii_nonSAB_limited1_m8_5, -3,1.5,0.5, fmt='co', mec='k', ms=8, noErase=True, label="SAB ("+ss1m+")")
plt.legend(loc='upper left',fontsize=10, framealpha=0.5)

plt.ylim(0,1);xlim(-3,1)
plt.xlabel(xtfgas); plt.ylabel('Bar fraction')

# push bottom of plot upwards so that x-axis label isn't clipped in PDF output
plt.subplots_adjust(bottom=0.14)

if savePlots: plt.savefig(plotDir+"fSB-fSAB-vs-fgas.pdf")
s4gbars_main.ipynb
perwin/s4g_barfractions
bsd-3-clause
b3a996f7b35673f33f4157c42953edac
Figure A1

We generate an interpolating spline using an edited version of the actual binned f(B_tc) values -- basically, we ensure that the spline interpolation goes smoothly to 0 for faint magnitudes and smoothly to 1 for bright magnitudes.
# generate Akima spline interpolation for f(B-V) as function of B_tc
x_Btc = [7.0, 8.25, 8.75, 9.25, 9.75, 10.25, 10.75, 11.25, 11.75, 12.25, 12.75, 13.25, 13.75, 14.25, 14.75, 15.25, 15.75, 16.25]
y_fBmV = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.9722222222222222, 0.8840579710144928, 0.8125, 0.6222222222222222, 0.5632183908045977, 0.4074074074074074, 0.2727272727272727, 0.3442622950819672, 0.2978723404255319, 0.10714285714285714, 0.01, 0.0]
fBmV_akimaspline = scipy.interpolate.Akima1DInterpolator(x_Btc, y_fBmV)
xx = np.arange(7,17,0.1)

pu.PlotFrequency(s4gdata.B_tc, ii_d30_bmv_good, ii_d30_bmv_missing, 7,16.5,0.5, fmt='ko', label=ss1)
pu.PlotFrequency(s4gdata.B_tc, ii_d25_bmv_good, ii_d25_bmv_missing, 7,16.5,0.5, fmt='ro', label=ss2, noErase=True)
plot(xx, fBmV_akimaspline(xx), color='k', ls='--')
xlim(16.5,7); ylim(0,1)
xlabel(xtmB); ylabel(r"Fraction of galaxies with $(B - V)_{\rm tc}$")
legend(fontsize=10,loc='upper left', framealpha=0.5)

if savePlots: savefig(plotDir+"f_bmv-vs-btc-with-spline.pdf")
s4gbars_main.ipynb
perwin/s4g_barfractions
bsd-3-clause
624c908354360846d350220b347d6f17
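The interpolated completeness curve f(B_tc) above is what underlies the B-V weights (weight_BmVtc) used earlier. The function below is a hypothetical reconstruction of how such weights could be built from the spline -- the actual weights are computed elsewhere in the repository, and the clipping floor here is an arbitrary choice to avoid dividing by the near-zero completeness at the faint end:

import numpy as np

def bmv_completeness_weights(B_tc, completeness_spline, floor=0.05):
    # hypothetical: weight each galaxy by 1 / f(B_tc), with f clipped to [floor, 1];
    # values outside the tabulated B_tc range (where the spline returns NaN)
    # would need separate handling
    f = np.clip(completeness_spline(np.asarray(B_tc)), floor, 1.0)
    return 1.0 / f

# e.g.: w_BmV = bmv_completeness_weights(s4gdata.B_tc, fBmV_akimaspline)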
Figure A2

Left panel
pu.PlotFrequencyWithWeights(s4gdata.BmV_tc, s4gdata.weight_BmVtc, ii_barred_limited2_m9, ii_unbarred_limited2_m9, 0,1,0.1, fmt='ko', ms=9, label=ss2m);
pu.PlotFrequencyWithWeights(s4gdata.BmV_tc, s4gdata.weight_BmVtc, ii_barred_limited1_m8_5, ii_unbarred_limited1_m8_5, 0,1,0.1, offset=0.01, fmt='ro', ms=9, noErase=True, label=ss1m)

plt.xlabel(xtBmV_tc);plt.ylabel('Bar fraction')
plt.ylim(0,1)
plt.legend(fontsize=10, framealpha=0.5)

# push bottom of plot upwards so that x-axis label isn't clipped in PDF output
plt.subplots_adjust(bottom=0.14)

if savePlots: plt.savefig(plotDir+"fbar-vs-BmV_corrected.pdf")
s4gbars_main.ipynb
perwin/s4g_barfractions
bsd-3-clause
b678fcb18504b3020429c7101e85ed98
Right panel
ww25 = s4gdata.weight_BmVtc * s4gdata.w25
ww30 = s4gdata.weight_BmVtc * s4gdata.w30

pu.PlotFrequencyWithWeights(s4gdata.BmV_tc, ww25, ii_SB_limited1_m8_5, ii_nonSB_limited1_m8_5, -0.2,1,0.1, fmt='ko', ms=8, label="SB ("+ss1+")")
pu.PlotFrequencyWithWeights(s4gdata.BmV_tc, ww30, ii_SAB_limited1_m8_5, ii_nonSAB_limited1_m8_5, -0.2,1,0.1, fmt='co', ms=8, noErase=True, label="SAB ("+ss1+")")

plt.ylim(0,1)
plt.xlabel(xtBmV_tc)
plt.ylabel('Bar fraction')
legend(loc="upper right", fontsize=10, framealpha=0.5)

# push bottom of plot upwards so that x-axis label isn't clipped in PDF output
plt.subplots_adjust(bottom=0.14)

if savePlots: plt.savefig(plotDir+"fSB-fSAB-vs-BmV_corrected.pdf")
s4gbars_main.ipynb
perwin/s4g_barfractions
bsd-3-clause
dc6438f82ba050e6f61adb21b23f8729
Figure B1

Upper left panel
# load Diaz-Garcia+2016a fractions
logmstar_dg16,fbar_dg16 = s4gutils.Read2ColumnProfile(fbarLitDir+"fbar-vs-logMstar_diaz-garcia+2016a.txt")

pu.PlotFrequency(s4gdata.logmstar, ii_barred_limited1, ii_unbarred_limited1, 8.0, 11.3, 0.25, fmt='ro', ms=9, label=ss1)
pu.PlotFrequency(s4gdata.logmstar, ii_barred_limited2, ii_unbarred_limited2, 8.0, 11.3, 0.25, offset=0.02, fmt='ro', mfc='None', mew=1, mec='r', ms=8,noErase=True, label=ss2)
plt.plot(logmstar_dg16,fbar_dg16, 's', mfc="0.75",mec='k', ms=7,label='Díaz-García+2016a')

plt.ylim(0,1)
plt.xlabel(xtmstar)
plt.ylabel('Bar fraction')
legend(loc="upper left", fontsize=10, framealpha=0.5)

# push bottom of plot upwards so that x-axis label isn't clipped in PDF output
plt.subplots_adjust(bottom=0.14)

if savePlots: plt.savefig(plotDir+"fbar-vs-logmstar_2sample.pdf")
s4gbars_main.ipynb
perwin/s4g_barfractions
bsd-3-clause
e7b6b38f0117cc8888d629a816dc71b5
Upper right panel
pu.PlotFrequencyWithWeights(s4gdata.logmstar, s4gdata.w25, ii_SB_limited1, ii_nonSB_limited1, 8.0, 11.3, 0.25, fmt='ko', ms=8, label=r'SB (S$^{4}$G: $D \leq 25$ Mpc)')
pu.PlotFrequencyWithWeights(s4gdata.logmstar, s4gdata.w30, ii_SB_limited2, ii_nonSB_limited2, 8.0, 11.3, 0.25, noErase=True, ms=8, fmt='ko', mfc='None', offset=0.02, label=r'SB (S$^{4}$G: $D \leq 30$ Mpc)')
pu.PlotFrequencyWithWeights(s4gdata.logmstar, s4gdata.w25, ii_SAB_limited1, ii_nonSAB_limited1, 8.0, 11.3, 0.25, noErase=True, ms=8, fmt='co', label=r'SAB (S$^{4}$G: $D \leq 25$ Mpc)')
pu.PlotFrequencyWithWeights(s4gdata.logmstar, s4gdata.w30, ii_SAB_limited2, ii_nonSAB_limited2, 8.0, 11.3, 0.25, noErase=True, ms=8, fmt='co', mfc='None', mec='c', offset=0.02, label=r'SAB (S$^{4}$G: $D \leq 30$ Mpc)')
#pu.PlotFrequency(s4gdata.logmstar, ii_barred_limited1, ii_unbarred_limited1, 8.0, 11.3, 0.25, fmt='ro', ms=9, label=ss2)
#pu.PlotFrequency(s4gdata.logmstar, ii_barred_limited2, ii_unbarred_limited2, 8.0, 11.3, 0.25, offset=0.02, fmt='ro', mfc='None', mew=1, mec='r', ms=8,noErase=True, label=ss1)

plt.ylim(0,1)
plt.xlabel(xtmstar)
plt.ylabel('Bar fraction')
legend(loc="upper left", ncol=2, fontsize=10, framealpha=0.5)

# push bottom of plot upwards so that x-axis label isn't clipped in PDF output
plt.subplots_adjust(bottom=0.14)

if savePlots: plt.savefig(plotDir+"fSB-fSAB-vs-logmstar_2sample.pdf")
s4gbars_main.ipynb
perwin/s4g_barfractions
bsd-3-clause
fd18968b762985a2a6caac4c0f8a0617
Left middle panel
pu.PlotFrequencyWithWeights(s4gdata.gmr_tc, ww25, ii_barred_limited1_m8_5, ii_unbarred_limited1_m8_5, -0.2,1.0,0.1, fmt='ro', ms=9, label=ss1m)
pu.PlotFrequencyWithWeights(s4gdata.gmr_tc, ww30, ii_barred_limited2_m9, ii_unbarred_limited2_m9, -0.2,1.0,0.1, offset=0.01, fmt='ro', mfc='None', mew=1, mec='r', ms=8, noErase=True, label=ss2m)

plt.xlabel(xtgmr)
plt.ylabel('Bar fraction')
xlim(0,1);ylim(0,1)
legend(loc="upper left", fontsize=9, framealpha=0.5)

# push bottom of plot upwards so that x-axis label isn't clipped in PDF output
plt.subplots_adjust(bottom=0.14)

if savePlots: plt.savefig(plotDir+"fbar-vs-gmr_corrected_2sample.pdf")
s4gbars_main.ipynb
perwin/s4g_barfractions
bsd-3-clause
df9c439edbac799f94d7e5d40230450e
Right middle panel
pu.PlotFrequencyWithWeights(s4gdata.gmr_tc, ww25, ii_SB_limited1_m8_5, ii_nonSB_limited1_m8_5, 0,1,0.1, fmt='ko', ms=8, label="SB ("+ss1m+")")
pu.PlotFrequencyWithWeights(s4gdata.gmr_tc, ww30, ii_SB_limited2_m9, ii_nonSB_limited2_m9, 0,1,0.1, noErase=True, ms=8, fmt='ko', mfc='None', offset=0.01, label="SB ("+ss2m+")")
pu.PlotFrequencyWithWeights(s4gdata.gmr_tc, ww25, ii_SAB_limited1_m8_5, ii_nonSAB_limited1_m8_5, 0,1,0.1, noErase=True, fmt='co', ms=8, label="SAB ("+ss1m+")")
pu.PlotFrequencyWithWeights(s4gdata.gmr_tc, ww30, ii_SAB_limited2_m9, ii_nonSAB_limited2_m9, 0,1,0.1, noErase=True, ms=8, fmt='co', mfc='None', mew=1, mec='c', offset=0.01, label="SAB ("+ss2m+")")

plt.ylim(0,1)
plt.xlabel(xtgmr)
plt.ylabel('Bar fraction')
legend(loc="upper left", fontsize=10, framealpha=0.5)

# push bottom of plot upwards so that x-axis label isn't clipped in PDF output
plt.subplots_adjust(bottom=0.14)

if savePlots: plt.savefig(plotDir+"fSB-fSAB-vs-gmr_corrected_2sample.pdf")
s4gbars_main.ipynb
perwin/s4g_barfractions
bsd-3-clause
85fc06efbff7f9a0a945e6ede32b8db4