In such a case, the condition evaluates to False and the print call included in the indented statement is simply skipped. However, showing (or rather creating) no output at all is not always desirable, and in the majority of use cases there will be a pool of two or more possibilities that need to be taken into consideration. In order to overcome such limitations of simple if statements, we can insert an else clause followed by a code body that is executed once the initial condition evaluates to False ('else code' in the above figure), which makes our code behave more informatively.
if n > 0:
    print("Larger than zero.")
else:
    print("Smaller than or equal to zero.")
docs/mpg-if_error_continue/examples/e-02-2_conditionals.ipynb
marburg-open-courseware/gmoc
mit
e11f3432f2de61cc0e80683c584c664e
At this point, be aware that the lines including if and else are not indented, whereas the related code bodies are. Due to the black-and-white nature of such an if-else statement, exactly one out of two possible blocks is executed. Starting from the top, the if statement is evaluated and returns False (because n is not larger than zero), hence the indented 'if code' underneath is skipped. Our code now jumps directly to the next statement that is not indented and evaluates the else statement included therein. Since else means: "Perform the following operation if all previous conditional statements (plural!) failed", which evaluates to True in our case, the subsequent print operation is executed. Now, what if we wanted to know if a value is larger than, smaller than, or equal to zero, i.e. add another layer of information to our initial condition "Is a value larger than zero or not?". In order to solve this, elif (short for 'else if' in other languages) is the right way to go as it lets you insert an arbitrary number of additional conditions between if and else that go beyond the rather basic capabilities of else.
if n > 0:
    print("Larger than zero.")
elif n < 0:
    print("Smaller than zero.")
else:
    print("Exactly zero.")
docs/mpg-if_error_continue/examples/e-02-2_conditionals.ipynb
marburg-open-courseware/gmoc
mit
22ba017d279c3842973bfe12f9993fcb
And similarly,
p = 0

if p > 0:
    print("Larger than zero.")
elif p < 0:
    print("Smaller than zero.")
else:
    print("Exactly zero.")
docs/mpg-if_error_continue/examples/e-02-2_conditionals.ipynb
marburg-open-courseware/gmoc
mit
c59b243ff0f1106a128e11b6c4f320b3
Of course, indented blocks can have more than one statement, i.e. consist of multiple indented lines of code. In addition, they can contain, or be contained in, for or while loops. For example, if we wanted to count all the non-negative entries in a list, the following code snippet would be a proper solution that relies on both of the aforementioned features.
x = [0, 3, -6, -2, 7, 1, -4]

## set a counter
n = 0

for i in range(len(x)):
    # if a non-negative integer is found, increment the counter by 1
    if x[i] >= 0:
        print("The value at position", i, "is larger than or equal to zero.")
        n += 1
    # else do not increment the counter
    else:
        print("The value at position", i, "is smaller than zero.")

    if i == (len(x)-1):
        print("\n")
        print(n, "out of", len(x), "elements are larger than or equal to zero.")
docs/mpg-if_error_continue/examples/e-02-2_conditionals.ipynb
marburg-open-courseware/gmoc
mit
0ea283a4baf9d33c1bfacad9c68555c6
<hr> Brief digression: continue and break There are (at least) two key words that allow for an even finer control of what happens inside a for loop, viz. continue and break. As the name implies, continue moves directly on to the next iteration of a loop without executing the remaining code body.
for i in range(5):
    if i in [1, 3]:
        continue
    print(i)
docs/mpg-if_error_continue/examples/e-02-2_conditionals.ipynb
marburg-open-courseware/gmoc
mit
395b00799cd89e35fed1642b9f343420
break, on the other hand, breaks out of the innermost loop. Here, neither (i) the remaining code body following the break statement in the current iteration nor (ii) any outstanding iterations are executed anymore.
for i in range(5):
    if i == 2:
        break
    print(i)
docs/mpg-if_error_continue/examples/e-02-2_conditionals.ipynb
marburg-open-courseware/gmoc
mit
d41a22ed158449ab885b04a9bcfafe11
Breadth-First Search Breadth-first search (BFS) is an algorithm that can find the closest members in a graph that match a certain search criterion. BFS requires that we model our problem as a graph (nodes connected through edges). BFS can be applied to directed and undirected graphs, where it can be used to answer two types of question: Is there a connection between a particular pair of nodes? Which is the closest node to a given node that satisfies a certain criterion? To answer these questions, BFS starts by checking all direct neighbors of a given node -- neighbors are nodes that have a direct connection to a particular node. Then, if none of those neighbors satisfies the criterion that we are looking for, the search is expanded to the neighbors of the nodes we just checked, and so on, until a match is found or all nodes in the graph have been checked. To keep track of the nodes that we have already checked and that we are going to check, we need two additional data structures: 1) A hash table to keep track of nodes we have already checked. If we don't check for previously checked nodes, we may end up in cycles depending on the structure of the graph. 2) A queue that stores the items to be checked. Representing the graph To represent the graph, its nodes and edges, we can simply use a hash table such as Python's built-in dictionaries. Imagine we have an undirected, social network graph that lists our direct friends (Elijah, Marissa, Nikolai) and friends of friends: <img src="images/breadth-first-search/friend-graph-1.jpg" alt="" style="width: 400px;"/> Say we are going to move to a new apartment next weekend, and we want to ask our friends if they have a pick-up truck that could be helpful in this endeavor. First, we would reach out to our direct friends (or 1st degree connections). If none of these has a pick-up truck, we ask them to ask their 1st degree connections (which are our 2nd degree connections), and so forth: <img src="images/breadth-first-search/friend-graph-2.jpg" alt="" style="width: 600px;"/> We can represent such a graph using a simple hash table (here: a Python dictionary) as follows:
graph = {}
graph['You'] = ['Elijah', 'Marissa', 'Nikolai', 'Cassidy']
graph['Elijah'] = ['You']
graph['Marissa'] = ['You']
graph['Nikolai'] = ['John', 'Thomas', 'You']
graph['Cassidy'] = ['John', 'You']
graph['John'] = ['Cassidy', 'Nikolai']
graph['Thomas'] = ['Nikolai', 'Mario']
graph['Mario'] = ['Thomas']
ipython_nbs/search/breadth-first-search.ipynb
rasbt/algorithms_in_ipython_notebooks
gpl-3.0
214cae5de1a857c9a4694ddec497f079
The Queue data structure Next, let's set up a simple queue data structure. Of course, we could also use a regular Python list like a queue (using .insert(0, x) and .pop()), but this way, our breadth-first search implementation is arguably more illustrative. For more information about queues, please see the Queues and Deques notebook.
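As a quick aside, here is a minimal sketch of that list-based alternative (shown only for comparison; everything below uses the dedicated Queue class instead):

# Hypothetical illustration of using a plain list as a FIFO queue:
# enqueue at the front with insert(0, x), dequeue from the back with pop()
simple_queue = []
simple_queue.insert(0, 'a')   # enqueue 'a'
simple_queue.insert(0, 'b')   # enqueue 'b'
print(simple_queue.pop())     # 'a' comes out first (first in, first out)
print(simple_queue.pop())     # then 'b'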
class QueueItem():
    def __init__(self, value, pointer=None):
        self.value = value
        self.pointer = pointer


class Queue():
    def __init__(self):
        self.last = None
        self.first = None
        self.length = 0

    def enqueue(self, value):
        item = QueueItem(value, None)
        if self.last:
            self.last.pointer = item
        if not self.first:
            self.first = item
        self.last = item
        self.length += 1

    def dequeue(self):
        if self.first is not None:
            value = self.first.value
            self.first = self.first.pointer
            self.length -= 1
        else:
            value = None
        return value


qe = Queue()

qe.enqueue('a')
print('First element:', qe.first.value)
print('Last element:', qe.last.value)
print('Queue length:', qe.length)

qe.enqueue('b')
print('First element:', qe.first.value)
print('Last element:', qe.last.value)
print('Queue length:', qe.length)

qe.enqueue('c')
print('First element:', qe.first.value)
print('Last element:', qe.last.value)
print('Queue length:', qe.length)

val = qe.dequeue()
print('Dequeued value:', val)
print('Queue length:', qe.length)

val = qe.dequeue()
print('Dequeued value:', val)
print('Queue length:', qe.length)

val = qe.dequeue()
print('Dequeued value:', val)
print('Queue length:', qe.length)

val = qe.dequeue()
print('Dequeued value:', val)
print('Queue length:', qe.length)

qe.enqueue('c')
print('First element:', qe.first.value)
print('Last element:', qe.last.value)
print('Queue length:', qe.length)
ipython_nbs/search/breadth-first-search.ipynb
rasbt/algorithms_in_ipython_notebooks
gpl-3.0
c216cea08581ce770f7b1030ac7c139e
Implementing breadth-first search to find the shortest path Now, back to the graph, where we want to identify the closest connection who owns a truck, which could be helpful for moving (if we are allowed to borrow it, that is): <img src="images/breadth-first-search/friend-graph-2.jpg" alt="" style="width: 600px;"/>
graph = {}
graph['You'] = ['Elijah', 'Marissa', 'Nikolai', 'Cassidy']
graph['Elijah'] = ['You']
graph['Marissa'] = ['You']
graph['Nikolai'] = ['John', 'Thomas', 'You']
graph['Cassidy'] = ['John', 'You']
graph['John'] = ['Cassidy', 'Nikolai']
graph['Thomas'] = ['Nikolai', 'Mario']
graph['Mario'] = ['Thomas']
ipython_nbs/search/breadth-first-search.ipynb
rasbt/algorithms_in_ipython_notebooks
gpl-3.0
d347ec946cedf63d2851223e64e938e0
For simplicity, let's assume we have a function that checks whether a person owns a pick-up truck. (Say, Mario owns a pick-up truck; the check function knows it, but we don't.)
def has_truck(person):
    if person == 'Mario':
        return True
    else:
        return False
ipython_nbs/search/breadth-first-search.ipynb
rasbt/algorithms_in_ipython_notebooks
gpl-3.0
0337c01dcc32229f302db57c7c8714c6
Now, the breadth_first_search implementation below will check our closest neighbors first; then it will check the neighbors of our neighbors, and so forth. We will make use of both the graph we constructed and the Queue data structure that we implemented. Also, note that we are keeping track of people we already checked to prevent cycles in our search:
def breadth_first_search(graph):
    # initialize queue
    queue = Queue()
    for person in graph['You']:
        queue.enqueue(person)

    people_checked = set()
    degree = 0

    while queue.length:
        person = queue.dequeue()
        if has_truck(person):
            return person
        else:
            degree += 1
            people_checked.add(person)
            for next_person in graph[person]:
                # check to prevent endless cycles
                if next_person not in people_checked:
                    queue.enqueue(next_person)


breadth_first_search(graph)
ipython_nbs/search/breadth-first-search.ipynb
rasbt/algorithms_in_ipython_notebooks
gpl-3.0
ef0d5bc5da3e57f9bea56c7a987f970e
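A possible extension of the above (not part of the original notebook, just a hedged sketch): the same breadth-first traversal, but recording each person's predecessor so that the chain of friends leading to the truck owner can be reconstructed. It reuses the graph and has_truck defined above, and Python's collections.deque in place of the custom Queue class.

from collections import deque

def breadth_first_search_with_path(graph, start='You'):
    # enqueue the start node's direct neighbors and remember who introduced them
    queue = deque(graph[start])
    predecessor = {person: start for person in graph[start]}
    while queue:
        person = queue.popleft()
        if has_truck(person):
            # walk the predecessor chain back to the start node
            path = [person]
            while path[-1] != start:
                path.append(predecessor[path[-1]])
            return list(reversed(path))
        for next_person in graph[person]:
            if next_person not in predecessor and next_person != start:
                predecessor[next_person] = person
                queue.append(next_person)
    return None

print(breadth_first_search_with_path(graph))  # e.g. ['You', 'Nikolai', 'Thomas', 'Mario']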
Length range Print out the gene names for all genes between 90 and 110 bases long.
import csv

with open('data.csv') as csvfile:
    raw_data = csv.reader(csvfile)
    for row in raw_data:
        # both bounds must hold, hence a chained comparison rather than "or"
        if 90 <= len(row[1]) <= 110:
            print(row[2])
Week_06/Week06 - 01 - Homework Solutions.ipynb
biof-309-python/BIOF309-2016-Fall
mit
67b590d6b31a2b58b3ca842fea0b9db8
AT content Print out the gene names for all genes whose AT content is less than 0.5 and whose expression level is greater than 200.
def is_at_rich(dna):
    # despite the name, this returns True when the AT content is below 0.5,
    # matching the exercise condition
    length = len(dna)
    a_count = dna.upper().count('A')
    t_count = dna.upper().count('T')
    at_content = (a_count + t_count) / length
    return at_content < 0.5


import csv

with open('data.csv') as csvfile:
    raw_data = csv.reader(csvfile)
    for row in raw_data:
        if is_at_rich(row[1]) and int(row[3]) > 200:
            print(row[2])
Week_06/Week06 - 01 - Homework Solutions.ipynb
biof-309-python/BIOF309-2016-Fall
mit
f6ed7e85533a8bc1d0083f7286254ad3
Complex condition Print out the gene names for all genes whose name begins with “k” or “h” except those belonging to Drosophila melanogaster.
import csv

with open('data.csv') as csvfile:
    raw_data = csv.reader(csvfile)
    for row in raw_data:
        if (row[2].startswith('k') or row[2].startswith('h')) and row[0] != 'Drosophila melanogaster':
            print(row[2])
Week_06/Week06 - 01 - Homework Solutions.ipynb
biof-309-python/BIOF309-2016-Fall
mit
3cab99145d3cc784cf843fb6f401c715
High low medium For each gene, print out a message giving the gene name and saying whether its AT content is high (greater than 0.65), low (less than 0.45) or medium (between 0.45 and 0.65).
def at_percentage(dna):
    length = len(dna)
    a_count = dna.upper().count('A')
    t_count = dna.upper().count('T')
    at_content = (a_count + t_count) / length
    return at_content


import csv

with open('data.csv') as csvfile:
    raw_data = csv.reader(csvfile)
    for row in raw_data:
        at_percent = at_percentage(row[1])
        if at_percent > 0.65:
            print(row[2], '- AT content is high')
        elif at_percent < 0.45:
            print(row[2], '- AT content is low')
        else:
            print(row[2], '- AT content is medium')
Week_06/Week06 - 01 - Homework Solutions.ipynb
biof-309-python/BIOF309-2016-Fall
mit
b49707d76e977d49efb2889f1851da7b
Define the model to be trained
import collections
import time

import tensorflow as tf
import tensorflow_federated as tff

source, _ = tff.simulation.datasets.emnist.load_data()


def map_fn(example):
  return collections.OrderedDict(
      x=tf.reshape(example['pixels'], [-1, 784]), y=example['label'])


def client_data(n):
  ds = source.create_tf_dataset_for_client(source.client_ids[n])
  return ds.repeat(10).batch(20).map(map_fn)


train_data = [client_data(n) for n in range(10)]
input_spec = train_data[0].element_spec


def model_fn():
  model = tf.keras.models.Sequential([
      tf.keras.layers.InputLayer(input_shape=(784,)),
      tf.keras.layers.Dense(units=10, kernel_initializer='zeros'),
      tf.keras.layers.Softmax(),
  ])
  return tff.learning.from_keras_model(
      model,
      input_spec=input_spec,
      loss=tf.keras.losses.SparseCategoricalCrossentropy(),
      metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])


trainer = tff.learning.build_federated_averaging_process(
    model_fn, client_optimizer_fn=lambda: tf.keras.optimizers.SGD(0.02))


def evaluate(num_rounds=10):
  state = trainer.initialize()
  for round in range(num_rounds):
    t1 = time.time()
    state, metrics = trainer.next(state, train_data)
    t2 = time.time()
    print('Round {}: loss {}, round time {}'.format(round, metrics.loss, t2 - t1))
site/zh-cn/federated/tutorials/high_performance_simulation_with_kubernetes.ipynb
tensorflow/docs-l10n
apache-2.0
6778d3ef0ee35200cb70b6c5092d7c03
Set up the remote executor By default, TFF executes all computations locally. In this step we instruct TFF to connect to the Kubernetes service we set up above. Be sure to copy the service's IP address here.
import grpc

ip_address = '0.0.0.0'  #@param {type:"string"}
port = 80  #@param {type:"integer"}

channels = [grpc.insecure_channel(f'{ip_address}:{port}') for _ in range(10)]

tff.backends.native.set_remote_execution_context(channels)
site/zh-cn/federated/tutorials/high_performance_simulation_with_kubernetes.ipynb
tensorflow/docs-l10n
apache-2.0
667e7d77ec343f313d1ff260b16791ee
Run the training
evaluate()
site/zh-cn/federated/tutorials/high_performance_simulation_with_kubernetes.ipynb
tensorflow/docs-l10n
apache-2.0
2f1f6dffdcfed9f5c28eab21181c0ce8
The toy data created above consists of 4 Gaussian blobs, with 200 points each, centered around the vertices of a rectangle. Let's plot it for convenience.
import matplotlib.pyplot as plt
%matplotlib inline

figure, axis = plt.subplots(1, 1)
axis.plot(rectangle[0], rectangle[1], 'o', color='r', markersize=5)
axis.set_xlim(-5, 15)
axis.set_ylim(-50, 150)
axis.set_title('Toy data : Rectangle')
plt.show()
doc/ipython-notebooks/clustering/KMeans.ipynb
geektoni/shogun
bsd-3-clause
a7a510fab61ce27053087b8e67f04170
With data at our disposal, it is time to apply KMeans to it using the KMeans class in Shogun. First we construct Shogun features from our data:
train_features = sg.create_features(rectangle)
doc/ipython-notebooks/clustering/KMeans.ipynb
geektoni/shogun
bsd-3-clause
6a932953fdb6bc1a79e3ac7253e751ba
Next we specify the number of clusters we want and create a distance object specifying the distance metric to be used over our data for our KMeans training:
# number of clusters
k = 2

# distance metric over feature matrix - Euclidean distance
distance = sg.create_distance('EuclideanDistance')
distance.init(train_features, train_features)
doc/ipython-notebooks/clustering/KMeans.ipynb
geektoni/shogun
bsd-3-clause
9bb3722ef10c6b992c79e6a2fcb74ede
Next, we create a KMeans object with our desired inputs/parameters and train:
# KMeans object created
kmeans = sg.create_machine("KMeans", k=k, distance=distance)

# KMeans training
kmeans.train()
doc/ipython-notebooks/clustering/KMeans.ipynb
geektoni/shogun
bsd-3-clause
d20b5035a199ecea140b19dbc16285d5
Now that training has been done, let's get the cluster centers and label for each data point
# cluster centers
centers = kmeans.get("cluster_centers")

# Labels for data points
result = kmeans.apply()
doc/ipython-notebooks/clustering/KMeans.ipynb
geektoni/shogun
bsd-3-clause
5d549dd0e9f84ab4d07d87e931b2c84c
Finally let us plot the centers and the data points (in different colours for different clusters):
def plotResult(title='KMeans Plot'):
    figure, axis = plt.subplots(1, 1)
    for i in range(totalPoints):
        if result.get("labels")[i] == 0.0:
            axis.plot(rectangle[0, i], rectangle[1, i], 'go', markersize=3)
        else:
            axis.plot(rectangle[0, i], rectangle[1, i], 'yo', markersize=3)
    axis.plot(centers[0, 0], centers[1, 0], 'go', markersize=10)
    axis.plot(centers[0, 1], centers[1, 1], 'yo', markersize=10)
    axis.set_xlim(-5, 15)
    axis.set_ylim(-50, 150)
    axis.set_title(title)
    plt.show()

plotResult('KMeans Results')
doc/ipython-notebooks/clustering/KMeans.ipynb
geektoni/shogun
bsd-3-clause
3dd29b8811384452152485e73b9100f5
<b>Note:</b> You might not always get a perfect result. That is an inherent flaw of the KMeans algorithm. In subsequent sections, we will discuss techniques which allow us to counter this.<br> Now that we have already worked out a simple KMeans implementation, it's time to understand certain specifics of the KMeans implementation and the options provided by Shogun to its users. Initialization of cluster centers The KMeans algorithm requires that the cluster centers are initialized with some values. Shogun offers 3 ways to initialize the clusters. <ul><li>Random initialization (default)</li><li>Initialization by hand</li><li>Initialization using <a href="http://en.wikipedia.org/wiki/K-means%2B%2B">KMeans++ algorithm</a></li></ul>Unless the user supplies initial centers or tells Shogun to use KMeans++, random initialization is the default method used for cluster center initialization. This was precisely the case in the example discussed above. Initialization by hand There are 2 ways to initialize centers by hand. One way is to pass the centers during KMeans object creation, as follows:
initial_centers = np.array([[0., 10.], [50., 50.]])

# initial centers passed
kmeans = sg.create_machine("KMeans", k=k, distance=distance, initial_centers=initial_centers)
doc/ipython-notebooks/clustering/KMeans.ipynb
geektoni/shogun
bsd-3-clause
0678b915b117c291c165a0910b2b6e4d
Now, let's first get results by repeating the rest of the steps:
# KMeans training
kmeans.train(train_features)

# cluster centers
centers = kmeans.get("cluster_centers")

# Labels for data points
result = kmeans.apply()

# plot the results
plotResult('Hand initialized KMeans Results 1')
doc/ipython-notebooks/clustering/KMeans.ipynb
geektoni/shogun
bsd-3-clause
062abf18ea47a41804a2ae659d69cbf9
The other way to initialize centers by hand is as follows:
new_initial_centers = np.array([[5., 5.], [0., 100.]])

# set new initial centers
kmeans.put("initial_centers", new_initial_centers)
doc/ipython-notebooks/clustering/KMeans.ipynb
geektoni/shogun
bsd-3-clause
da8def0610ca09ca51cccd553d915f5e
Let's complete the rest of the code to get results.
# KMeans training
kmeans.train(train_features)

# cluster centers
centers = kmeans.get("cluster_centers")

# Labels for data points
result = kmeans.apply()

# plot the results
plotResult('Hand initialized KMeans Results 2')
doc/ipython-notebooks/clustering/KMeans.ipynb
geektoni/shogun
bsd-3-clause
d42dfe2744ca822b0cfc687384abb33f
Note the difference that initial cluster centers can have on the final result. Initializing using KMeans++ algorithm In Shogun, a user can also use the <a href="http://en.wikipedia.org/wiki/K-means%2B%2B">KMeans++ algorithm</a> for center initialization. Using KMeans++ for center initialization is beneficial because it reduces the total iterations used by KMeans and the final centers mostly correspond to the global minimum, which is often not the case for KMeans with random initialization. One of the ways to use KMeans++ is to set the flag to <i>true</i> during KMeans object creation, as follows:
# set flag for using KMeans++
kmeans = sg.create_machine("KMeans", k=k, distance=distance, kmeanspp=True)
doc/ipython-notebooks/clustering/KMeans.ipynb
geektoni/shogun
bsd-3-clause
46506d3bead883da1b5736e9bbca8287
Completing the rest of the steps to get the result:
# KMeans training
kmeans.train(train_features)

# cluster centers
centers = kmeans.get("cluster_centers")

# Labels for data points
result = kmeans.apply()

# plot the results
plotResult('KMeans with KMeans++ Results')
doc/ipython-notebooks/clustering/KMeans.ipynb
geektoni/shogun
bsd-3-clause
2890fdc3d6f85f6f3563813d1f629b13
Training Methods Shogun offers 2 training methods for KMeans clustering:<ul><li><a href='http://en.wikipedia.org/wiki/K-means_clustering#Standard_algorithm'>Classical Lloyd's training</a> (default)</li><li><a href='http://www.eecs.tufts.edu/~dsculley/papers/fastkmeans.pdf'>mini-batch KMeans training</a></li></ul>Lloyd's training method is used by Shogun by default unless the user switches to the mini-batch training method. Mini-Batch KMeans Mini-batch KMeans is very useful in the case of extremely large datasets and/or very high dimensional data, which is often the case in text mining. One can switch to mini-batch KMeans training while creating the KMeans object as follows:
# set training method to mini-batch
kmeans = sg.create_machine("KMeansMiniBatch", k=k, distance=distance)
doc/ipython-notebooks/clustering/KMeans.ipynb
geektoni/shogun
bsd-3-clause
28cc80e54abf46e61cdbbb85eeb8ba7f
Completing the code to get results:
# KMeans training
kmeans.train(train_features)

# cluster centers
centers = kmeans.get("cluster_centers")

# Labels for data points
result = kmeans.apply()

# plot the results
plotResult('Mini-batch KMeans Results')
doc/ipython-notebooks/clustering/KMeans.ipynb
geektoni/shogun
bsd-3-clause
c6465abc0fb87da1446f30a98045f325
Applying KMeans on Real Data In this section we see how useful KMeans can be in classifying the different varieties of the Iris plant. For this purpose, we make use of Fisher's Iris dataset borrowed from the <a href='http://archive.ics.uci.edu/ml/datasets/Iris'>UCI Machine Learning Repository</a>. There are 3 varieties of Iris plants <ul><li>Iris Setosa</li><li>Iris Versicolour</li><li>Iris Virginica</li></ul> The Iris dataset lists 4 features that can be used to segregate these varieties, namely <ul><li>sepal length</li><li>sepal width</li><li>petal length</li><li>petal width</li></ul> It is additionally acknowledged that petal length and petal width are the 2 most important features (i.e. the features with very high class correlations) [refer to the <a href='http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.names'>summary statistics</a>]. Since the entire feature vector is impossible to plot, we only plot these two most important features in order to understand the dataset (at least partially). Note that we could have extracted the 2 most important features by applying PCA (or any one of the many dimensionality reduction methods available in Shogun) as well.
with open(os.path.join(SHOGUN_DATA_DIR, 'uci/iris/iris.data')) as f:
    feats = []
    # read data from file
    for line in f:
        words = line.rstrip().split(',')
        feats.append([float(i) for i in words[0:4]])

# create observation matrix
obsmatrix = np.array(feats).T

# plot the data
figure, axis = plt.subplots(1, 1)

# First 50 data belong to Iris Setosa, plotted in green
axis.plot(obsmatrix[2, 0:50], obsmatrix[3, 0:50], 'o', color='green', markersize=5)

# Next 50 data belong to Iris Versicolour, plotted in red
axis.plot(obsmatrix[2, 50:100], obsmatrix[3, 50:100], 'o', color='red', markersize=5)

# Last 50 data belong to Iris Virginica, plotted in blue
axis.plot(obsmatrix[2, 100:150], obsmatrix[3, 100:150], 'o', color='blue', markersize=5)

axis.set_xlim(-1, 8)
axis.set_ylim(-1, 3)
axis.set_title('3 varieties of Iris plants')
plt.show()
doc/ipython-notebooks/clustering/KMeans.ipynb
geektoni/shogun
bsd-3-clause
31bd4ecb7fbce36d6e494fbf0fd49d64
In the above plot we see that the data points labelled Iris Setosa form a nice separate cluster of their own. But in the case of the other 2 varieties, while the data points with the same label do form clusters of their own, there is some mixing between the clusters at the boundary. Now let us apply the KMeans algorithm and see how well we can extract these clusters.
def apply_kmeans_iris(data):
    # wrap to Shogun features
    train_features = sg.create_features(data)

    # number of cluster centers = 3
    k = 3

    # distance function features - euclidean
    distance = sg.create_distance('EuclideanDistance')
    distance.init(train_features, train_features)

    # initialize KMeans object, use kmeans++ to initialize centers
    # [play around: change it to False and compare results]
    kmeans = sg.create_machine("KMeans", k=k, distance=distance, kmeanspp=True)

    # training method is Lloyd by default
    # [play around: change it to mini-batch by uncommenting the following line]
    #kmeans = sg.create_machine("KMeansMiniBatch", k=k, distance=distance)

    # training kmeans
    kmeans.train(train_features)

    # labels for data points
    result = kmeans.apply()
    return result

result = apply_kmeans_iris(obsmatrix)
doc/ipython-notebooks/clustering/KMeans.ipynb
geektoni/shogun
bsd-3-clause
2f45bf0ce97f89f7f017fa40602dde91
Now let us create a 2-D plot of the clusters formed making use of the two most important features (petal length and petal width) and compare it with the earlier plot depicting the actual labels of data points.
# plot the clusters over the original points in 2 dimensions
figure, axis = plt.subplots(1, 1)

for i in range(150):
    if result.get("labels")[i] == 0.0:
        axis.plot(obsmatrix[2, i], obsmatrix[3, i], 'ro', markersize=5)
    elif result.get("labels")[i] == 1.0:
        axis.plot(obsmatrix[2, i], obsmatrix[3, i], 'go', markersize=5)
    else:
        axis.plot(obsmatrix[2, i], obsmatrix[3, i], 'bo', markersize=5)

axis.set_xlim(-1, 8)
axis.set_ylim(-1, 3)
axis.set_title('Iris plants clustered based on attributes')
plt.show()
doc/ipython-notebooks/clustering/KMeans.ipynb
geektoni/shogun
bsd-3-clause
d2c42891123e57fc0a50a76634698424
From the above plot, it can be inferred that the accuracy of KMeans algorithm is very high for Iris dataset. Don't believe me? Alright, then let us make use of one of Shogun's clustering evaluation techniques to formally validate the claim. But before that, we have to label each sample in the dataset with a label corresponding to the class to which it belongs.
# first 50 are Iris Setosa labelled 0, next 50 are Iris Versicolour labelled 1 and so on
labels = np.concatenate((np.zeros(50), np.ones(50), 2. * np.ones(50)), 0)

# bind labels assigned to Shogun multiclass labels
ground_truth = sg.create_labels(np.array(labels, dtype='float64'))
doc/ipython-notebooks/clustering/KMeans.ipynb
geektoni/shogun
bsd-3-clause
1f24b5312d2c7c9a00da18fe6ea9cd86
Now we can compute clustering accuracy making use of the ClusteringAccuracy class in Shogun
def analyzeResult(result):
    # shogun object for clustering accuracy
    AccuracyEval = sg.create_evaluation("ClusteringAccuracy")

    # evaluates clustering accuracy
    accuracy = AccuracyEval.evaluate(result, ground_truth)

    # find out which sample points differ from actual labels (or ground truth)
    compare = result.get("labels") - labels
    diff = np.nonzero(compare)
    return (diff, accuracy)

(diff, accuracy_4d) = analyzeResult(result)
print('Accuracy : ' + str(accuracy_4d))

# plot the difference between ground truth and predicted clusters
figure, axis = plt.subplots(1, 1)
axis.plot(obsmatrix[2, :], obsmatrix[3, :], 'x', color='black', markersize=5)
axis.plot(obsmatrix[2, diff], obsmatrix[3, diff], 'x', color='r', markersize=7)
axis.set_xlim(-1, 8)
axis.set_ylim(-1, 3)
axis.set_title('Difference')
plt.show()
doc/ipython-notebooks/clustering/KMeans.ipynb
geektoni/shogun
bsd-3-clause
fb87254e45c3e61704cf0ae9200035c4
In the above plot, wrongly clustered data points are marked in red. We see that the Iris Setosa plants are perfectly clustered without error. The Iris Versicolour and Iris Virginica plants are also clustered with high accuracy, but there are some plant samples of either class that have been assigned to the wrong cluster. This happens near the boundary of the 2 classes in the plot and was to be expected. Having mastered KMeans, it's time to move on to the next interesting topic. PCA as a preprocessor to KMeans KMeans is highly affected by the <i>curse of dimensionality</i>. So, dimension reduction becomes an important preprocessing step. Shogun offers a variety of dimension reduction techniques to choose from. Since our data is not very high dimensional, PCA is a good choice for dimension reduction. We have already seen the accuracy of KMeans when all four dimensions are used. In the following exercise we shall see how the accuracy varies as one chooses lower dimensions to represent the data. 1-Dimensional representation Let us first apply PCA to reduce the training features to 1 dimension
def apply_pca_to_data(target_dims):
    train_features = sg.create_features(obsmatrix)

    submean = sg.create_transformer("PruneVarSubMean", divide_by_std=False)
    submean.fit(train_features)
    submean.transform(train_features)

    preprocessor = sg.create_transformer("PCA", target_dim=target_dims)
    preprocessor.fit(train_features)

    pca_transform = preprocessor.get("transformation_matrix")
    new_features = np.dot(pca_transform.T, train_features.get("feature_matrix"))
    return new_features

oneD_matrix = apply_pca_to_data(1)
doc/ipython-notebooks/clustering/KMeans.ipynb
geektoni/shogun
bsd-3-clause
4ae7e517fe2dc6bab926824a7c6c2041
Next, let us get an idea of the data in 1-D by plotting it.
figure, axis = plt.subplots(1, 1)

# First 50 data belong to Iris Setosa, plotted in green
axis.plot(oneD_matrix[0, 0:50], np.zeros(50), 'go', markersize=5)

# Next 50 data belong to Iris Versicolour, plotted in red
axis.plot(oneD_matrix[0, 50:100], np.zeros(50), 'ro', markersize=5)

# Last 50 data belong to Iris Virginica, plotted in blue
axis.plot(oneD_matrix[0, 100:150], np.zeros(50), 'bo', markersize=5)

axis.set_xlim(-5, 5)
axis.set_ylim(-1, 1)
axis.set_title('3 varieties of Iris plants')
plt.show()
doc/ipython-notebooks/clustering/KMeans.ipynb
geektoni/shogun
bsd-3-clause
6def01bd7cab13dcf977f8d82de472f3
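The clustering step for the 1-D projection is not included in this excerpt; presumably it simply reuses the apply_kmeans_iris helper defined earlier, along the lines of this sketch:

# assumed intermediate step (not shown above): cluster the 1-D PCA projection
result = apply_kmeans_iris(oneD_matrix)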
Now that we have the results, the inevitable step is to check how good these results are.
(diff, accuracy_1d) = analyzeResult(result)
print('Accuracy : ' + str(accuracy_1d))

# plot the difference between ground truth and predicted clusters
figure, axis = plt.subplots(1, 1)
axis.plot(oneD_matrix[0, :], np.zeros(150), 'x', color='black', markersize=5)
axis.plot(oneD_matrix[0, diff], np.zeros(len(diff)), 'x', color='r', markersize=7)
axis.set_xlim(-5, 5)
axis.set_ylim(-1, 1)
axis.set_title('Difference')
plt.show()
doc/ipython-notebooks/clustering/KMeans.ipynb
geektoni/shogun
bsd-3-clause
136463dcdebc588ee0f41094cac4216e
2-Dimensional Representation We follow the same steps as above and get the clustering accuracy. STEP 1 : Apply PCA and plot the data (plotting is optional)
twoD_matrix = apply_pca_to_data(2)

figure, axis = plt.subplots(1, 1)

# First 50 data belong to Iris Setosa, plotted in green
axis.plot(twoD_matrix[0, 0:50], twoD_matrix[1, 0:50], 'go', markersize=5)

# Next 50 data belong to Iris Versicolour, plotted in red
axis.plot(twoD_matrix[0, 50:100], twoD_matrix[1, 50:100], 'ro', markersize=5)

# Last 50 data belong to Iris Virginica, plotted in blue
axis.plot(twoD_matrix[0, 100:150], twoD_matrix[1, 100:150], 'bo', markersize=5)

axis.set_title('3 varieties of Iris plants')
plt.show()
doc/ipython-notebooks/clustering/KMeans.ipynb
geektoni/shogun
bsd-3-clause
d72ded1d558b4933bf3f95a2f708ad7d
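STEP 2, the clustering itself, is again not included in this excerpt; a minimal sketch of what the accuracy computation below assumes:

# assumed STEP 2 (not shown above): cluster the 2-D PCA projection
result = apply_kmeans_iris(twoD_matrix)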
STEP 3: Get the accuracy of the results
(diff, accuracy_2d) = analyzeResult(result)
print('Accuracy : ' + str(accuracy_2d))

# plot the difference between ground truth and predicted clusters
figure, axis = plt.subplots(1, 1)
axis.plot(twoD_matrix[0, :], twoD_matrix[1, :], 'x', color='black', markersize=5)
axis.plot(twoD_matrix[0, diff], twoD_matrix[1, diff], 'x', color='r', markersize=7)
axis.set_title('Difference')
plt.show()
doc/ipython-notebooks/clustering/KMeans.ipynb
geektoni/shogun
bsd-3-clause
310bc05774efbf7f9a176447f1ec74f8
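The 3-Dimensional case follows the same pattern, but its PCA and clustering steps are not included in this excerpt either; a minimal sketch of what the following accuracy computation assumes:

# assumed steps (not shown above): project to 3 dimensions and cluster
threeD_matrix = apply_pca_to_data(3)
result = apply_kmeans_iris(threeD_matrix)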
STEP 3: Get the accuracy of the results. In this step, the 'difference' plot positions the data points based on petal length and petal width from the original data. This will enable us to visually compare these results with those of KMeans applied to the 4-Dimensional data (i.e. our first result on the Iris dataset)
(diff, accuracy_3d) = analyzeResult(result)
print('Accuracy : ' + str(accuracy_3d))

# plot the difference between ground truth and predicted clusters
figure, axis = plt.subplots(1, 1)
axis.plot(obsmatrix[2, :], obsmatrix[3, :], 'x', color='black', markersize=5)
axis.plot(obsmatrix[2, diff], obsmatrix[3, diff], 'x', color='r', markersize=7)
axis.set_title('Difference')
axis.set_xlim(-1, 8)
axis.set_ylim(-1, 3)
plt.show()
doc/ipython-notebooks/clustering/KMeans.ipynb
geektoni/shogun
bsd-3-clause
836c67caa9915dec30228b7495d0e17c
Finally, let us plot clustering accuracy vs. number of dimensions to consolidate our results.
from scipy.interpolate import interp1d

x = np.array([1, 2, 3, 4])
y = np.array([accuracy_1d, accuracy_2d, accuracy_3d, accuracy_4d])
f = interp1d(x, y)

xnew = np.linspace(1, 4, 10)

plt.plot(x, y, 'o', xnew, f(xnew), '-')
plt.xlim([0, 5])
plt.xlabel('no. of dims')
plt.ylabel('Clustering Accuracy')
plt.title('PCA Results')
plt.show()
doc/ipython-notebooks/clustering/KMeans.ipynb
geektoni/shogun
bsd-3-clause
ef9e617d0e9294b50929711afcf75368
iii. Explain what private and public do The private, public, and also protected keywords restrict access to class members. A private member variable or function cannot be accessed, or even viewed, from outside the class. Only the class and friend functions can access private members. A public member is accessible from anywhere outside the class but within a program. You can set and get the value of public variables without any member function. A protected member variable or function is very similar to a private member, but it provides one additional benefit: it can be accessed in child classes, which are called derived classes. //From: https://www.tutorialspoint.com/cplusplus/cpp_class_access_modifiers.htm
//*Quote from comments:*
// This structure type is private to the class, and used as a form of
// linked list in order to contain the actual (static) data stored by the Stack class
Stack.ipynb
chapman-cs510-2016f/cw-12-redyellow
mit
9c1782821516941c28a0820df2ae8afd
iv. Explain what size_t is used for It is a type that can represent the size of any object in bytes: size_t is the type returned by the sizeof operator and is widely used in the standard library to represent sizes and counts. //From: http://www.cplusplus.com/reference/cstring/size_t/
//*Quote from comments:*
// Size method
// Specifying const tells the compiler that the method will not change the
// internal state of the instance of the class
Stack.ipynb
chapman-cs510-2016f/cw-12-redyellow
mit
fa7b74e174b1bfe618a87f0bee4955d6
v. Explain why this code avoids the use of C pointers First, raw pointers must under no circumstances own memory. That means you must delete the memory yourself after using it. Second, most uses of pointers in C++ are unnecessary. C++ has very strong support for value semantics; you can use smart pointers, container classes, design patterns like RAII, etc., instead of raw pointers. In computer science, a smart pointer is an abstract data type that simulates a pointer while providing additional features, such as automatic garbage collection or bounds checking. These additional features are intended to reduce bugs caused by the misuse of pointers while retaining efficiency. Smart pointers typically keep track of the objects they point to for the purpose of memory management. The misuse of pointers is a major source of bugs: the constant allocation, deallocation and referencing that must be performed by a program written using pointers introduces the risk that memory leaks will occur. Smart pointers try to prevent memory leaks by making the resource deallocation automatic: when the pointer (or the last in a series of pointers) to an object is destroyed, for example because it goes out of scope, the pointed-to object is destroyed too. //From: http://softwareengineering.stackexchange.com/questions/56935/why-are-pointers-not-recommended-when-coding-with-c vi. Explain what new and delete do in C++, and how they relate to what you have done in C "New" creates a pointer to an allocated memory block. "Delete" deallocates the memory that was allocated by "new". It works differently from the way in C: Allocate memory: <br> C++: Node *n = new Node(); <br> C : Node *n = (Node *)calloc(1, sizeof(Node)); Deallocate memory: <br> C++: delete n; <br> C : free(n); vii. Explain what a memory leak is, and what you should do to avoid it A memory leak occurs when dynamically allocated memory is never released, so a long-running program gradually exhausts system memory. When a program needs to store some temporary information during execution, it can dynamically request a chunk of memory from the system. However, the system has a fixed amount of total memory available. If one application uses up all of the system's free memory, then other applications will not be able to obtain the memory that they require. //From: https://msdn.microsoft.com/en-us/library/ms859408.aspx There are three ways to avoid memory leaks: <br> 1. free (C) or delete (C++) the memory you allocated after you finish using it; <br> 2. use smart pointers (C++) or another form of automatic memory management to deallocate memory automatically after use; <br> 3. use fewer raw pointers where possible. viii. Explain what a unique_ptr is and how it relates to both new and C pointers std::unique_ptr is a smart pointer that owns and manages another object through a pointer and disposes of that object when the unique_ptr goes out of scope. The object is disposed of using the associated deleter when either of the following happens: <br> the managing unique_ptr object is destroyed <br> the managing unique_ptr object is assigned another pointer via operator= or reset(). The code uses "new Node()" to allocate a new pointer, new_node_ptr, whose type is std::unique_ptr. It is a pointer, but it deallocates the memory automatically when it is no longer needed. //From: http://en.cppreference.com/w/cpp/memory/unique_ptr
//*Quote from comments:*
// However, by using the "unique_ptr" type above, we carefully avoid any
// explicit memory allocation by using the allocation pre-defined inside the
// unique_ptr itself. By using memory-safe structures in this way, we are using
// the "Rule of Zero" and simplifying our life by defining ZERO of them:
// https://rmf.io/cxx11/rule-of-zero/
// http://www.cplusplus.com/reference/memory/unique_ptr/
Stack.ipynb
chapman-cs510-2016f/cw-12-redyellow
mit
15b6414d1a4ea410290dc391e91fe17b
ix. Explain what a list initializer does A constructor is a special non-static member function of a class that is used to initialize objects of its class type. In the definition of a constructor of a class, the member initializer list specifies the initializers for direct and virtual base subobjects and non-static data members. The order of member initializers in the list is irrelevant: the actual order of initialization is as follows: 1) If the constructor is for the most-derived class, virtual base classes are initialized in the order in which they appear in depth-first left-to-right traversal of the base class declarations (left-to-right refers to the appearance in base-specifier lists). <br> 2) Then, direct base classes are initialized in left-to-right order as they appear in this class's base-specifier list. <br> 3) Then, non-static data members are initialized in order of declaration in the class definition. <br> 4) Finally, the body of the constructor is executed. //From: http://en.cppreference.com/w/cpp/language/initializer_list
//*Quote from comments*
// Implementation of default constructor
Stack::Stack()
  : depth(0)       // internal depth is 0
  , head(nullptr)  // internal linked list is null to start
{};
// The construction ": var1(val1), var2(val2) {}" is called a
// "list initializer" for a constructor, and is the preferred
// way of setting default field values for a class instance
// Here 0 is the default value for Stack::depth
// and nullptr is the default value for Stack::head
Stack.ipynb
chapman-cs510-2016f/cw-12-redyellow
mit
bd51bd509ec9d5994ba4a0c435332b24
x. Explain what the "Rule of Zero" is, and how it relates to the "Rule of Three" Rule of Zero: classes that have custom destructors, copy/move constructors or copy/move assignment operators should deal exclusively with ownership (which follows from the Single Responsibility Principle). Other classes should not have custom destructors, copy/move constructors or copy/move assignment operators. Rule of Three: if a class requires a user-defined destructor, a user-defined copy constructor, or a user-defined copy assignment operator, it almost certainly requires all three. The Rule of Zero avoids defining those three functions altogether, whereas the Rule of Three requires all of them. //From: http://en.cppreference.com/w/cpp/language/rule_of_three
//*Quote from comments:*
// Normally we would have to implement the following things in C++ here:
// 1) Class Destructor  : to deallocate memory when a Stack is deleted
//      ~Stack();
//
// 2) Copy Constructor  : to define what Stack b(a) does when a is a Stack
//                        This should create a copy b of the Stack a, but
//                        should be defined appropriately to do that
//      Stack(const Stack&);
//
// 3) Copy Assignment   : to define what b = a does when a is a Stack
//                        This should create a shallow copy of the outer
//                        structure of a, but leave the inner structure as
//                        pointers to the memory contained in a, and should
//                        be defined appropriately to do that
//      Stack& operator=(const Stack&);
//
// The need for defining ALL THREE of these things when managing memory for a
// class explicitly is known as the "Rule of Three", and is standard
// http://stackoverflow.com/questions/4172722/what-is-the-rule-of-three
//
// However, by using the "unique_ptr" type above, we carefully avoid any
// explicit memory allocation by using the allocation pre-defined inside the
// unique_ptr itself. By using memory-safe structures in this way, we are using
// the "Rule of Zero" and simplifying our life by defining ZERO of them:
// https://rmf.io/cxx11/rule-of-zero/
// http://www.cplusplus.com/reference/memory/unique_ptr/
Stack.ipynb
chapman-cs510-2016f/cw-12-redyellow
mit
1d0629da09cf018423b0930672ad325d
I accomplished the above by running this command at the command prompt: THEANO_FLAGS='mode=FAST_RUN,device=gpu,floatX=float32' jupyter notebook
#import theano
from theano import function, config, sandbox, shared
import theano.tensor as T

import numpy as np
import scipy
import time
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
ff86db5c7a26cf0c2a76fbe0a2ad8ee7
More theano setup in jupyter notebook boilerplate
print( theano.config.device )
print( theano.config.lib.cnmem)  # cf. http://deeplearning.net/software/theano/library/config.html
print( theano.config.print_active_device)  # Print active device at when the GPU device is initialized.

import os, sys
os.getcwd()
os.listdir( os.getcwd() )

%run gpu_test.py

THEANO_FLAGS='mode=FAST_RUN,device=gpu,floatX=float32,lib.cnmem=0.85'  # note lib.cnmem option for CnMem
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
ada9f50ee22c1c155d349ca0788dd959
sample data boilerplate
# sklearn.datasets provides the dataset loader used below
import sklearn.datasets

# Load the diabetes dataset
diabetes = sklearn.datasets.load_diabetes()

diabetes_X = diabetes.data
diabetes_Y = diabetes.target

#diabetes_X1 = diabetes_X[:,np.newaxis,2]
diabetes_X1 = diabetes_X[:,np.newaxis, 2].astype(theano.config.floatX)

#diabetes_Y = diabetes_Y.reshape( diabetes_Y.shape[0], 1)
diabetes_Y = diabetes_Y.astype(theano.config.floatX)
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
aabeb56363a5d8fb30f9df1e70f24ed8
Linear regression cf. Linear Regression In Theano 1_linear_regression.py from github Newmu/Theano-Tutorials Train on $m$ number of input data points
m_lin = diabetes_X1.shape[0]
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
aefb43d1c3b4d35851ac4183df3a4f5b
input, output variables $x$, $y$ for Theano
#x1 = T.vector('x1')      # X1, input data, with only 1 feature, i.e. X \in \mathbb{R}^N, d=1
#ylin = T.vector('ylin')  # target variable for linear regression, so that Y \in \mathbb{R}

x1 = T.scalar('x1')       # X1, input data, with only 1 feature, i.e. X \in \mathbb{R}^N, d=1
ylin = T.scalar('ylin')   # target variable for linear regression, so that Y \in \mathbb{R}
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
d6343940c864fb5b3685e6a97c1cdcd4
Parameters (for a linear slope) $$ (\theta^0, \theta^1) \in \mathbb{R}^2 $$
thet0_init_val = np.random.randn()
thet1_init_val = np.random.randn()

thet0 = theano.shared( value=thet0_init_val, name='thet0', borrow=True)  # \theta^0
thet1 = theano.shared( thet1_init_val, name='thet1', borrow=True)        # \theta^1
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
f4bc6785ca3a3e72feca2bb67318bfcf
hypothesis function $h_{\theta}$ $$ h_{\theta}(x) = \theta_1 x + \theta_0 $$
#h_thet = T.dot( thet1, x1) + thet0
# whereas, Newmu uses
h_thet = thet1 * x1 + thet0
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
b4aaa73553d987ef8129867972380b68
Cost function $J(\theta)$
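For reference, the cost being minimized is the usual least-squares objective from the course (a restatement in the notation of the preceding cells, not a formula quoted from the original text):

$$ J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_{\theta}(x^{(i)}) - y^{(i)} \right)^2 $$

The Theano expression below builds the per-example term $\tfrac{1}{2}\left(h_{\theta}(x) - y\right)^2$; the averaging over the $m$ examples effectively happens through the training loop.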
# roshansanthosh uses
#Jthet = T.sum( T.pow(h_thet-ylin,2))/(2*m_lin)
# whereas, Newmu uses
# Jthet = T.mean( T.sqr( thet_1*x1 + thet_0 - ylin ))

Jthet = T.mean( T.pow( h_thet-ylin,2))/2

#Jthet = sandbox.cuda.basic_ops.gpu_from_host( T.mean(
#    sandbox.cuda.basic_ops.gpu_from_host( T.pow( h_thet-ylin,2))))/2
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
1f73102fcf73c4608b3ac56019f63556
$$ \text{grad}_{\theta} J(\theta) = ( \text{grad}_{\theta^0} J , \text{grad}_{\theta^1} J ) $$
grad_thet0 = T.grad(Jthet, thet0)
grad_thet1 = T.grad(Jthet, thet1)

# so-called "learning rate"
gamma = 0.01
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
05050bcf882753cee155881e5d973da3
Note that "updates (iterable over pairs (shared_variable, new_expression) List, tuple or dict.) – expressions for new SharedVariable values" cf. Theano doc
train_lin = theano.function(inputs=[x1,ylin],
                            outputs=Jthet,
                            updates=[[thet1,thet1-gamma*grad_thet1],
                                     [thet0,thet0-gamma*grad_thet0]])

test_lin = theano.function([x1],h_thet)

#X1_lin_in = shared( diabetes_X1 ,'float32')
#Y_lin_out = shared( diabetes_Y, 'float32')

training_steps = 1000  # 10000

sh_diabetes_X1 = shared( diabetes_X1 , borrow=True)
sh_diabetes_Y = shared( diabetes_Y, borrow=True)

"""
for i in range(training_steps):
    for x,y in zip( diabetes_X1, diabetes_Y):
        Jthet_val = train_lin( x, y )
"""

for i in range(training_steps):
#    for x,y in zip( sh_diabetes_X1, sh_diabetes_Y) :
#        Jthet_val = train_lin( x,y)
    Jthet_val = train_lin( sh_diabetes_X1, sh_diabetes_Y)

print(Jthet_val)

print( thet0.get_value() ); print( thet1.get_value() )

test_lin_out = np.array( [ test_lin( x ) for x in diabetes_X1 ] )

plt.plot(diabetes_X1,diabetes_Y,'ro')
plt.plot(diabetes_X1,test_lin_out)

if any([x.op.__class__.__name__ in ['GpuGemm','GpuGemv'] for x in train_lin.maker.fgraph.toposort()]):
    print("Used the gpu")
else:
    print(train_lin.maker.fgraph.toposort())

if np.any([isinstance(x.op,T.Elemwise) for x in train_lin.maker.fgraph.toposort()]):
    print("Used the cpu")
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
70828401d3a6912ef29a74ebbdae4a1c
Linear Algebra and theano cf. Week 1, Linear Algebra Review, Coursera, Machine Learning with Ng I'll take this opportunity to provide a dictionary between the syntax of linear algebra math and numpy. Essentially, what I did was take Coursera's Week 1, Linear Algebra Review and then translated the math into theano, and in particular, running theano on the GPU. Other reference that I used was https://simplyml.com/linear-algebra-shootout-numpy-vs-theano-vs-tensorflow-2/ Linear Algebra Shootout: NumPy vs. Theano vs. TensorFlow by Charanpal Dhanjal - 14/07/16 Matrix addition cf. Coursera, Intro. to Machine Learning, Linear Algebra Review, Addition and Scalar Multiplication
A = T.matrix('A')
B = T.matrix('B')

#matadd = function([A,B], A+B)
#matadd = function([A,B],sandbox.cuda.basic_ops.gpu_from_host(A+B) )

# Note: we are just defining the expressions, nothing is evaluated here!
C = sandbox.cuda.basic_ops.gpu_from_host(A+B)
matadd = function([A,B], C)

#A = T.dmatrix('A')
#B = T.dmatrix('B')
A = T.matrix('A')
B = T.matrix('B')
C_out = A + B
matadd_CPU = function([A,B], C_out)

A_eg = shared( np.array([[8,6,9],[10,1,10]]), 'float32')
B_eg = shared( np.array([[3,10,2],[6,1,-1]]), 'float32')

A_eg_CPU = np.array([[8,6,9],[10,1,10]])
B_eg_CPU = np.array([[3,10,2],[6,1,-1]])

print(A_eg_CPU)
print( type( A_eg_CPU ))
print( A_eg_CPU.shape)
print( B_eg_CPU.shape)

print( matadd.maker.fgraph.toposort() )
print( matadd_CPU.maker.fgraph.toposort() )

matadd( A_eg, B_eg)
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
6de0a3b19d8ee6bfa4c837be1a47d33d
The way to do it, to "force" on the GPU, is like this (cf. Speeding up your Neural Network with Theano and the GPU - Wild ML):
np.random.randn( *A_eg_CPU.shape )

C_out = theano.shared( np.random.randn( *A_eg_CPU.shape).astype('float32') )
C_out.type()

#A_in = shared( A_eg_CPU, "float32")
#A_in = shared( A_eg_CPU, "float32")
A_in = shared( A_eg_CPU.astype("float32"), "float32")
B_in = shared( B_eg_CPU.astype("float32"), "float32")

#C_out_GPU = A_in + B_in
C_out_GPU = sandbox.cuda.basic_ops.gpu_from_host(A_in+B_in)

matadd_GPU = theano.function( [], C_out_GPU)

C_out_GPU_result = matadd_GPU()
C_out_GPU_result
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
503278236b2d7f79c00a0f4799964998
Notice how DIFFERENT this setup or syntax is: we have to set up tensor or matrix shared variables A_in, B_in, which are then used to define the theano function, theano.function. "By using shared variables we ensure that they are present in the GPU memory". cf. Linear Algebra Shootout: NumPy vs. Theano vs. TensorFlow
print( matadd_GPU.maker.fgraph.toposort() )

#if np.any([isinstance(C_out_GPU.op, tensor.Elemwise ) and
if np.any([isinstance( C_out_GPU.op, T.Elemwise ) and
           ('Gpu' not in type( C_out_GPU.op).__name__) for x in matadd_GPU.maker.fgraph.toposort()]) :
    print('Used the cpu')
else:
    print('Used the gpu')

matadd_CPU( A_eg_CPU.astype("float32"), B_eg_CPU.astype("float32") )

type(A_eg)

print( type( numpy.asarray(rng.rand(2000)) ) )
numpy.asarray(rng.rand(2000)).shape
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
4f42c93ce67a36c10d5c93e23ce726ce
Bottom Line: there are 2 ways of doing linear algebra on the GPU. The first is symbolic computation with the usual arguments,
$$ A + B = C \in \text{Mat}_{\mathbb{R}}(M,N) \qquad \forall \, A, B \in \text{Mat}_{\mathbb{R}}(M,N) $$
A = T.matrix('A')
B = T.matrix('B')

C = sandbox.cuda.basic_ops.gpu_from_host( A + B )
# vs.
# C = A + B  # this will result in an output array on the host, as opposed to CudaNdarray on device

matadd = function([A,B], C)

print( matadd.maker.fgraph.toposort() )

matadd( A_eg_CPU.astype("float32"), B_eg_CPU.astype("float32") )
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
6c287c888cdfb42c44f7c3e299a8ec95
with shared variables
A_in = shared( A_eg_CPU.astype("float32"), "float32")  # initialize with the input values, A_eg_CPU, anyway
B_in = shared( B_eg_CPU.astype("float32"), "float32")  # initialize with the input values B_eg_CPU, anyway

# C_out = A_in + B_in  # this version will output to the host as a numpy.ndarray
# indeed, reading the graph,
"""
[GpuElemwise{add,no_inplace}(float32, float32), HostFromGpu(GpuElemwise{add,no_inplace}.0)]
"""

# this version immediately below, in 1 line, will result in a CudaNdarray on device
C_out = sandbox.cuda.basic_ops.gpu_from_host(A_in+B_in)

matadd_GPU = theano.function( [], C_out)

print( matadd_GPU.maker.fgraph.toposort() )

C_out_result = matadd_GPU()
C_out_result
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
fa37bea86729d7fa7db11e4c347539da
Scalar Multiplication (on the GPU) cf. Scalar Multiplication of Linear Algebra Review, coursera, Machine Learning Intro by Ng
A_2 = np.array( [[4,5],[1,7] ])

a = T.scalar('a')

F = sandbox.cuda.basic_ops.gpu_from_host( a*A )
scalarmul = theano.function([a,A],F)

print( scalarmul.maker.fgraph.toposort() )

scalarmul( np.float32( 2.), A_2.astype("float32"))
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
5b19d127814628f5c6132d78567bd034
Composition: confirming that you can compose scalar multiplication with matrix (or ring) addition. Being able to do composition is very important in math.
scalarmul( np.float32(2.), matadd( A_eg_CPU.astype("float32"), B_eg_CPU.astype("float32") ) )

u = T.vector('u')
v = T.vector('v')

w = sandbox.cuda.basic_ops.gpu_from_host( u + v)
vecadd = theano.function( [u,v],w)

t = sandbox.cuda.basic_ops.gpu_from_host( a * u)
scalarmul_vec = theano.function([a,u], t)

print(vecadd.maker.fgraph.toposort())
print(scalarmul_vec.maker.fgraph.toposort())

u_eg = np.array( [4,6,7], dtype="float32")
v_eg = np.array( [2,1,0], dtype="float32")
print( u_eg.shape)

scalarmul_vec( np.float32(0.5), u_eg )

vecadd( scalarmul_vec( np.float32(0.5), u_eg ) , scalarmul_vec( np.float32(-3.), v_eg ) )
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
d8640e638edfcf3d4118b6dd616262f7
This was the computer equivalent of the mathematical expression
$$ \left[ \begin{matrix} 4 \\ 6 \\ 7 \end{matrix} \right] / 2 - 3 \left[ \begin{matrix} 2 \\ 1 \\ 0 \end{matrix} \right] $$
sAxy or A-V multiplication or so-called "Gemv", or Matrix Multiplication on a vector, or a linear transformation on an R-module, or vector space, i.e.
$$ Av = B $$
B_out = sandbox.cuda.basic_ops.gpu_from_host( T.dot(A,v))
AVmul = theano.function([A,v], B_out)

print(AVmul.maker.fgraph.toposort())

AVmul( np.array([[1,0,3],[2,1,5],[3,1,2]]).astype("float32"), np.array([1,6,2]).astype("float32"))

AVmul( np.array([[1,0,0],[0,1,0],[0,0,1]]).astype("float32"), np.array([1,6,2]).astype("float32"))
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
928202fb28a521c08bec68ba7b66b117
AB or Gemm or Matrix Multiplication, i.e. Ring multiplication i.e. $$ A*B = C $$
C_f = sandbox.cuda.basic_ops.gpu_from_host( T.dot(A,B))
matmul = theano.function([A,B], C_f)

print( matmul.maker.fgraph.toposort())

matmul( np.array( [[1,3],[2,4],[0,5]] ).astype("float32"), np.array([[1,0],[2,3]]).astype("float32") )
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
5eec55a3bcedb85480b2beebc427f8a3
Inverse and Transpose cf. Inverse and Transpose
# note: T.inv is Theano's elementwise reciprocal; the true matrix inverse would
# typically be computed with theano.tensor.nlinalg.matrix_inverse instead
Ainverse = sandbox.cuda.basic_ops.gpu_from_host( T.inv(A))
Ainv = theano.function([A], Ainverse)
print(Ainv.maker.fgraph.toposort())

Atranspose = sandbox.cuda.basic_ops.gpu_from_host( A.T)
AT = theano.function([A],Atranspose)
print(AT.maker.fgraph.toposort())
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
69b5cb488100f368e92f6384207f487e
Summation, sum, mean, scan Linear Regression (again), via Coursera's Machine Learning Intro by Ng, Programming Exercise 1 for Week 2 Boilerplate, load sample data
linregdata = pd.read_csv('./coursera_Ng/machine-learning-ex1/ex1/ex1data1.txt', header=None)

# pandas.DataFrame.as_matrix converts the frame to its numpy-array representation
X_linreg_training = linregdata.as_matrix([0])
y_linreg_training = linregdata.as_matrix([1])

m_linreg_training = len(y_linreg_training)  # number of training examples

print( X_linreg_training.shape, type(X_linreg_training))
print( y_linreg_training.shape, type(y_linreg_training))
print( m_linreg_training )
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
ab6681d1d2ff87dc3ee7fa06d94015bd
Try representing $\theta$, parameters or "weights", of size $|\theta|$ which should be equal to the number of features $n$ (or $d$).
# theta_linreg = T.vector('theta_linreg')
d = X_linreg_training.shape[1]  # d = features

# Declare Theano symbolic variables
X = T.matrix('x')
y = T.vector('y')
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
a31ca6444892d4fccfcc4ac0fb986a26
Preprocess training data (due to numpy's treatment of arrays) (note, this is not needed, if you use pandas to choose which column(s) you want to make into a numpy array)
#X_linreg_training = X_linreg_training.reshape( m_linreg_training,1)
#y_linreg_training = y_linreg_training.reshape( m_linreg_training,1)

# Instead, the training data X and test data values y are going to be represented by the Theano symbolic variables above
#X_linreg = theano.shared(X_linreg_training.astype("float32"),"float32")
#y_linreg = theano.shared(y_linreg_training.astype("float32"),"float32")

#theta_0 = np.zeros( ( d+1,1)); print(theta_0)
theta_0 = np.zeros( d+1); print(theta_0)

theta = theano.shared( theta_0.astype("float32"), "theta")

alpha = np.float32(0.01)  # learning rate gamma or alpha

# Construct Theano "expression graph"
predicted_vals = sandbox.cuda.basic_ops.gpu_from_host( T.dot(X,theta) )  # h_{\theta}

m = np.float32( y_linreg_training.shape[0] )

# cost function
J_theta = sandbox.cuda.basic_ops.gpu_from_host(
    T.dot( (T.dot(X,theta) - y).T, T.dot(X,theta) - y) * np.float32( 0.5 ) * np.float32( 1./ m ) )

update_theta = sandbox.cuda.basic_ops.gpu_from_host( theta - alpha * T.grad( J_theta, theta) )

gradientDescent = theano.function(
    inputs=[X,y],
    outputs=[predicted_vals,J_theta],
    updates=[(theta, update_theta)],
    name = "gradientDescent")

print( gradientDescent.maker.fgraph.toposort() )

num_iters = 1500
J_History = []
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
5f60bc1e3b4d67904d3bfb13bb742975
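To make the expression graph above easier to read, the cost and update it encodes are the standard batch gradient descent equations (my restatement, with the same $1/(2m)$ scaling as the code):
$$ J(\theta) = \frac{1}{2m}\,(X\theta - y)^T (X\theta - y), \qquad \theta \leftarrow \theta - \alpha\, \nabla_{\theta} J(\theta). $$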
Preprocess X to include intercepts
input_X_linreg = np.hstack( ( np.ones((m_linreg_training,1)), X_linreg_training ) ).astype("float32")
y_linreg_training_processed = y_linreg_training.reshape( m_linreg_training,).astype("float32")

J_History = [0 for iter in range(num_iters)]
for iter in range(num_iters):
    predicted_vals_out, J_out = \
        gradientDescent( input_X_linreg.astype("float32"),
                         y_linreg_training_processed.astype("float32") )
    J_History[iter] = J_out

Deg = (np.random.randn(40,10).astype("float32"),
       np.random.randint(size=40, low=0, high=2).astype("float32") )
Deg[0].shape
Deg[1].shape
theta.get_value()
dir( J_History[0] )
J_History[-5].gpudata
plt.plot( [ele.gpudata for ele in J_History])
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
c75345ce4287e88b6ffa2df737ba106e
Denny Britz's way: "Speeding up your Neural Network with Theano and the GPU" (http://www.wildml.com/2015/09/speeding-up-your-neural-network-with-theano-and-the-gpu/) and his jupyter notebook nn-theano/nn-theano-gpu.ipynb (https://github.com/dennybritz/nn-theano/blob/master/nn-theano-gpu.ipynb).
input_X_linreg.shape

# GPU NOTE: Conversion to float32 to store them on the GPU!
X = theano.shared( input_X_linreg.astype('float32'), name='X' )
y = theano.shared( y_linreg_training.astype('float32'), name='y')
# GPU NOTE: Conversion to float32 to store them on the GPU!
theta = theano.shared( np.vstack(theta_0).astype("float32"), name='theta')

# Construct Theano "expression graph"
predicted_vals = sandbox.cuda.basic_ops.gpu_from_host( T.dot(X,theta) )  # h_{\theta}
m = np.float32( y_linreg_training.shape[0] )

# cost function J_theta, J_{\theta}
J_theta = sandbox.cuda.basic_ops.gpu_from_host(
    ( T.dot( (T.dot(X,theta) - y).T, T.dot(X,theta) - y) * np.float32(0.5) * np.float32(1./m) ).reshape([]) )
# reshape is to force "broadcast" into a 0-dim. scalar for the cost function

update_theta = sandbox.cuda.basic_ops.gpu_from_host( theta - alpha * T.grad( J_theta, theta) )

# Note that we removed the input values because we will always use the same shared variable
# GPU Note: Removed the input values to avoid copying data to the GPU.
gradientDescent = theano.function( inputs=[],
                                   # outputs=[predicted_vals, J_theta],
                                   updates=[(theta, update_theta)],
                                   name = "gradientDescent")
print( gradientDescent.maker.fgraph.toposort() )

#J_History = [0 for iter in range(num_iters)]
for iter in range(num_iters):
    gradientDescent( )

print( np.vstack( theta_0).shape )
print( y_linreg_training.shape )
theta.get_value()

# Profiling
print( theano.config.profile )         # Do the vm/cvm linkers profile the execution time of Theano functions?
print( theano.config.profile_memory )  # Do the vm/cvm linkers profile the memory usage of Theano functions? It only works when profile=True.
theano.printing.debugprint(gradientDescent)
#print( gradientDescent.profile.print_summary() )
dir( gradientDescent.profile)
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
cd54dd445c07db3bb17530efd8544faa
Testing the Linear Regression with (Batch) Gradient Descent classes in ./ML/
import sys
import os
#sys.path.append( os.getcwd() + '/ML')
sys.path.append( os.getcwd() + '/ML' )
from linreg_gradDes import LinearReg, LinearReg_loaded
#from ML import LinearReg, LinearReg_loaded
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
395f069b0c5e5f920007fc0bc544b967
Boilerplate for sample input data
linregdata1 = pd.read_csv('./coursera_Ng/machine-learning-ex1/ex1/ex1data1.txt', header=None)
linregdata1.as_matrix([0]).shape
linregdata1.as_matrix([1]).shape
features = linregdata1.as_matrix([0]).shape[1]
numberoftraining = linregdata1.as_matrix([0]).shape[0]

LinReg_housing = LinearReg( features, numberoftraining, 0.01)
Xin = LinReg_housing.preprocess_X( linregdata1.as_matrix([0]))
ytest = linregdata1.as_matrix([1]).flatten()
%time LinReg_housing.build_model( Xin, ytest )

LinRegloaded_housing = LinearReg_loaded( linregdata1.as_matrix([0]), linregdata1.as_matrix([1]), features, numberoftraining )
%time LinRegloaded_housing.build_model()

print( LinReg_housing.gradientDescent.maker.fgraph.toposort() )
print( LinRegloaded_housing.gradientDescent.maker.fgraph.toposort() )
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
83500ddd1ed56f73bd5569af914bae15
Other (sample) datasets.
Consider feature normalization.
def featureNormalize(X):
    """
    FEATURENORMALIZE Normalizes the features in X
    FEATURENORMALIZE(X) returns a normalized version of X where
    the mean value of each feature is 0 and the standard deviation
    is 1. This is often a good preprocessing step to do when
    working with learning algorithms.
    """
    # You need to set these values correctly
    X_norm = (X - X.mean(axis=0)) / X.std(axis=0)
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return [X_norm, mu, sigma]

linregdata2 = pd.read_csv('./coursera_Ng/machine-learning-ex1/ex1/ex1data2.txt', header=None)
features = linregdata2.as_matrix().shape[1] - 1
numberoftraining = linregdata2.as_matrix().shape[0]
Xdat = linregdata2.as_matrix( range(features) )
ytest = linregdata2.as_matrix( [features])

[Xnorm, mus, sigmas] = featureNormalize(Xdat)

LinReg_housing2 = LinearReg( features, numberoftraining, 0.01)
processed_X = LinReg_housing2.preprocess_X( Xnorm )
%time LinReg_housing2.build_model( processed_X, ytest.flatten(), 400)

LinRegloaded_housing2 = LinearReg_loaded( Xnorm, ytest, features, numberoftraining )
%time LinRegloaded_housing2.build_model( 400)
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
05a67078732e8a77de6d52553baf691e
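A quick sanity check of featureNormalize above (my addition, a minimal sketch): each column of the normalized output should have mean approximately 0 and standard deviation approximately 1.

import numpy as np
X_demo = np.array([[1., 200.],
                   [2., 400.],
                   [3., 600.]])
X_demo_norm, mu_demo, sigma_demo = featureNormalize(X_demo)
print(X_demo_norm.mean(axis=0))  # expect values close to 0
print(X_demo_norm.std(axis=0))   # expect values close to 1
print(mu_demo, sigma_demo)       # per-column mean and std of the raw data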
Diabetes data from sklearn (scikit-learn)
# Load the diabetes dataset
diabetes = sklearn.datasets.load_diabetes()
diabetes_X = diabetes.data
diabetes_Y = diabetes.target

#diabetes_X1 = diabetes_X[:,np.newaxis,2]
diabetes_X1 = diabetes_X[:,np.newaxis, 2].astype(theano.config.floatX)
#diabetes_Y = diabetes_Y.reshape( diabetes_Y.shape[0], 1)
diabetes_Y = np.vstack( diabetes_Y.astype(theano.config.floatX) )

features1 = 1
numberoftraining = diabetes_Y.shape[0]

LinReg_diabetes = LinearReg( features1, numberoftraining, 0.01)
processed_X = LinReg_diabetes.preprocess_X( diabetes_X1 )
%time LinReg_diabetes.build_model( processed_X, diabetes_Y.flatten(), 10000)

LinRegloaded_diabetes = LinearReg_loaded( diabetes_X1, diabetes_Y, features1, numberoftraining )
%time LinRegloaded_diabetes.build_model( 10000)
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
9ca1c51f3c737512959393723db58afe
Multiple number of features case:
features = diabetes_X.shape[1]

LinReg_diabetes = LinearReg( features, numberoftraining, 0.01)
processed_X = LinReg_diabetes.preprocess_X( diabetes_X )
%time LinReg_diabetes.build_model( processed_X, diabetes_Y.flatten(), 10000)

LinRegloaded_diabetes = LinearReg_loaded( diabetes_X, diabetes_Y, features, numberoftraining )
%time LinRegloaded_diabetes.build_model( 10000)
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
006822d8311b5b54695ffcb1040e6b40
ex2 Linear Regression, on d=2 features
data_ex1data2 = pd.read_csv('./coursera_Ng/machine-learning-ex1/ex1/ex1data2.txt', header=None)
X_ex1data2 = data_ex1data2.iloc[:,0:2]
y_ex1data2 = data_ex1data2.iloc[:,2]
m_ex1data2 = y_ex1data2.shape[0]
X_ex1data2 = X_ex1data2.values.astype(np.float32)
y_ex1data2 = y_ex1data2.values.reshape((m_ex1data2,1)).astype(np.float32)

print(type(X_ex1data2))
print(type(y_ex1data2))
print(X_ex1data2.shape)
print(y_ex1data2.shape)
print(m_ex1data2)
print(X_ex1data2[:5])
print(y_ex1data2[:5])

((X_ex1data2[:,1] - X_ex1data2[:,1].mean()) / ( X_ex1data2[:,1].std()) ).std()

# feature Normalize
#X_ex1data2_norm = sklearn.preprocessing.Normalizer.transform(X_ex1data2)
X_ex1data2_norm = (X_ex1data2 - np.mean(X_ex1data2, axis=0)) / np.std(X_ex1data2, axis=0)
print(X_ex1data2_norm[:,0].mean())
print(X_ex1data2_norm[:,0].std())
print(X_ex1data2_norm[:,1].mean())
print(X_ex1data2_norm[:,1].std())
# X_ex1data2_norm[:5];

X = T.matrix(dtype=theano.config.floatX)
y = T.matrix(dtype=theano.config.floatX)
Theta = theano.shared(np.zeros((2,1)).astype(theano.config.floatX))
b = theano.shared(np.zeros(1).astype(theano.config.floatX))
print(b.get_value().shape)

yhat = T.dot( X, Theta) + b
# L2 norm
J = np.cast[theano.config.floatX](0.5) * T.mean( T.sqr( yhat - y))

alpha = 0.01  # learning rate
# sandbox.cuda.basic_ops.gpu_from_host
updateThetab = [ Theta - np.float32(alpha)*T.grad(J, Theta),
                 b - np.float32(alpha)*T.grad(J, b)]

gradientDescent_step = theano.function(inputs=[X,y],
                                       outputs=J,
                                       updates=zip([Theta, b], updateThetab) )

num_iters = 400
JList = []
for iter in range(num_iters):
    err = gradientDescent_step(X_ex1data2_norm, y_ex1data2)
    JList.append(err)

# Final model:
print(Theta.get_value())
print(b.get_value())
# JList[-10:]
plt.plot(JList)
plt.show()
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
e5efe1a2a1c720733a1ae9b0052d15fa
Multi-class Classification
cf. ex3, Programming Exercise 3: Multi-class Classification and Neural Networks, Machine Learning.
1 Multi-class Classification
os.getcwd()
os.listdir( './coursera_Ng/machine-learning-ex3/' )
os.listdir( './coursera_Ng/machine-learning-ex3/ex3' )

# Load saved matrices from file
multiclscls_data = scipy.io.loadmat('./coursera_Ng/machine-learning-ex3/ex3/ex3data1.mat')
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
ac566db22d89e57da449a2f09f75bf54
import the classes from ML
import sys
import os
os.getcwd()
#sys.path.append( os.getcwd() + '/ML')
sys.path.append( os.getcwd() + '/ML' )
from gradDes import LogReg

# Test case for Cost function J_{\theta} with regularization
theta_t = np.vstack( np.array( [-2, -1, 1, 2]) )
X_t = np.array( [i/10. for i in range(1,16)]).reshape((3,5)).T
#X_t = np.hstack( ( np.ones((5,1)), X_t) )  # no need to preprocess the input data X with a column of 1's
y_t = np.vstack( np.array( [1,0,1,0,1]))

MulClsCls_digits = LogReg( X_t, y_t, 3, 5, 0.01, 3. )
MulClsCls_digits.calculate_cost()
MulClsCls_digits.z.get_value()
print( MulClsCls_digits.X.get_value() )
MulClsCls_digits.y.get_value()

calc_z_test = theano.function([], MulClsCls_digits.z)
calc_z_test()
MulClsCls_digits.theta.set_value( theta_t.astype('float32') )
calc_z_test()
MulClsCls_digits.calculate_cost()

print( 1/(1+np.exp( np.dot( -np.hstack( ( np.ones((5,1)), X_t) ), theta_t) ) ) )
h_test = 1/(1+np.exp( np.dot( -np.hstack( ( np.ones((5,1)), X_t) ), theta_t) ) )
print( np.dot( (h_test - y_t).T, h_test - y_t) * 0.5/5 )  # non-regularized J_theta cost term
np.dot( theta_t[1:].T, theta_t[1:]) * 3 / (2.*5)          # regularization term, lambda/(2m) * sum theta_j^2

MulClsCls_digits.predict()
MulClsCls_digits
theano.config.floatX
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
e109cb9eb1b0d39c776fc65210638f27
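Reading off the manual check in the cell above, the cost being tested appears to be a regularized squared-error cost on the sigmoid output (my restatement for clarity, not quoted from gradDes.py), with $\lambda = 3$ and $m = 5$ in this test case:
$$ J(\theta) = \frac{1}{2m}\,\big(h_{\theta}(X) - y\big)^T \big(h_{\theta}(X) - y\big) + \frac{\lambda}{2m} \sum_{j \geq 1} \theta_j^2, \qquad h_{\theta}(X) = \frac{1}{1 + e^{-X\theta}}. $$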
Neural Networks: Model representation
cf. 2 Neural Networks, 2.1 Model representation, ex3.pdf
os.getcwd()
os.listdir( './coursera_Ng/machine-learning-ex3/' )
os.listdir( './coursera_Ng/machine-learning-ex3/ex3/' )
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
5fe58b7eaf216b60644c5b5df9eab4fc
$ \Theta_1, \Theta_2 $
# Load saved matrices from file
nn3_data = scipy.io.loadmat('./coursera_Ng/machine-learning-ex3/ex3/ex3weights.mat')
print( nn3_data.keys() )
print( type( nn3_data['Theta1']) )
print( type( nn3_data['Theta2']) )
print( nn3_data['Theta1'].shape )
print( nn3_data['Theta2'].shape )
nn3_data['Theta1'][0]
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
2514e5eebebc1f3ae675e5d73630a750
Feedforward
%load_ext tikzmagic
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
1e0c3ed2659af88eabe16f84a34337c1
$$
\begin{tikzpicture}
  \matrix (m) [matrix of math nodes, row sep=3em, column sep=4em, minimum width=2em]
  {
    \mathbb{R}^{s_l} & \mathbb{R}^{s_l + 1} & \mathbb{R}^{s_{l+1}} & \mathbb{R}^{s_{l+1}} \\
    a^{(l)} & (a_0^{(l)} = 1, a^{(l)}) & z^{(l+1)} & g(z^{(l+1)}) = a^{(l+1)} \\
  };
  \path[->]
    (m-1-1) edge node [above] {$a_0^{(l)}=1$} (m-1-2)
    (m-1-2) edge node [above] {$\Theta^{(l)}$} (m-1-3)
    (m-1-3) edge node [above] {$g$} (m-1-4);
  \path[|->]
    (m-2-1) edge node [above] {$a_0^{(l)}=1$} (m-2-2)
    (m-2-2) edge node [above] {$\Theta^{(l)}$} (m-2-3)
    (m-2-3) edge node [above] {$g$} (m-2-4);
\end{tikzpicture}
$$
np.random.seed(0)
s_l = 400    # (layer) size of layer l, i.e. number of nodes, units in layer l
s_lp1 = 25
al = theano.shared( np.random.randn(s_l+1,1).astype('float32'), name="al")
#alp1 = theano.shared( np.random.randn(s_lp1,1).astype('float32'), name="al")
#Thetal = theano.shared( np.random.randn( s_lp1,s_l+1).astype('float32'), name="Thetal")

# Feedforward, forward propagation
#z = T.dot( Thetal, al)
#g = T.nnet.sigmoid( z)

s_l = 25
s_lp1 = 10
rng = np.random.RandomState(99)
Theta_values = np.asarray( rng.uniform( low=-np.sqrt( 6. / (s_l + s_lp1)),
                                        high=np.sqrt( 6. / (s_l + s_lp1)),
                                        size=(s_lp1, s_l+1)),
                           dtype=theano.config.floatX )
print( Theta_values.shape )
print( Theta_values.dtype )
#Theta_values *= np.float32(4)
Theta_values *= 4.
print( Theta_values.dtype)
Theta_values.shape
np.float32( 4)
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
1977c4220c0e42e2be5217a34faecc46
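To make the diagram above concrete, here is a minimal numpy sketch of one feedforward step (my own illustration, not part of the original notebook), assuming a sigmoid activation $g$ and column-vector activations:

import numpy as np

def sigmoid(z):
    return 1. / (1. + np.exp(-z))

s_l, s_lp1 = 3, 2                            # sizes of layer l and layer l+1
a_l = np.random.randn(s_l, 1)                # activations a^{(l)}, shape (s_l, 1)
Theta_l = np.random.randn(s_lp1, s_l + 1)    # Theta^{(l)}, shape (s_{l+1}, s_l + 1)

a_l_aug = np.vstack([np.ones((1, 1)), a_l])  # prepend the bias unit a_0^{(l)} = 1
z_lp1 = Theta_l.dot(a_l_aug)                 # z^{(l+1)} = Theta^{(l)} (1, a^{(l)})
a_lp1 = sigmoid(z_lp1)                       # a^{(l+1)} = g(z^{(l+1)})
print(a_lp1.shape)                           # (s_{l+1}, 1)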
From Deep Learning Tutorials of LISA lab of University of Montreal; logistic_sgd.py, mlp.py
%env os.getcwd() print( sys.path ) #sys.path.append( os.getcwd() + '/ML') sys.path.append( '../DeepLearningTutorials/code/' ) #from logistic_sgd import LogisticRegression, load_data, sgd_optimization_mnist, predict import logistic_sgd MNIST_MTLdat = logistic_sgd.load_data("../DeepLearningTutorials/data/mnist.pkl.gz") # list of training data print(len(MNIST_MTLdat)) print(type(MNIST_MTLdat)) for ele in MNIST_MTLdat: print type(ele), len(ele) # test_set_x, test_set_y, valid_set_x, valid_set_y, train_set_x, print( MNIST_MTLdat[0][0].get_value().shape) print( type(MNIST_MTLdat[0][1])) print( MNIST_MTLdat[0][1].get_scalar_constant_value ) print( type( MNIST_MTLdat[1][1] ) ) MNIST_MTLdat[1][1].shape dir(MNIST_MTLdat[0][1]) ; import gzip import six.moves.cPickle as pickle with gzip.open("../DeepLearningTutorials/data/mnist.pkl.gz", 'rb') as f: try: train_set, valid_set, test_set = pickle.load(f, encoding='latin1') except: train_set, valid_set, test_set = pickle.load(f) print( type( train_set[0] )) print( train_set[0].shape ) print( type( train_set[1])) print( train_set[1].shape ) print( type( valid_set[0] )) print( valid_set[0].shape ) print( type( valid_set[1])) print( valid_set[1].shape ) print( type( test_set[0] )) print( test_set[0].shape ) print( type( test_set[1])) print( test_set[1].shape ) X = train_set[0].T pd.DataFrame(X.T).describe() 28*28 X_i = theano.shared( X.astype("float32")) m = X_i.get_value().shape[1] a1 = T.stack( [ theano.shared( np.ones((1,m)).astype("float32") ) , X_i ] , axis=1 ) print( type(a1) ) #print( a1.get_scalar_constant_value() ) dir(a1) a1.get_parents() a1.ndim a1_0 = theano.shared( np.ones((1,m)).astype("float32"),name='a1_0') a1 = T.stack( [a1_0,X_i], axis=0) d = X_i.get_value().shape[0] s_2 = d/2 rng1 = np.random.RandomState(1234) Theta1_values = np.asarray( rng1.uniform( low=-np.sqrt(6./(d+s_2)),high=np.sqrt(6./(d+s_2)),size=(s_2,d+1)), dtype=theano.config.floatX) Theta1 = theano.shared(value=Theta1_values, name="Theta",borrow=True) #rng1.uniform( low=-np.sqrt(6./(d+s_2)),high=np.sqrt(6./(d+s_2)),size=(s_2,d+1)) z1 = T.dot( Theta1, a1) a2 = T.tanh(z1) passthru1 = theano.function( [], a2) print(d) passthru1() print(X.shape) X_i = theano.shared( X.astype("float32")) #m = X_i.get_value().shape[1] m = X.shape[1] print(m) a1_0 = theano.shared( np.ones((1,m)).astype("float32"),name='a1_0') print(a1_0.get_value().shape) a1 = T.stack( [a1_0,X_i], axis=0) addintercept = theano.function([],a1) addintercept() d = X_i.get_value().shape[0] print(d) s_2 = d/2 print(s_2) rng1 = np.random.RandomState(1234) Theta1_values = np.asarray( rng1.uniform( low=-np.sqrt(6./(d+s_2)),high=np.sqrt(6./(d+s_2)),size=(s_2,d)), dtype=theano.config.floatX) Theta1 = theano.shared(value=Theta1_values, name="Theta1",borrow=True) b_values = np.vstack( np.zeros(s_2) ).astype(theano.config.floatX) b1 = theano.shared(value=b_values, name='b1',borrow=True) a1_values=np.array( np.zeros( (d,m)), dtype=theano.config.floatX) a1 = theano.shared(value=a1_values, name='a1', borrow=True) lin_z2 = T.dot( Theta1, a1) + T.tile(b1,(1,m)) #lin_z2 = T.dot( Theta1, a1) test_mult = theano.function([],lin_z2) print( type(b_values)) b_values.dtype test_mult() print( b1.get_value().shape ) T.tile( b1, (0,m))
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
84ac9d41edc6429c1ca61d77cad3fec6
NN.py: load NN.py for the Layer class for a Neural Net with Multiple Layers.
import sys
import os
#sys.path.append( os.getcwd() + '/ML')
sys.path.append( os.getcwd() + '/ML' )
from NN import Layer, cost_functional, cost_functional_noreg, gradientDescent_step
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
69a199ed9820c2092147587ddc87420a
Boilerplate sample data, from Coursera's Machine Learning Introduction
# Load Training Data
print("Loading and Visualizing Data ... \n")
ex4data1 = scipy.io.loadmat('./coursera_Ng/machine-learning-ex4/ex4/ex4data1.mat')
ex4data1.keys()
print( ex4data1['X'].shape )
print( ex4data1['y'].shape )

test_rng = np.random.RandomState(1234)
#Theta1 = Layer( test_rng, 1, 400, 25, 5000)
#help(Theta1.al.set_value);
# Beginning with Theano 0.3.1, set_value will work in-place on the GPU, if ... source on CPU
Theta1.al.set_value( ex4data1['X'].T.astype(theano.config.floatX))
Theta1.alp1
print( type( Theta1.alp1 ) )

Theta2 = Layer( test_rng, 2, 25, 10, 5000, al=Theta1.alp1 )
Theta2.alp1
predicted = theano.function([], sandbox.cuda.basic_ops.gpu_from_host( Theta2.alp1 ) )
predicted().shape

print( ex4data1['y'].shape )
pd.DataFrame( ex4data1['y']).describe()

# recall that whereas the original labels (in the variable y) were 1, 2, ..., 10, for the purpose of training a
# neural network, we need to recode the labels as vectors containing only values 0 or 1
K = 10
m = ex4data1['y'].shape[0]
y_prob = [np.zeros(K) for row in ex4data1['y']]  # list of 5000 numpy arrays of size dims. (10,)
for i in range( m):
    y_prob[i][ ex4data1['y'][i]-1] = 1
y_prob = np.array(y_prob).T.astype(theano.config.floatX)  # size dims. (K,m)
print(y_prob.shape)
print( type(y_prob) )
type( np.asarray( y_prob, dtype=theano.config.floatX) )

help( T.nlinalg.trace )
y_sh_var = theano.shared( np.asarray( y_prob, dtype=theano.config.floatX), name='y')
h_test = Theta2.alp1
J = sandbox.cuda.basic_ops.gpu_from_host(
    (-T.nlinalg.trace( T.dot( T.log( h_test ), y_sh_var.T))
     - T.nlinalg.trace( T.dot( T.log( np.float32(1.) - h_test), (np.float32(1.) - y_sh_var.T) ))) / np.float32(m) )
print(type(J))
test_cost_func = theano.function([], J)
test_cost_func()

J_test_build = sandbox.cuda.basic_ops.gpu_from_host( -T.nlinalg.trace( T.dot( T.log(h_test), y_sh_var.T) ) )
test_cost_build_func = theano.function([], J_test_build)
test_cost_build_func()
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
e944cb97212f0f26e137a97a425b1c64
Sanity check using ex4.m, Exercise 4 or Programming Exercise 4 from Coursera's Machine Learning Introduction by Ng
Theta_testvals = scipy.io.loadmat('./coursera_Ng/machine-learning-ex4/ex4/ex4weights.mat')
print( Theta_testvals.keys() )
print( Theta_testvals['Theta1'].shape )
print( Theta_testvals['Theta2'].shape )

Theta1_testval = Theta_testvals['Theta1'][:,1:]
b1_testval = Theta_testvals['Theta1'][:,0:1]
print( Theta1_testval.shape )
print( b1_testval.shape )

Theta2_testval = Theta_testvals['Theta2'][:,1:]
b2_testval = Theta_testvals['Theta2'][:,0:1]
print( Theta2_testval.shape )
print( b2_testval.shape )

Theta1 = Layer( test_rng, 1, 400, 25, 5000, activation=T.nnet.sigmoid)
Theta1.Theta.set_value( Theta1_testval.astype("float32"))
Theta1.b.set_value( b1_testval.astype('float32') )
Theta1.al.set_value( ex4data1['X'].T.astype('float32'))
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
3f0230559c28ec85f5ada8d4e9acfc3d
For $\Theta^{(2)}$, the key to connecting $\Theta^{(2)}$ with $\Theta^{(1)}$ is to set the al argument of class Layer to al=Theta1.alp1, so that the second layer's input is the first layer's output expression.
Theta2 = Layer( test_rng, 2, 25,10,5000, al=Theta1.alp1 , activation=T.nnet.sigmoid) Theta2.Theta.set_value( Theta2_testval.astype('float32')) Theta2.b.set_value( b2_testval.astype('float32')) h_test = Theta2.alp1 J = sandbox.cuda.basic_ops.gpu_from_host( T.mean( T.sum( - y_sh_var * T.log( h_test ) - ( np.float32( 1) - y_sh_var) * T.log( np.float32(1) - h_test), axis =0), axis=0) ) #J = sandbox.cuda.basic_ops.gpu_from_host( # T.log(h_test) * y_sh_var # ) test_cost_func = theano.function([],J) test_cost_func() print(type( y_sh_var) ) print( y_sh_var.get_value().shape ) print( type( h_test )) checklayer2 = theano.function([], sandbox.cuda.basic_ops.gpu_from_host(Theta1.alp1)) checklayer2() testreg = theano.function([], T.sum( Theta1.Theta * Theta1.Theta ) ) testreg() range(1,3) Thetas_lst = [ Theta1.Theta, Theta2.Theta ] T.sum( [ T.sum( theta*theta) for theta in Thetas_lst] ) cost_func_test = cost_functional(3, 1, y_prob, Theta2.alp1, [Theta1.Theta, Theta2.Theta]) cost_test = theano.function([], cost_func_test) cost_test() # (this value should be about 0.383770) grad_test = T.grad( cost_func_test,[Theta1.Theta, Theta2.Theta]) grad_test_test = theano.function([], grad_test) print( type(grad_test_test() ) ) print( len( grad_test_test() )) print( type(grad_test_test()[0] )) print( grad_test_test()[0].shape ) print( grad_test_test()[1].shape ) print( range(6)) print( list( "Ernest") ) zip( range(6), list("Ernest")) print( type(grad_test)) print( grad_test_test.maker.fgraph.toposort() ) 0.01 * grad_test test_update = [(Theta,sandbox.cuda.basic_ops.gpu_from_host( Theta - np.float32(0.01)*T.grad(cost_func_test, Theta)+0.0001*Theta ) ) for Theta in [Theta1.Theta, Theta2.Theta] ] test_gradDes_step = theano.function( inputs=[], updates= test_update ) test_gradDes_step() print( Theta1.Theta.get_value() ) print( Theta2.Theta.get_value() ) test_gradDes_step() print( Theta1.Theta.get_value() ) print( Theta2.Theta.get_value() ) gradDes_test_res = gradientDescent_step(cost_func_test, [Theta1.Theta, Theta2.Theta], 0.01, 0.00001 ) print( type(gradDes_test_res) ) gradDes_step_test = gradDes_test_res[1] gradDes_step_test() print( Theta1.Theta.get_value() ) print( Theta2.Theta.get_value() ) gradDes_step_test() print( Theta1.Theta.get_value() ) print( Theta2.Theta.get_value() ) y_prob.shape ex4data1['y'].shape pd.DataFrame( ex4data1['y']).describe() print( Theta2.alp1.shape ) print( Theta2.alp1.shape.ndim ) # Theta2.alp1.shape.get_scalar_constant_value() predicted_logreg = theano.function([],Theta2.alp1) pd.DataFrame( predicted_logreg().T ).describe() pd.DataFrame(predicted_logreg().T).describe().iloc[1:-1,:].plot() print( np.argmax( predicted_logreg(), axis=0).shape ) np.vstack( np.argmax( predicted_logreg(),axis=0) ).shape pd.DataFrame( np.vstack( np.argmax(predicted_logreg(),axis=0)) + 1).describe() res = np.float32( ( np.vstack( np.argmax( predicted_logreg(),axis=0)) + 1 ) == ex4data1['y'] ) pd.DataFrame(res).describe() range(1,3) predicted_logreg().shape print(y_prob.shape); print( np.argmax( y_prob,axis=0 ).shape)
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
3391e6f3ef4466c1bcd26a4b120dcda2
Summary for Neural Net with Multiple Layers for logistic regression (but it can be extended to linear regression).
Load boilerplate training data:
sys.path.append( os.getcwd() + '/ML' )
from NN import Layer, cost_functional, cost_functional_noreg, gradientDescent_step, MLP

# Load Training Data
print("Loading and Visualizing Data ... \n")
ex4data1 = scipy.io.loadmat('./coursera_Ng/machine-learning-ex4/ex4/ex4data1.mat')

# recall that whereas the original labels (in the variable y) were 1, 2, ..., 10, for the purpose of training a
# neural network, we need to recode the labels as vectors containing only values 0 or 1
K = 10
m = ex4data1['y'].shape[0]
y_prob = [np.zeros(K) for row in ex4data1['y']]  # list of 5000 numpy arrays of size dims. (10,)
for i in range( m):
    y_prob[i][ ex4data1['y'][i]-1] = 1
y_prob = np.array(y_prob).T.astype(theano.config.floatX)  # size dims. (K,m)
print(ex4data1['X'].T.shape)
print(y_prob.shape)

digitsMLP = MLP(3, [400,25,10], 5000, ex4data1['X'].T, y_prob, T.nnet.sigmoid, 1., 0.1, 0.0000)
digitsMLP.train_model(100000)
digitsMLP.accuracy_log_reg()

print( digitsMLP.Thetas[0].Theta.get_value() )
digitsMLP.Thetas[1].Theta.get_value()
digitsMLP.predicted_vals_logreg()

testL1a2 = theano.function([], digitsMLP.Thetas[0].alp1 )
print( testL1a2() )
testL2a2 = theano.function([], digitsMLP.Thetas[1].al )
print( testL2a2() )

[1,2,3,4,5] + [8,1,5]

print( digitsMLP.y.shape )
y_cls_test = np.vstack( np.argmax( digitsMLP.y, axis=0) )
print( y_cls_test.shape )
pd.DataFrame( y_cls_test ).describe()

pred_y_cls_test = np.vstack( np.argmax( digitsMLP.predicted_vals_logreg(), axis=0))
print( pred_y_cls_test.shape )
pd.DataFrame( pred_y_cls_test ).describe()

np.mean( pred_y_cls_test == y_cls_test )
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
dada1171043a210c57ad16475df0e606
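For reference, the regularized cross-entropy cost that cost_functional above presumably computes (a standard form, stated here for clarity rather than quoted from NN.py) is
$$ J(\Theta) = -\frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{K} \Big[ y_k^{(i)} \log h_{\Theta}(x^{(i)})_k + \big(1 - y_k^{(i)}\big) \log\big(1 - h_{\Theta}(x^{(i)})_k\big) \Big] + \frac{\lambda}{2m} \sum_{l} \sum_{i,j} \big(\Theta_{ij}^{(l)}\big)^2. $$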
Testing on MNIST data from the University of Montreal Deep Learning Tutorial.
K = 10
m = len(train_set[1])
y_train_prob = [np.zeros(K) for row in train_set[1]]  # list of one-hot numpy arrays of size dims. (10,), one per training example
for i in range( m):
    y_train_prob[i][ train_set[1][i]] = 1
y_train_prob = np.array(y_train_prob).T.astype(theano.config.floatX)  # size dims. (K,m)
print( y_train_prob.shape )
print( pd.DataFrame( y_train_prob).describe() )

m, d = train_set[0].shape
MNIST_MTL = MLP(3, [d,25,10], m, train_set[0].T, y_train_prob, T.nnet.sigmoid, 1., 0.1, 0.00001)
MNIST_MTL.accuracy_log_reg()
print( MNIST_MTL.Thetas[0].Theta.get_value() )
MNIST_MTL.Thetas[1].Theta.get_value()
MNIST_MTL.predicted_vals_logreg()

MNIST_MTL.train_model(100000)
MNIST_MTL.accuracy_log_reg()
print( MNIST_MTL.Thetas[0].Theta.get_value() )
MNIST_MTL.Thetas[1].Theta.get_value()
MNIST_MTL.predicted_vals_logreg()
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
6dd7414924d89218f4536744bdf7b932
Save the model; cf. Getting Started, DeepLearning 0.1 documentation, Loading and Saving Models
import cPickle

save_file = open('./saved_models/MNIST_MTL_log_reg', 'wb')
for Thet in MNIST_MTL.Thetas:
    cPickle.dump( Thet.Theta.get_value(borrow=True), save_file, -1)  # the -1 is for HIGHEST_PROTOCOL
    cPickle.dump( Thet.b.get_value(borrow=True), save_file, -1)
save_file.close()

MNIST_MTL.Thetas[0].al.set_value( valid_set[0].T.astype(theano.config.floatX) )

K = 10
m = len(valid_set[1])
y_valid_prob = [np.zeros(K) for row in valid_set[1]]  # list of one-hot numpy arrays of size dims. (10,), one per validation example
for i in range( m):
    y_valid_prob[i][ valid_set[1][i]] = 1
y_valid_prob = np.array(y_valid_prob).T.astype(theano.config.floatX)  # size dims. (K,m)
print( y_valid_prob.shape )

MNIST_MTL.y = y_valid_prob
MNIST_MTL.predicted_vals_logreg()
theano.function([], MNIST_MTL.Thetas[0].alp1)()

Layer1 = MNIST_MTL.Thetas[0]
Layer2 = MNIST_MTL.Thetas[1]
m = valid_set[0].shape[0]
print(m)
a2 = T.nnet.sigmoid( T.dot( Layer1.Theta, Layer1.al) + T.tile( Layer1.b, (1,m)) )
a3 = T.nnet.sigmoid( T.dot( Layer2.Theta, a2) + T.tile( Layer2.b, (1,m)) )
valid_pred = theano.function([], a3)()
print( valid_pred.shape)
pd.DataFrame( valid_pred.T).describe()
np.mean( np.vstack( np.argmax( valid_pred, axis=0)) == np.vstack( valid_set[1] ) )

X_in = T.matrix()
# X_in.set_value( valid_set[0].T.astype(theano.config.floatX))  # T.matrix() is symbolic, not shared, so it has no set_value; the givens below supply the data instead
a2_giv = T.nnet.sigmoid( T.dot( Layer1.Theta, X_in) + T.tile( Layer1.b, (1,m)))
a3_giv = T.nnet.sigmoid( T.dot( Layer2.Theta, a2_giv) + T.tile( Layer2.b, (1,m)) )
valid_pred_givens = theano.function([], outputs=a3_giv,
                                    givens={ X_in: valid_set[0].T.astype(theano.config.floatX)} )
print( valid_pred_givens().shape )
pd.DataFrame( valid_pred_givens().T).describe()
np.mean( np.vstack( np.argmax( valid_pred_givens(), axis=0)) == np.vstack( valid_set[1] ) )

test_pred_givens = theano.function([], outputs=a3_giv,
                                   givens={ X_in: test_set[0].T.astype(theano.config.floatX)} )
np.mean( np.vstack( np.argmax( test_pred_givens(), axis=0)) == np.vstack( test_set[1] ) )

range(1,3)
range(3)
range(1,3-1)
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
d8efd6cd2882d3c036afae8c80c045e1
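The cell above only saves the parameters; a minimal sketch of loading them back (my addition, assuming the same model object and the (Theta, b) dump order used above) could look like:

import cPickle  # Python 2, matching the notebook

load_file = open('./saved_models/MNIST_MTL_log_reg', 'rb')
for Thet in MNIST_MTL.Thetas:
    # parameters were dumped as (Theta, b) for each layer, so read them back in the same order
    Thet.Theta.set_value(cPickle.load(load_file), borrow=True)
    Thet.b.set_value(cPickle.load(load_file), borrow=True)
load_file.close()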
cf. Glass Classification
gls_data = pd.read_csv( "./kaggle/glass.csv") gls_data.describe() gls_data.get_values().shape X_gls = gls_data.get_values()[:,:-1] print(X_gls.shape) y_gls = gls_data.get_values()[:,-1] print(y_gls.shape) print( y_gls[:10]) X_gls_train = gls_data.get_values()[:-14,:-1] print(X_gls_train.shape) y_gls_train = gls_data.get_values()[:-14,-1] print(y_gls_train.shape) K=7 m = len(y_gls_train) y_gls_train_prob = [np.zeros(K) for row in y_gls_train] # list of 5000 numpy arrays of size dims. (10,) for i in range( m): y_gls_train_prob[i][ y_gls_train[i]-1] = 1 y_gls_train_prob = np.array(y_gls_train_prob).T.astype(theano.config.floatX) # size dims. (K,m) print( y_gls_train_prob.shape ) gls_MLP = MLP( 3, [9,8,7],200, X_gls_train.T, y_gls_train_prob, T.nnet.sigmoid, 0.01,0.05,0.0001 ) gls_MLP.accuracy_log_reg() gls_MLP.train_model(10000) gls_MLP.accuracy_log_reg() gls_MLP.predicted_vals_logreg() gls_MLP.train_model(10000) gls_MLP.accuracy_log_reg() ga X_gls_test = gls_data.get_values()[-14:,:-1] print( X_gls_test.shape ) y_gls_test = gls_data.get_values()[-14:,-1] print( y_gls_test.shape) gls_predict_on_test = gls_MLP.predict_on( 14, X_gls_test.T ) np.mean( np.vstack( np.argmax( gls_predict_on_test(), axis=0) ) == (y_gls_test-1) ) y_gls_test np.vstack( np.argmax( gls_predict_on_test(), axis=0)) X_sym = T.matrix() rng = np.random.RandomState(1234) Thetab1 = Layer( rng, 1, 4,3,2, al = X_sym, activation=T.nnet.sigmoid) Thetab1.alp1 Thetab1.Theta.get_value().shape Thetab2 = Layer( rng, 2, 3,2,2, al=Thetab1.alp1, activation=T.nnet.sigmoid) Thetab2.al = Thetab1.alp1 X_sym.shape[0] T.tile( Thetab1.b, (1, X_sym.shape[0])) test12comp = theano.function( [], outputs=Thetab2.alp1, givens={ X_sym : X42test} ) X42test = np.array([1,2,3,4,5,6,7,8]).reshape((4,2)).astype(theano.config.floatX) test12comp() X43test = np.array(range(1,13)).reshape((4,3)).astype(theano.config.floatX) X43test test43comp = theano.function( [], outputs=Thetab2.alp1, givens={ X_sym : X43test} ) test43comp() print( type(Thetab1.al )) lin_zlp1 = T.dot(Thetab1.Theta, Thetab1.al)+T.tile( Thetab1.b, (1,Thetab1.al.shape[1])) a1p1 = Thetab1.g( lin_zlp1 ) Thetab1.al = X_sym Thetab2.al = a1p1 lin_z2p1 = T.dot(Thetab2.Theta, Thetab2.al)+T.tile( Thetab2.b, (1, Thetab2.al.shape[1])) a2p1 = Thetab2.g( lin_z2p1 ) test_gen_conn = theano.function([], outputs=a2p1, givens={ Thetab1.al : X42test }) test_gen_conn() test_gen_conn = theano.function([], outputs=a2p1, givens={ Thetab1.al : X43test }) test_gen_conn()
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
a760ef0056ac6e7b0b6dd67595b32ec5
GPU test
test_gen_conn = theano.function([], outputs=sandbox.cuda.basic_ops.gpu_from_host(a2p1),
                                givens={ Thetab1.al : X42test })
test_gen_conn()

test_gen_conn = theano.function([], outputs=sandbox.cuda.basic_ops.gpu_from_host(a2p1),
                                givens={ Thetab1.al : X43test })
test_gen_conn()
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
495c11974b1428193638d753ae0b45b3
Summary for Neural Net with Multiple Layers for logistic regression (but can be extended to linear regression)
sys.path.append( os.getcwd() + '/ML' )
from NN import MLP

# Load Training Data
print("Loading and Visualizing Data ... \n")
ex4data1 = scipy.io.loadmat('./coursera_Ng/machine-learning-ex4/ex4/ex4data1.mat')

# recall that whereas the original labels (in the variable y) were 1, 2, ..., 10, for the purpose of training a
# neural network, we need to recode the labels as vectors containing only values 0 or 1
K = 10
m = ex4data1['y'].shape[0]
y_prob = [np.zeros(K) for row in ex4data1['y']]  # list of 5000 numpy arrays of size dims. (10,)
for i in range( m):
    y_prob[i][ ex4data1['y'][i]-1] = 1
y_prob = np.array(y_prob).T.astype(theano.config.floatX)  # size dims. (K,m)
print(ex4data1['X'].T.shape)
print(y_prob.shape)

digitsMLP = MLP( 3, [400,25,10], ex4data1['X'].T, y_prob, T.nnet.sigmoid, 1.)
digitsMLP.build_update(ex4data1['X'].T, y_prob, 0.01, 0.00001)
digitsMLP.predicted_vals_logreg()
digitsMLP.accuracy_logreg( ex4data1['X'].T, y_prob)
digitsMLP.train_model(10000)
digitsMLP.accuracy_logreg( ex4data1['X'].T, y_prob)
digitsMLP.train_model(50000)
digitsMLP.accuracy_logreg( ex4data1['X'].T, y_prob)
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
45d050f73782a18571b0b506c93a1783
Testing on University of Montreal LISA lab MNIST data
import gzip
import six.moves.cPickle as pickle

with gzip.open("../DeepLearningTutorials/data/mnist.pkl.gz", 'rb') as f:
    try:
        train_set, valid_set, test_set = pickle.load(f, encoding='latin1')
    except:
        train_set, valid_set, test_set = pickle.load(f)

K = 10
m = len(train_set[1])
y_train_prob = [np.zeros(K) for row in train_set[1]]  # list of one-hot numpy arrays of size dims. (10,), one per training example
for i in range( m):
    y_train_prob[i][ train_set[1][i]] = 1
y_train_prob = np.array(y_train_prob).T.astype(theano.config.floatX)  # size dims. (K,m)
print( y_train_prob.shape )

MNIST_MLP = MLP( 3, [784,49,10], train_set[0].T, y_train_prob, T.nnet.sigmoid, 1.)
MNIST_MLP.build_update( train_set[0].T, y_train_prob, 0.01, 0.0001)
MNIST_MLP.accuracy_logreg( train_set[0].T, y_train_prob)
MNIST_MLP.train_model(50000)
MNIST_MLP.accuracy_logreg( train_set[0].T, y_train_prob)
%time MNIST_MLP.train_model(100000)
MNIST_MLP.accuracy_logreg( train_set[0].T, y_train_prob)

m = len(valid_set[1])
y_valid_prob = [np.zeros(K) for row in valid_set[1]]  # one-hot labels for the validation set
for i in range( m):
    y_valid_prob[i][ valid_set[1][i]] = 1
y_valid_prob = np.array(y_valid_prob).T.astype(theano.config.floatX)  # size dims. (K,m)
print( y_valid_prob.shape )

m = len(test_set[1])
y_test_prob = [np.zeros(K) for row in test_set[1]]  # one-hot labels for the test set
for i in range( m):
    y_test_prob[i][ test_set[1][i]] = 1
y_test_prob = np.array(y_test_prob).T.astype(theano.config.floatX)  # size dims. (K,m)
print( y_test_prob.shape )

MNIST_MLP.accuracy_logreg( valid_set[0].T, y_valid_prob)
MNIST_MLP.accuracy_logreg( test_set[0].T, y_test_prob)

MNIST_d = train_set[0].T.shape[0]
print(MNIST_d)
MNIST_MLP = MLP( 3, [MNIST_d,25,10], train_set[0].T, y_train_prob, T.nnet.sigmoid, 1.)
MNIST_MLP.build_update( train_set[0].T, y_train_prob, 0.1, 0.00001)
MNIST_MLP.accuracy_logreg( train_set[0].T, y_train_prob)
MNIST_MLP.train_model(150000)
MNIST_MLP.accuracy_logreg( train_set[0].T, y_train_prob)
MNIST_MLP.accuracy_logreg( valid_set[0].T, y_valid_prob)
MNIST_MLP.accuracy_logreg( test_set[0].T, y_test_prob)
theano_ML.ipynb
ernestyalumni/MLgrabbag
mit
dd208193efbf1d152a6474cc4370abe0
Reading in markers, calculating decompressed length:
We use the (very awesome) itertools module to do the iterating and filtering for us. We use an iterator to go over the input values, so that we can use itertools functions such as takewhile, which selects characters as long as a condition is fulfilled. Roughly, takewhile behaves like this:

def takewhile(condition, data):
    filtered_data = []
    for item in data:
        if condition(item):
            filtered_data.append(item)
        else:
            break
    return filtered_data

We use takewhile to swallow characters until we reach a marker, and then to get the marker itself. Using regular expressions, we extract the two values A and B from the marker. Since we are using iterators, we need to skip the next A characters, which we do with a for loop. At the end, the answer is in the count variable.
from itertools import islice, takewhile
import re

numbers = re.compile(r'(\d+)')

def decompress(data_iterator):
    '''parses markers and returns the length of the decompressed data'''
    count = 0
    while True:
        # handle single tokens that decompress to length 1 until start of marker
        count += len(list(takewhile(lambda character: character != '(', data_iterator)))
        # extract marker
        marker = ''.join(takewhile(lambda character: character != ')', data_iterator))
        # extract A and B
        try:
            a, b = map(int, numbers.findall(marker))
        except ValueError:
            # EOF or no other markers present
            break
        # skip the next a characters
        for i in range(a):
            next(data_iterator)
        # increment count
        count += a * b
    return count

print(decompress(iter(data)))
2016/python3/Day09.ipynb
coolharsh55/advent-of-code
mit
ceedef374b4ed78981ab0da3dcff2264
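A quick check of the function above (my addition; the expected value follows from the example quoted below that (3x3)XYZ becomes XYZXYZXYZ):

print(decompress(iter('(3x3)XYZ')))  # expected: 9, i.e. len('XYZXYZXYZ')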
Part Two

Apparently, the file actually uses version two of the format. In version two, the only difference is that markers within decompressed data are decompressed. This, the documentation explains, provides much more substantial compression capabilities, allowing many-gigabyte files to be stored in only a few kilobytes. For example:

- (3x3)XYZ still becomes XYZXYZXYZ, as the decompressed section contains no markers.
- X(8x2)(3x3)ABCY becomes XABCABCABCABCABCABCY, because the decompressed data from the (8x2) marker is then further decompressed, thus triggering the (3x3) marker twice for a total of six ABC sequences.
- (27x12)(20x12)(13x14)(7x10)(1x12)A decompresses into a string of A repeated 241920 times.
- (25x3)(3x3)ABC(2x3)XY(5x2)PQRSTX(18x9)(3x2)TWO(5x7)SEVEN becomes 445 characters long.

Unfortunately, the computer you brought probably doesn't have enough memory to actually decompress the file; you'll have to come up with another way to get its decompressed length. What is the decompressed length of the file using this improved format?

Solution logic

In this part, we need to keep track of the markers within the data we skipped over in part one. As an assumption, we take the approach that no internal marker will extend past the limits of the external marker. If it does, we will need a different approach that scans the string over and over again. Instead, we use a recursive approach to parse the string marker by marker and return the correct length.

X(8x2)(3x3)ABCY
(8x2) --> 8 characters: (3x3)ABC, multiplied by 2
      --> 2 x decompressed (3x3)ABC
      --> 2 x 3 x ABC

For this, we extend the decompress function so that it returns the length of the string while recursively scanning any markers within it. This is the same function as in part one, except that it recursively checks for markers. The recursive part of this approach is to further decompress the string (or characters) that were skipped in the first part. For this, we use islice to extract the part of the string specified by the marker and recursively call the function on it to get its decompressed length.
def decompress(data_iterator):
    '''parses markers and returns the length of the (recursively) decompressed data'''
    count = 0
    while True:
        # handle all single characters
        count += len(list(takewhile(lambda character: character != '(', data_iterator)))
        # marker occurs here, extract marker
        marker = ''.join(takewhile(lambda character: character != ')', data_iterator))
        # extract A and B
        try:
            a, b = map(int, numbers.findall(marker))
        except ValueError:
            break
        count += b * decompress(islice(data_iterator, a))
    return count

print(decompress(iter(data)))
2016/python3/Day09.ipynb
coolharsh55/advent-of-code
mit
4cacb1b6d375483beb75c0078790a08b
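And a quick check against the examples quoted in the Part Two text above (my addition):

print(decompress(iter('(27x12)(20x12)(13x14)(7x10)(1x12)A')))  # expected: 241920
print(decompress(iter('(25x3)(3x3)ABC(2x3)XY(5x2)PQRSTX(18x9)(3x2)TWO(5x7)SEVEN')))  # expected: 445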