Dataset columns: markdown (string, 0-37k chars), code (string, 1-33.3k chars), path (string, 8-215 chars), repo_name (string, 6-77 chars), license (string, 15 classes), hash (string, 32 chars)
# Show image
plt.imshow(image_enhanced, cmap='gray'), plt.axis("off")
plt.show()
machine-learning/enhance_contrast_of_greyscale_image.ipynb
tpin3694/tpin3694.github.io
mit
9fb6183147ac1c2a021d4163167bd07f
If you don't have the image viewing tool ds9, you should install it - it's very useful astronomical software. You can download it (later!) from this webpage. We can also display the image in the notebook:
plt.imshow(viz.scale_image(im, scale='log', max_cut=40), cmap='gray', origin='lower');
plt.savefig("figures/cluster_image.png")
examples/XrayImage/FirstLook.ipynb
enoordeh/StatisticalMethods
gpl-2.0
84c281bcce17e5530375e9aedd7ba96b
Exercise: What is going on in this image? Make a list of everything that is interesting about this image with your neighbor, and we'll discuss the features you identify in about 5 minutes' time. Just to prove that images really are arrays of numbers:
im[350:359,350:359]
index = np.unravel_index(im.argmax(), im.shape)
print("image dimensions:",im.shape)
print("location of maximum pixel value:",index)
print("maximum pixel value: ",im[index])
examples/XrayImage/FirstLook.ipynb
enoordeh/StatisticalMethods
gpl-2.0
6efb34527a18f052c52735d66b16e4bb
A full adder has three single bit inputs, and returns the sum and the carry. The sum is the exclusive or of the 3 bits; the carry is 1 if any two of the input bits are 1. Here is a schematic of a full adder circuit (from logisim). <img src="images/full_adder_logisim.png" width="500"/> We start by defining a magma combinational function that implements a full adder. The full adder function takes three single bit inputs (of type m.Bit) and returns two single bit outputs as a tuple. The first element of the tuple is the sum, the second element is the carry. Note that the arguments and return values of the function have type annotations using Python 3's typing syntax. We compute the sum and carry using the standard Python bitwise operators &, |, and ^.
@m.circuit.combinational
def full_adder(A: m.Bit, B: m.Bit, C: m.Bit) -> (m.Bit, m.Bit):
    return A ^ B ^ C, A & B | B & C | C & A  # sum, carry
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
0a860b89f0a18f62ad0a3694943ebfa0
We can test our combinational function to verify that our implementation behaves as expected using fault. We'll use fault.PythonTester, which simulates the circuit using magma's Python simulator.
import fault

tester = fault.PythonTester(full_adder)
assert tester(1, 0, 0) == (1, 0), "Failed"
assert tester(0, 1, 0) == (1, 0), "Failed"
assert tester(1, 1, 0) == (0, 1), "Failed"
assert tester(1, 0, 1) == (0, 1), "Failed"
assert tester(1, 1, 1) == (1, 1), "Failed"
print("Success!")
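The assertions above cover five of the eight possible input combinations; a quick exhaustive check (a sketch, assuming the tester returns plain Python ints as above) compares every combination against ordinary integer addition:
import itertools

# For every 3-bit input combination, sum + 2*carry must equal the arithmetic sum of the bits.
for a, b, c in itertools.product((0, 1), repeat=3):
    s, carry = tester(a, b, c)
    assert s + 2 * carry == a + b + c, "Mismatch for inputs {}".format((a, b, c))
print("Exhaustive check passed!")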
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
d5c58bcb134d421ed7f3ba148a5f50a9
combinational functions are polymorphic over Python and magma types. If the function is called with magma values, it will produce a circuit instance, wire up the inputs, and return references to the outputs. Otherwise, it will invoke the function in Python. For example, we can use the Python function to verify the circuit simulation.
assert tester(1, 0, 0) == full_adder(1, 0, 0), "Failed"
assert tester(0, 1, 0) == full_adder(0, 1, 0), "Failed"
assert tester(1, 1, 0) == full_adder(1, 1, 0), "Failed"
assert tester(1, 0, 1) == full_adder(1, 0, 1), "Failed"
assert tester(1, 1, 1) == full_adder(1, 1, 1), "Failed"
print("Success!")
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
5abbd2d1c4e9925c840ae4eb50b70017
Circuits Now that we have an implementation of full_adder as a combinational function, we'll use it to construct a magma Circuit. A Circuit in magma corresponds to a module in verilog. This example shows using the combinational function inside a circuit definition, as opposed to using the Python implementation shown before.
class FullAdder(m.Circuit):
    io = m.IO(I0=m.In(m.Bit), I1=m.In(m.Bit), CIN=m.In(m.Bit), O=m.Out(m.Bit), COUT=m.Out(m.Bit))
    O, COUT = full_adder(io.I0, io.I1, io.CIN)
    io.O @= O
    io.COUT @= COUT
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
bb93b76721fde2fab3375397f012030d
First, notice that FullAdder is a subclass of Circuit. All magma circuits are classes in Python. Second, the function IO creates the interface to the circuit. The arguments to IO are keyword arguments. The key is the name of the argument in the circuit, and the value is its type. In this circuit, all the inputs and outputs have Magma type Bit. We also qualify each type as an input or an output using the functions In and Out. Note that when we call the Python function full_adder it is passed magma values, not standard Python values. In the previous cell, we tested full_adder with standard Python ints, while in this case the values passed to the Python full_adder function are magma values of type Bit. The Python bitwise operators for Magma types are overloaded to automatically create subcircuits to compute logical functions. full_adder returns two values. These values are assigned to the Python variables O and COUT. Remember that assigning to a Python variable sets the variable to refer to the object. magma values are Python objects, so assigning an object to a variable creates a reference to that magma value. To complete the definition of the circuit, O and COUT need to be wired to the outputs in the interface. The Python @= operator is overloaded to perform wiring. Let's inspect the circuit definition by printing the __repr__.
print(repr(FullAdder))
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
ccda67d47e3f10e4d7b7f64ba258200e
We see that it has created an instance of the full_adder combinational function and wired up the interface. We can also inspect the contents of the full_adder circuit definition. Notice that it has lowered the Python operators into a structural representation of the primitive logic operations.
print(repr(full_adder.circuit_definition))
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
1cf85f6cdedd92b1206e19adde819323
We can also inspect the code generated by the m.circuit.combinational decorator by looking in the .magma directory for a file named .magma/full_adder.py. When using m.circuit.combinational, magma will generate a file matching the name of the decorated function. You'll notice that the generated code introduces an extra temporary variable (this is an artifact of the SSA pass that magma runs to handle if/else statements).
with open(".magma/full_adder.py") as f: print(f.read())
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
e4ab494e3c05c22b261d43ce3b5edb1c
In the code above, a mux is imported and named phi. If the combinational circuit contains any if-then-else constructs, they will be transformed into muxes. Note also the m.wire function: m.wire(O0, io.O0) is equivalent to io.O0 @= O0. Staged testing with Fault fault is a Python package for testing magma circuits. By default, fault is quiet, so we begin by enabling logging using the built-in logging module.
import logging
logging.basicConfig(level=logging.INFO)

import fault
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
6f15c157abc0e441e495d796550c2e8e
Earlier in the notebook, we showed an example using fault.PythonTester to simulate a circuit. This uses an interactive programming model where test actions are immediately dispatched to the underlying simulator (which is why we can perform assertions on the simulation values in Python). fault also provides a staged metaprogramming environment built upon the Tester class. Using the staged environment means values are not returned immediately to Python. Instead, the Python test code records a sequence of actions that are compiled and run in a later stage. A Tester is instantiated with a magma circuit.
tester = fault.Tester(FullAdder)
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
9b48fac5d30bcbdcd583a4660c2228eb
An instance of a Tester has an attribute .circuit that enables the user to record test actions. For example, inputs to a circuit can be poked by setting the attribute corresponding to the input port name.
tester.circuit.I0 = 1
tester.circuit.I1 = 1
tester.circuit.CIN = 1
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
a7ab59bb6d679a36d679401dc099a4ae
fault's default Tester provides the semantics of a cycle-accurate simulator, so, unlike verilog, pokes do not create events that trigger computation. Instead, these poke values are staged, and the propagation of their effect occurs when the user calls the eval action.
tester.eval()
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
d2277ba3c1e7350029da9ea235c5f9c0
To assert that the output of the circuit is equal to a value, we use the expect method that is defined on the attributes corresponding to the circuit output ports.
tester.circuit.O.expect(1)
tester.circuit.COUT.expect(1)
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
7fad5b10ee57bd2f62b7f49cd0bafdb0
Because fault is a staged programming environment, the above actions are not executed until we have advanced to the next stage. In the first stage, the user records test actions (e.g. poke, eval, expect). In the second stage, the test is compiled and run using a target runtime. Here are examples of running the test using magma's Python simulator, the coreir C++ simulator, and verilator.
# compile_and_run throws an exception if the test fails
tester.compile_and_run("verilator")
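Only the verilator target is shown above; the other backends mentioned in the text are selected the same way by changing the target name. The exact set of supported target strings depends on the installed fault version, so treat the names below as assumptions rather than a definitive list.
# Hypothetical alternative targets; names depend on the fault version installed.
# tester.compile_and_run("python")    # magma's Python simulator
# tester.compile_and_run("coreir")    # CoreIR C++ simulator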
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
5b58742b8b7dbe0f74998609455d69bf
The tester also provides the same convenient __call__ interface we saw before.
O, COUT = tester(1, 0, 0)
tester.expect(O, 1)
tester.expect(COUT, 0)
tester.compile_and_run("verilator")
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
1f9225b0fa7784dc52538dcb35dfa25e
Generate Verilog Magma's default compiler will generate verilog using CoreIR
m.compile("build/FullAdder", FullAdder, inline=True) %cat build/FullAdder.v
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
92f7a3a612a01e67c1220c59aabe7b32
Generate CoreIR We can also inspect the intermediate CoreIR used in the generation process.
%cat build/FullAdder.json
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
45c005c815977b64a6fe65dfe9aa8b46
Here's an example of running a CoreIR pass on the intermediate representation.
!coreir -i build/FullAdder.json -p instancecount
notebooks/tutorial/coreir/FullAdder.ipynb
phanrahan/magmathon
mit
a37c9fa639a0422bce1443ac798ef2cf
As we increase the threshold, we make fewer FP errors and more FN errors, so one of the curves rises while the other falls. From such a plot you can pick an optimal threshold value at which precision and recall are both acceptable. If no such threshold exists, a different algorithm needs to be trained. Note that what counts as acceptable precision and recall is determined by the subject area. For example, in the task of determining whether a patient has a certain disease (0 - healthy, 1 - sick), false negative errors are avoided, typically by requiring a recall of about 0.9. It is acceptable to tell a person they are sick and discover the mistake during further diagnostics; it is much worse to miss the presence of the disease. <font color="green" size=5>Programming assignment: problem 1. </font> Fix the threshold T = 0.65; from the plots you can roughly read off the metrics for the three chosen pairs of vectors (actual, predicted). Compute the exact precision and recall for these three pairs of vectors. Write the 6 resulting numbers to a text file in the following order: precision_1 recall_1 precision_10 recall_10 precision_11 recall_11 The digits XXX correspond to the same digits in the names of the variables actual_XXX and predicted_XXX. Pass the answer to the write_answer_1 function. Upload the resulting file in the form.
############### Programming assignment: problem 1 ###############
T = 0.65
for _actual, _predicted in zip([actual_1, actual_10, actual_11],
                               [predicted_1, predicted_10, predicted_11]):
    print('Precision: %s' % precision_score(_actual, _predicted > T))
    print('Recall: %s\n' % recall_score(_actual, _predicted > T))

def write_answer_1(precision_1, recall_1, precision_10, recall_10, precision_11, recall_11):
    answers = [precision_1, recall_1, precision_10, recall_10, precision_11, recall_11]
    with open("pa_metrics_problem1.txt", "w") as fout:
        fout.write(" ".join([str(num) for num in answers]))

write_answer_1(1.0, 0.466666666667, 1.0, 0.133333333333, 0.647058823529, 0.846153846154)
Coursera/Machine-learning-data-analysis/Course 2/Week_02/MetricsPA.ipynb
ALEXKIRNAS/DataScience
mit
d5a70f645f8b21e88292210a67510200
The F1 metric in the last two cases, where one of the paired metrics equals 1, is considerably lower than in the first, balanced case. <font color="green" size=5>Programming assignment: problem 2. </font> Precision and recall depend both on the character of the probability vector and on the chosen threshold. For the same pairs (actual, predicted) as in the previous problem, find the optimal thresholds that maximize F1_score. We only consider thresholds of the form T = 0.1 * k, where k is an integer; accordingly, you need to find three values of k. If f1 is maximized at several values of k, report the smallest one. Write down the found values of k in the following order: k_1, k_10, k_11 The digits XXX correspond to the same digits in the names of the variables actual_XXX and predicted_XXX. Pass the answer to the write_answer_2 function. Upload the file in the form. If you store the list of the three found k values, in the same order, in the variable ks, the code below can be used to visualize the found thresholds:
############### Programming assignment: problem 2 ###############
ks = np.zeros(3)
idexes = np.empty(3)
for threshold in np.arange(11):
    T = threshold * 0.1
    for actual, predicted, idx in zip([actual_1, actual_10, actual_11],
                                      [predicted_1 > T, predicted_10 > T, predicted_11 > T],
                                      [0, 1, 2]):
        score = f1_score(actual, predicted)
        if score > ks[idx]:
            ks[idx] = score
            idexes[idx] = threshold

print(ks)
print(idexes)
ks = idexes

many_scatters([actual_1, actual_10, actual_11],
              [predicted_1, predicted_10, predicted_11],
              np.array(ks)*0.1,
              ["Typical", "Avoids FP", "Avoids FN"], (1, 3))

def write_answer_2(k_1, k_10, k_11):
    answers = [k_1, k_10, k_11]
    with open("pa_metrics_problem2.txt", "w") as fout:
        fout.write(" ".join([str(num) for num in answers]))

write_answer_2(5, 3, 6)
Coursera/Machine-learning-data-analysis/Course 2/Week_02/MetricsPA.ipynb
ALEXKIRNAS/DataScience
mit
c070a7b7fa0136c2e06b713eb87ab99d
Like the previous metrics, log_loss distinguishes well between the ideal, typical and bad cases. Note, however, that the value is rather hard to interpret: the metric never reaches zero and has no upper bound. So even for an ideal algorithm, looking at a single log_loss value alone, it is impossible to tell that it is ideal. On the other hand, this metric does distinguish a cautious algorithm from a risk-taking one. As we saw above, in the Typical careful and Typical risky cases the number of errors after binarization at T = 0.5 is roughly the same, and in the Ideal cases there are no errors at all. However, for the misclassified objects in the Typical case the risky algorithm pays with a much larger increase in log_loss than the cautious one. Conversely, for correctly predicted classes the risky ideal algorithm gets a smaller log_loss than the cautious ideal algorithm. Thus, log_loss is sensitive both to probabilities close to 0 and 1 and to probabilities close to 0.5. Ordinary log_loss cannot distinguish FP errors from FN errors. However, it is easy to generalize log_loss to the case where FP or FN should be penalized more: it suffices to weight the two likelihood terms with a convex combination of coefficients (i.e., two nonnegative coefficients summing to one). For example, let's penalize false positives: $\text{weighted\_log\_loss}(actual, predicted) = -\frac{1}{n} \sum_{i=1}^n \big(0.3 \cdot actual_i \cdot \log(predicted_i) + 0.7 \cdot (1 - actual_i) \cdot \log(1 - predicted_i)\big)$ If the algorithm wrongly predicts a high probability for the first class, i.e., the object actually belongs to class 0, then the first term in the brackets is zero and the second is counted with the larger weight. <font color="green" size=5>Programming assignment: problem 3. </font> Write a function that takes the vectors actual and predicted as input and returns the modified log-loss computed by the formula above. Compute its value (call it wll) on the same vectors on which we computed the ordinary log_loss, and write the results to a file in the following order: wll_0 wll_1 wll_2 wll_0r wll_1r wll_10 wll_11 The digits XXX correspond to the same digits in the names of the variables actual_XXX and predicted_XXX. Pass the answer to the write_answer_3 function. Upload the file in the form.
############### Programming assignment: problem 3 ##############
ans = []

def modified_log(actual, predicted):
    return - np.sum(0.3 * actual * np.log(predicted) +
                    0.7 * (1 - actual) * np.log(1 - predicted)) / len(actual)

for _actual, _predicted in zip([actual_0, actual_1, actual_2, actual_0r, actual_1r, actual_10, actual_11],
                               [predicted_0, predicted_1, predicted_2, predicted_0r, predicted_1r, predicted_10, predicted_11]):
    print(modified_log(_actual, _predicted), log_loss(_actual, _predicted))
    ans.append(modified_log(_actual, _predicted))

def write_answer_3(ans):
    answers = ans
    with open("pa_metrics_problem3.txt", "w") as fout:
        fout.write(" ".join([str(num) for num in answers]))

write_answer_3(ans)
Coursera/Machine-learning-data-analysis/Course 2/Week_02/MetricsPA.ipynb
ALEXKIRNAS/DataScience
mit
d76a451e500b5ef7a9bf4f18441208e0
The more objects in the sample, the smoother the curve looks (although it is actually still a step function). As expected, the curves of all ideal algorithms pass through the upper-left corner. The first plot also shows a typical ROC curve (in practice they usually do not reach the "ideal" corner). The AUC of the risk-taking algorithm is considerably smaller than that of the cautious one, although the cautious and risky ideal algorithms are indistinguishable by ROC or AUC. So there is no point in trying to widen the gap between the probability intervals of the classes. The curve is skewed when the algorithm tends to make FP or FN errors. However, this cannot be detected from the AUC value alone (the curves may be symmetric with respect to the diagonal (0, 1)-(1, 0)). Once the curve is built, it is convenient to choose the binarization threshold that achieves the desired compromise between FP and FN. The threshold corresponds to a point on the curve. If we want to avoid FP errors, we should pick a point on the left side of the square (as high as possible); to avoid FN, a point on the top side of the square (as far left as possible). All intermediate points correspond to different proportions of FP and FN. <font color="green" size=5>Programming assignment: problem 4. </font> On each curve, find the point closest to the upper-left corner (closest in the sense of the ordinary Euclidean distance); this point corresponds to some binarization threshold. Write the thresholds to the output file in the following order: T_0 T_1 T_2 T_0r T_1r T_10 T_11 The digits XXX correspond to the same digits in the names of the variables actual_XXX and predicted_XXX. If several thresholds minimize the distance, choose the largest one. Pass the answer to the write_answer_4 function. Upload the file in the form. Note: the roc_curve function returns three values: FPR (the array of abscissas of the ROC-curve points), TPR (the array of ordinates of the ROC-curve points) and thresholds (the array of thresholds corresponding to the points). We recommend drawing the found point on the plot with plt.scatter (a sketch follows the code cell below).
############### Programming assignment: problem 4 ###############
ans = []
for actual, predicted in zip([actual_0, actual_1, actual_2, actual_0r, actual_1r, actual_10, actual_11],
                             [predicted_0, predicted_1, predicted_2, predicted_0r, predicted_1r, predicted_10, predicted_11]):
    fpr, tpr, thr = roc_curve(actual, predicted)
    dist = np.sqrt(np.square(-fpr) + np.square(1 - tpr))
    idx = np.argmin(dist)
    print(thr[idx])
    ans.append(thr[idx])

def write_answer_4(ans):
    answers = ans
    with open("pa_metrics_problem4.txt", "w") as fout:
        fout.write(" ".join([str(num) for num in answers]))

write_answer_4(ans)
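As suggested above, the chosen point can be marked on the ROC curve with plt.scatter. A minimal sketch for the first pair of vectors (assuming roc_curve, numpy, and matplotlib are already imported and actual_1/predicted_1 come from earlier cells):
# Mark the threshold closest to the top-left corner on one ROC curve
fpr, tpr, thr = roc_curve(actual_1, predicted_1)
idx = np.argmin(np.sqrt(np.square(fpr) + np.square(1 - tpr)))
plt.plot(fpr, tpr)
plt.scatter(fpr[idx], tpr[idx], color='red', label='T = %.2f' % thr[idx])
plt.xlabel('FPR')
plt.ylabel('TPR')
plt.legend()
plt.show()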
Coursera/Machine-learning-data-analysis/Course 2/Week_02/MetricsPA.ipynb
ALEXKIRNAS/DataScience
mit
82481a5cc54526441ba3073d702bec1e
Exercise 1. a. $\Omega$ will be all the possible combinations for 150 objects each taking one of two different values. For example (0, 0, ..., 0), (1, 0, ..., 0), (0, 1, ..., 0), ... (1, 1, ..., 0), ... (1, 1, ..., 1). This sample space has size $2^{150}$. The random variable $X(\omega)$ will be the number of defective objects in the sample $\omega$. We can also define $Y(\omega) = 150 - X(\omega)$, which counts the number of checked items. b. The binomial distribution gives the probability of the number of "successes" in a sequence of random and independent boolean trials. This is the case when counting the number of broken objects in a group of 150, each with a probability of 4% of being broken. c. To compute the probability that at most 4 objects are broken we need to sum the probabilities that $k$ objects are broken with $k \in [0, 4]$: $P(X < 5) = \sum_{k=0}^{4} P(X=k) = \sum_{k=0}^{4} {150\choose k}p^k(1-p)^{150-k}$ The probability is 28%.
p = 1. / 365
1 - np.sum(binomial(p, 23 * (22) / 2, 0))
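The code cell above appears to address a different sub-question (a birthday-type calculation using a helper called binomial); the 28% figure from part (c) can be checked directly with the binomial CDF. A sketch using scipy.stats (not part of the original notebook):
from scipy.stats import binom

# P(X <= 4) for X ~ Binomial(n=150, p=0.04)
print(binom.cdf(4, 150, 0.04))  # ~0.28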
UQ/assignment_3/Assignment 3.ipynb
LorenzoBi/courses
mit
53a412a477b85d65a63496015f03889d
Quick Start
# import data
data_fc, data_p_value = kinact.get_example_data()

# import prior knowledge
adj_matrix = kinact.get_kinase_targets()

print data_fc.head()
print
print data_p_value.head()

# Perform ksea using the Mean method
score, p_value = kinact.ksea.ksea_mean(data_fc=data_fc['5min'].dropna(),
                                       interactions=adj_matrix,
                                       mP=data_fc['5min'].values.mean(),
                                       delta=data_fc['5min'].values.std())
print pd.DataFrame({'score': score, 'p_value': p_value}).head()

# Perform ksea using the Alternative Mean method
score, p_value = kinact.ksea.ksea_mean_alt(data_fc=data_fc['5min'].dropna(),
                                           p_values=data_p_value['5min'],
                                           interactions=adj_matrix,
                                           mP=data_fc['5min'].values.mean(),
                                           delta=data_fc['5min'].values.std())
print pd.DataFrame({'score': score, 'p_value': p_value}).head()

# Perform ksea using the Delta method
score, p_value = kinact.ksea.ksea_delta(data_fc=data_fc['5min'].dropna(),
                                        p_values=data_p_value['5min'],
                                        interactions=adj_matrix)
print pd.DataFrame({'score': score, 'p_value': p_value}).head()
doc/KSEA_example.ipynb
saezlab/kinact
gpl-3.0
6872721622ac89fb0814c0e7e1e99eb1
1. Loading the data In order to perform the described kinase enrichment analysis, we load the data into a Pandas DataFrame. Here, we use the data from <em>de Graaf et al., 2014</em> to demonstrate KSEA. The data is available as supplemental material to the article online at http://mcponline.org/content/13/9/2426/suppl/DC1. The dataset of interest can be found in Supplemental Table 2. When downloaded from the internet, the dataset is provided as an Excel spreadsheet. For use in this script, it has to be saved as a csv-file using the 'Save As' function in Excel. In the accompanying github repository, we provide an already processed csv-file together with the code for KSEA.
# Read data
data_raw = pd.read_csv('../kinact/data/deGraaf_2014_jurkat.csv', sep=',', header=0)

# Filter out those p-sites that were matched ambiguously
data_reduced = data_raw[~data_raw['Proteins'].str.contains(';')]

# Create identifier for each phosphorylation site, e.g. P06239_S59 for the Serine 59 in the protein Lck
data_reduced.loc[:, 'ID'] = data_reduced['Proteins'] + '_' + data_reduced['Amino acid'] + \
                            data_reduced['Positions within proteins']
data_indexed = data_reduced.set_index('ID')

# Extract only relevant columns
data_relevant = data_indexed[[x for x in data_indexed if x.startswith('Average')]]

# Rename columns
data_relevant.columns = [x.split()[-1] for x in data_relevant]

# Convert abundances into fold changes compared to control (0 minutes after stimulation)
data_fc = data_relevant.sub(data_relevant['0min'], axis=0)
data_fc.drop('0min', axis=1, inplace=True)

# Also extract the p-values for the fold changes
data_p_value = data_indexed[[x for x in data_indexed if x.startswith('p value') and x.endswith('vs0min')]]
data_p_value.columns = [x.split('_')[-1].split('vs')[0] + 'min' for x in data_p_value]
data_p_value = data_p_value.astype('float')  # Excel saved the p-values as strings, not as floating point numbers

print data_fc.head()
print data_p_value.head()
doc/KSEA_example.ipynb
saezlab/kinact
gpl-3.0
f373447a48140a8bef5929fbceab3da3
2. Import prior-knowledge kinase-substrate relationships from PhosphoSitePlus In the following example, we use the data from the PhosphoSitePlus database, which can be downloaded here: http://www.phosphosite.org/staticDownloads.action. Note that the downloaded file contains a disclaimer at the top, which has to be removed before the file can be used as described below.
# Read data
ks_rel = pd.read_csv('../kinact/data/PhosphoSitePlus.txt', sep='\t')
# The data from the PhosphoSitePlus database is not provided as a comma-separated value file (csv);
# instead, a tab = \t delimits the individual cells

# Restrict the data to interactions in the organism of interest
ks_rel_human = ks_rel.loc[(ks_rel['KIN_ORGANISM'] == 'human') & (ks_rel['SUB_ORGANISM'] == 'human')]

# Create p-site identifier of the same format as before
ks_rel_human.loc[:, 'psite'] = ks_rel_human['SUB_ACC_ID'] + '_' + ks_rel_human['SUB_MOD_RSD']

# Create adjacency matrix (links between kinases (columns) and p-sites (rows) are indicated with a 1, 0 otherwise)
ks_rel_human.loc[:, 'value'] = 1
adj_matrix = pd.pivot_table(ks_rel_human, values='value', index='psite', columns='GENE', fill_value=0)

print adj_matrix.head()
print adj_matrix.sum(axis=0).sort_values(ascending=False).head()
doc/KSEA_example.ipynb
saezlab/kinact
gpl-3.0
67745c12e5c9e231120efae0b274e343
3. KSEA 3.1 Quick start for KSEA Together with this tutorial, we provide an implementation of KSEA as custom Python functions. As an example, the use of these functions for the dataset by de Graaf et al. could look like this.
score, p_value = kinact.ksea.ksea_delta(data_fc=data_fc['5min'],
                                        p_values=data_p_value['5min'],
                                        interactions=adj_matrix,
                                        )
print pd.DataFrame({'score': score, 'p_value': p_value}).head()

# Calculate the KSEA scores for all data with the ksea_mean method
activity_mean = pd.DataFrame({c: kinact.ksea.ksea_mean(data_fc=data_fc[c],
                                                       interactions=adj_matrix,
                                                       mP=data_fc.values.mean(),
                                                       delta=data_fc.values.std())[0]
                              for c in data_fc})
activity_mean = activity_mean[['5min', '10min', '20min', '30min', '60min']]
print activity_mean.head()

# Calculate the KSEA scores for all data with the ksea_mean method, using the median
activity_median = pd.DataFrame({c: kinact.ksea.ksea_mean(data_fc=data_fc[c],
                                                         interactions=adj_matrix,
                                                         mP=data_fc.values.mean(),
                                                         delta=data_fc.values.std(),
                                                         median=True)[0]
                                for c in data_fc})
activity_median = activity_median[['5min', '10min', '20min', '30min', '60min']]
print activity_median.head()

# Calculate the KSEA scores for all data with the ksea_mean_alt method
activity_mean_alt = pd.DataFrame({c: kinact.ksea.ksea_mean_alt(data_fc=data_fc[c],
                                                               p_values=data_p_value[c],
                                                               interactions=adj_matrix,
                                                               mP=data_fc.values.mean(),
                                                               delta=data_fc.values.std())[0]
                                  for c in data_fc})
activity_mean_alt = activity_mean_alt[['5min', '10min', '20min', '30min', '60min']]
print activity_mean_alt.head()

# Calculate the KSEA scores for all data with the ksea_mean_alt method, using the median
activity_median_alt = pd.DataFrame({c: kinact.ksea.ksea_mean_alt(data_fc=data_fc[c],
                                                                 p_values=data_p_value[c],
                                                                 interactions=adj_matrix,
                                                                 mP=data_fc.values.mean(),
                                                                 delta=data_fc.values.std(),
                                                                 median=True)[0]
                                    for c in data_fc})
activity_median_alt = activity_median_alt[['5min', '10min', '20min', '30min', '60min']]
print activity_median_alt.head()

# Calculate the KSEA scores for all data with the ksea_delta method
activity_delta = pd.DataFrame({c: kinact.ksea.ksea_delta(data_fc=data_fc[c],
                                                         p_values=data_p_value[c],
                                                         interactions=adj_matrix)[0]
                               for c in data_fc})
activity_delta = activity_delta[['5min', '10min', '20min', '30min', '60min']]
print activity_delta.head()

sns.set(context='poster', style='ticks')
sns.heatmap(activity_mean_alt,
            cmap=sns.blend_palette([sns.xkcd_rgb['amber'],
                                    sns.xkcd_rgb['almost black'],
                                    sns.xkcd_rgb['bright blue']], as_cmap=True))
plt.show()
doc/KSEA_example.ipynb
saezlab/kinact
gpl-3.0
b9926c753f868cd11bf910a51b0aa5d7
In de Graaf et al., they associated (amongst others) the Casein kinase II alpha (CSNK2A1) with higher activity after prolonged stimulation with prostaglandin E2. Here, we plot the activity scores of CSNK2A1 for all three methods of KSEA, which are in good agreement.
kinase = 'CSNK2A1'

df_plot = pd.DataFrame({'mean': activity_mean.loc[kinase],
                        'delta': activity_delta.loc[kinase],
                        'mean_alt': activity_mean_alt.loc[kinase]})
df_plot['time [min]'] = [5, 10, 20, 30, 60]
df_plot = pd.melt(df_plot, id_vars='time [min]', var_name='method', value_name='activity score')

g = sns.FacetGrid(df_plot, col='method', sharey=False, size=3, aspect=1)
g = g.map(sns.pointplot, 'time [min]', 'activity score')
plt.subplots_adjust(top=.82)
plt.show()
doc/KSEA_example.ipynb
saezlab/kinact
gpl-3.0
00c39dd122a72e37c1681c17b04d2c2c
3.2. KSEA in detail In the following, we show in detail the computations that are carried out inside the provided functions. Let us concentrate on a single condition (60 minutes after stimulation with prostaglandin E2) and a single kinase (CDK1).
data_condition = data_fc['60min'].copy()
p_values = data_p_value['60min']

kinase = 'CDK1'

substrates = adj_matrix[kinase].replace(0, np.nan).dropna().index
detected_p_sites = data_fc.index
intersect = list(set(substrates).intersection(detected_p_sites))
doc/KSEA_example.ipynb
saezlab/kinact
gpl-3.0
c73a1171d7d35fcd666d404dc1d67fed
3.2.1. Mean method
mS = data_condition.loc[intersect].mean()
mP = data_fc.values.mean()
m = len(intersect)
delta = data_fc.values.std()

z_score = (mS - mP) * np.sqrt(m) * 1/delta

from scipy.stats import norm
p_value_mean = norm.sf(abs(z_score))

print mS, p_value_mean
doc/KSEA_example.ipynb
saezlab/kinact
gpl-3.0
33bf4e05a8e48d64f8f2c23831067f57
3.2.2. Alternative Mean method
cut_off = -np.log10(0.05)
set_alt = data_condition.loc[intersect].where(p_values.loc[intersect] > cut_off).dropna()
mS_alt = set_alt.mean()
z_score_alt = (mS_alt - mP) * np.sqrt(len(set_alt)) * 1/delta
p_value_mean_alt = norm.sf(abs(z_score_alt))

print mS_alt, p_value_mean_alt
doc/KSEA_example.ipynb
saezlab/kinact
gpl-3.0
b2511e3d0da66418f478019c9ff923f4
3.2.3. Delta Method
cut_off = -np.log10(0.05)

score_delta = len(data_condition.loc[intersect].where((data_condition.loc[intersect] > 0) &
                                                      (p_values.loc[intersect] > cut_off)).dropna()) -\
              len(data_condition.loc[intersect].where((data_condition.loc[intersect] < 0) &
                                                      (p_values.loc[intersect] > cut_off)).dropna())

M = len(data_condition)
n = len(intersect)
N = len(np.where(p_values.loc[adj_matrix.index.tolist()] > cut_off)[0])

from scipy.stats import hypergeom
hypergeom_dist = hypergeom(M, n, N)
p_value_delta = hypergeom_dist.pmf(len(p_values.loc[intersect].where(p_values.loc[intersect] > cut_off).dropna()))

print score_delta, p_value_delta
doc/KSEA_example.ipynb
saezlab/kinact
gpl-3.0
7ddf2fd32123c80240305caa032ce010
1. Basic Linear Model
LM = keras.models.Sequential([Dense(Num_Classes, input_shape=(784,))])
LM.compile(optimizer=SGD(lr=0.01), loss='mse')
# LM.compile(optimizer=RMSprop(lr=0.01), loss='mse')
FAI_old/lesson2/L2HW.ipynb
WNoxchi/Kaukasos
mit
f1dd7d733493c468144e15f01f50f9bd
2. 1-Layer Neural Network 3. Finetuned VGG16
import os, sys
sys.path.insert(1, os.path.join('../utils/'))
import Vgg16
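The heading above also announces a 1-layer neural network, but its definition is not part of this excerpt. A minimal sketch in the same old-Keras style (Dense, SGD, and Num_Classes are assumed to be available as in the linear-model cell; the hidden-layer size is an arbitrary choice):
# Hypothetical single-hidden-layer network, mirroring the linear model above
NN = keras.models.Sequential([
    Dense(100, activation='relu', input_shape=(784,)),
    Dense(Num_Classes, activation='softmax')
])
NN.compile(optimizer=SGD(lr=0.01), loss='categorical_crossentropy', metrics=['accuracy'])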
FAI_old/lesson2/L2HW.ipynb
WNoxchi/Kaukasos
mit
02e276e2c4444ceeef9f60c2f61a461c
Setting up the notebook's environment Install AI Platform Pipelines client library For AI Platform Pipelines (Unified), which is in the Experimental stage, you need to download and install the AI Platform client library on top of the KFP and TFX SDKs that were installed as part of the initial environment setup.
AIP_CLIENT_WHEEL = "aiplatform_pipelines_client-0.1.0.caip20201123-py3-none-any.whl"
AIP_CLIENT_WHEEL_GCS_LOCATION = (
    f"gs://cloud-aiplatform-pipelines/releases/20201123/{AIP_CLIENT_WHEEL}"
)

!gsutil cp {AIP_CLIENT_WHEEL_GCS_LOCATION} {AIP_CLIENT_WHEEL}
%pip install {AIP_CLIENT_WHEEL}
notebooks/community/analytics-componetized-patterns/retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
4f7a1ae29032326272ffbd30789a3f47
Import notebook dependencies
import logging

import tensorflow as tf
import tfx
from aiplatform.pipelines import client
from tfx.orchestration.beam.beam_dag_runner import BeamDagRunner

print("TFX Version: ", tfx.__version__)
notebooks/community/analytics-componetized-patterns/retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
e8f1603c188ad81994e00d0ee3c4f77e
Configure GCP environment If you're on AI Platform Notebooks, authenticate with Google Cloud before running the next section by running gcloud auth login in the Terminal window (which you can open via File > New in the menu). You only need to do this once per notebook instance. Set the following constants to the values reflecting your environment: PROJECT_ID - your GCP project ID PROJECT_NUMBER - your GCP project number BUCKET_NAME - a name of the GCS bucket that will be used to host artifacts created by the pipeline PIPELINE_NAME_SUFFIX - a suffix appended to the standard pipeline name. You can change it to differentiate between pipelines from different users in a classroom environment API_KEY - a GCP API key VPC_NAME - a name of the GCP VPC to use for the index deployments. REGION - a compute region. Don't change the default - us-central1 - while the ANN Service is in the experimental stage
PROJECT_ID = "jk-mlops-dev" # <---CHANGE THIS PROJECT_NUMBER = "895222332033" # <---CHANGE THIS API_KEY = "AIzaSyBS_RiaK3liaVthTUD91XuPDKIbiwDFlV8" # <---CHANGE THIS USER = "user" # <---CHANGE THIS BUCKET_NAME = "jk-ann-staging" # <---CHANGE THIS VPC_NAME = "default" # <---CHANGE THIS IF USING A DIFFERENT VPC REGION = "us-central1" PIPELINE_NAME = "ann-pipeline-{}".format(USER) PIPELINE_ROOT = "gs://{}/pipeline_root/{}".format(BUCKET_NAME, PIPELINE_NAME) PATH=%env PATH %env PATH={PATH}:/home/jupyter/.local/bin print("PIPELINE_ROOT: {}".format(PIPELINE_ROOT))
notebooks/community/analytics-componetized-patterns/retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
ae65fc2df47da9b396d64e201bf3e1ad
Defining custom components In this section of the notebook you define a set of custom TFX components that encapsulate BQ, BQML and ANN Service calls. The components are TFX Custom Python function components. Each component is created as a separate Python module. You also create a couple of helper modules that encapsulate Python functions and classes used across the custom components. Remove files created in the previous executions of the notebook
component_folder = "bq_components" if tf.io.gfile.exists(component_folder): print("Removing older file") tf.io.gfile.rmtree(component_folder) print("Creating component folder") tf.io.gfile.mkdir(component_folder) %cd {component_folder}
notebooks/community/analytics-componetized-patterns/retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
b51d870f3c0d84bddacdc02098d58c7c
Creating a TFX pipeline The pipeline automates the process of preparing item embeddings (in BigQuery), training a Matrix Factorization model (in BQML), and creating and deploying an ANN Service index. The pipeline has a simple sequential flow. The pipeline accepts a set of runtime parameters that define GCP environment settings and embeddings and index assembly parameters.
import os

from compute_pmi import compute_pmi
from create_index import create_index
from deploy_index import deploy_index
from export_embeddings import export_embeddings
from extract_embeddings import extract_embeddings
from tfx.orchestration.kubeflow.v2 import kubeflow_v2_dag_runner

# Only required for local run.
from tfx.orchestration.metadata import sqlite_metadata_connection_config
from tfx.orchestration.pipeline import Pipeline
from train_item_matching import train_item_matching_model


def ann_pipeline(
    pipeline_name,
    pipeline_root,
    metadata_connection_config,
    project_id,
    project_number,
    region,
    vpc_name,
    bq_dataset_name,
    min_item_frequency,
    max_group_size,
    dimensions,
    embeddings_gcs_location,
    index_display_name,
    deployed_index_id_prefix,
) -> Pipeline:
    """Implements the SCANN training pipeline."""

    pmi_computer = compute_pmi(
        project_id=project_id,
        bq_dataset=bq_dataset_name,
        min_item_frequency=min_item_frequency,
        max_group_size=max_group_size,
    )

    bqml_trainer = train_item_matching_model(
        project_id=project_id,
        bq_dataset=bq_dataset_name,
        item_cooc=pmi_computer.outputs.item_cooc,
        dimensions=dimensions,
    )

    embeddings_extractor = extract_embeddings(
        project_id=project_id,
        bq_dataset=bq_dataset_name,
        bq_model=bqml_trainer.outputs.bq_model,
    )

    embeddings_exporter = export_embeddings(
        project_id=project_id,
        gcs_location=embeddings_gcs_location,
        item_embeddings_bq=embeddings_extractor.outputs.item_embeddings,
    )

    index_constructor = create_index(
        project_id=project_id,
        project_number=project_number,
        region=region,
        display_name=index_display_name,
        dimensions=dimensions,
        item_embeddings=embeddings_exporter.outputs.item_embeddings_gcs,
    )

    index_deployer = deploy_index(
        project_id=project_id,
        project_number=project_number,
        region=region,
        vpc_name=vpc_name,
        deployed_index_id_prefix=deployed_index_id_prefix,
        ann_index=index_constructor.outputs.ann_index,
    )

    components = [
        pmi_computer,
        bqml_trainer,
        embeddings_extractor,
        embeddings_exporter,
        index_constructor,
        index_deployer,
    ]

    return Pipeline(
        pipeline_name=pipeline_name,
        pipeline_root=pipeline_root,
        # Only needed for local runs.
        metadata_connection_config=metadata_connection_config,
        components=components,
    )
notebooks/community/analytics-componetized-patterns/retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
c2ea2279300a0067211b3d8f2425a9da
Testing the pipeline locally You will first run the pipeline locally using the Beam runner. Clean the metadata and artifacts from the previous runs
pipeline_root = f"/tmp/{PIPELINE_NAME}" local_mlmd_folder = "/tmp/mlmd" if tf.io.gfile.exists(pipeline_root): print("Removing previous artifacts...") tf.io.gfile.rmtree(pipeline_root) if tf.io.gfile.exists(local_mlmd_folder): print("Removing local mlmd SQLite...") tf.io.gfile.rmtree(local_mlmd_folder) print("Creating mlmd directory: ", local_mlmd_folder) tf.io.gfile.mkdir(local_mlmd_folder) print("Creating pipeline root folder: ", pipeline_root) tf.io.gfile.mkdir(pipeline_root)
notebooks/community/analytics-componetized-patterns/retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
ca9d2323fef55547307e60c9403d8d8f
Set pipeline parameters and create the pipeline
bq_dataset_name = "song_embeddings" index_display_name = "Song embeddings" deployed_index_id_prefix = "deployed_song_embeddings_" min_item_frequency = 15 max_group_size = 100 dimensions = 50 embeddings_gcs_location = f"gs://{BUCKET_NAME}/embeddings" metadata_connection_config = sqlite_metadata_connection_config( os.path.join(local_mlmd_folder, "metadata.sqlite") ) pipeline = ann_pipeline( pipeline_name=PIPELINE_NAME, pipeline_root=pipeline_root, metadata_connection_config=metadata_connection_config, project_id=PROJECT_ID, project_number=PROJECT_NUMBER, region=REGION, vpc_name=VPC_NAME, bq_dataset_name=bq_dataset_name, index_display_name=index_display_name, deployed_index_id_prefix=deployed_index_id_prefix, min_item_frequency=min_item_frequency, max_group_size=max_group_size, dimensions=dimensions, embeddings_gcs_location=embeddings_gcs_location, )
notebooks/community/analytics-componetized-patterns/retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
12726c48f9889f1138646ebf82f10360
Inspect produced metadata During the execution of the pipeline, the inputs and outputs of each component have been tracked in ML Metadata.
from ml_metadata import metadata_store
from ml_metadata.proto import metadata_store_pb2

connection_config = metadata_store_pb2.ConnectionConfig()
connection_config.sqlite.filename_uri = os.path.join(
    local_mlmd_folder, "metadata.sqlite"
)
connection_config.sqlite.connection_mode = 3  # READWRITE_OPENCREATE
store = metadata_store.MetadataStore(connection_config)

store.get_artifacts()
notebooks/community/analytics-componetized-patterns/retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
b2865f65dee37537cc818151581304a4
Set the parameters for AIPP execution and create the pipeline
metadata_connection_config = None
pipeline_root = PIPELINE_ROOT

pipeline = ann_pipeline(
    pipeline_name=PIPELINE_NAME,
    pipeline_root=pipeline_root,
    metadata_connection_config=metadata_connection_config,
    project_id=PROJECT_ID,
    project_number=PROJECT_NUMBER,
    region=REGION,
    vpc_name=VPC_NAME,
    bq_dataset_name=bq_dataset_name,
    index_display_name=index_display_name,
    deployed_index_id_prefix=deployed_index_id_prefix,
    min_item_frequency=min_item_frequency,
    max_group_size=max_group_size,
    dimensions=dimensions,
    embeddings_gcs_location=embeddings_gcs_location,
)
notebooks/community/analytics-componetized-patterns/retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
19ef3e26a5a9065206de1825b13774d1
Compile the pipeline
config = kubeflow_v2_dag_runner.KubeflowV2DagRunnerConfig(
    project_id=PROJECT_ID,
    display_name=PIPELINE_NAME,
    default_image="gcr.io/{}/caip-tfx-custom:{}".format(PROJECT_ID, USER),
)

runner = kubeflow_v2_dag_runner.KubeflowV2DagRunner(
    config=config, output_filename="pipeline.json"
)

runner.compile(pipeline, write_out=True)
notebooks/community/analytics-componetized-patterns/retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
d7d02827a07a7fba1ad0ff49749e2d4b
Submit the pipeline run
aipp_client.create_run_from_job_spec("pipeline.json")
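Note that aipp_client is not constructed anywhere in this excerpt. With the experimental client library imported earlier, its creation presumably looks like the sketch below; the exact constructor arguments are an assumption and may differ between releases of the client.
# Hypothetical client construction; argument names may vary by release.
aipp_client = client.Client(project_id=PROJECT_ID, region=REGION, api_key=API_KEY)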
notebooks/community/analytics-componetized-patterns/retail/recommendation-system/bqml-scann/ann02_run_pipeline.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
e4faeb2c5ac5fa335990d3ec6385a0eb
creates in memory an object with the name "ObjectCreator".
%%HTML <p style="color:red;font-size: 150%;">This object (the class) is itself capable of creating objects (the instances), and this is why it's a class.</p>
content/posts/meditations/Python_objects.ipynb
dm-wyncode/zipped-code
mit
1754e4631ba44103ccf696b324556eb3
But still, it's an object, and therefore: you can assign it to a variable
object_creator_class = ObjectCreator
print(object_creator_class)
content/posts/meditations/Python_objects.ipynb
dm-wyncode/zipped-code
mit
b3929169c4654f3e5fc072999b2bd1f7
you can copy it
from copy import copy

ObjectCreatorCopy = copy(ObjectCreator)
print(ObjectCreatorCopy)
print("copy ObjectCreatorCopy is not ObjectCreator: ", ObjectCreatorCopy is not ObjectCreator)
print("variable object_creator_class is ObjectCreator: ", object_creator_class is ObjectCreator)
content/posts/meditations/Python_objects.ipynb
dm-wyncode/zipped-code
mit
4e2e4ce5cde64999ebcd0a5ef995d9ed
you can add attributes to it
print("ObjectCreator has an attribute 'new_attribute': ", hasattr(ObjectCreator, 'new_attribute')) ObjectCreator.new_attribute = 'foo' # you can add attributes to a class print("ObjectCreator has an attribute 'new_attribute': ", hasattr(ObjectCreator, 'new_attribute')) print("attribute 'new_attribute': ", ObjectCreator.new_attribute)
content/posts/meditations/Python_objects.ipynb
dm-wyncode/zipped-code
mit
573111c0aabcdcafd64013051ce547bc
you can pass it as a function parameter
def echo(o):
    print(o)

# you can pass a class as a parameter
print("return value of passing Object Creator to {}: ".format(echo), echo(ObjectCreator))

%%HTML
<p style="color:red;font-size: 150%;">Since classes are objects, you can create them on the fly, like any object.</p>

def get_class_by(name):
    class Foo:
        pass

    class Bar:
        pass

    classes = {
        'foo': Foo,
        'bar': Bar
    }
    return classes.get(name, None)

for class_ in (get_class_by(name) for name in ('foo', 'bar', )):
    pprint(class_)
content/posts/meditations/Python_objects.ipynb
dm-wyncode/zipped-code
mit
17fcfaf8d2bb2a15b2aeb63692f3f8ac
But it's not so dynamic, since you still have to write the whole class yourself. Since classes are objects, they must be generated by something. When you use the class keyword, Python creates this object automatically. But as with most things in Python, it gives you a way to do it manually. Remember the function type? The good old function that lets you know what type an object is:
print(type(1))
print(type("1"))
print(type(int))
print(type(ObjectCreator))
print(type(type))
content/posts/meditations/Python_objects.ipynb
dm-wyncode/zipped-code
mit
b8d11b76d82343b13b0e63480592c05d
Well, type has a completely different ability, it can also create classes on the fly. type can take the description of a class as parameters, and return a class.
classes = Foo, Bar = [type(name, (), {}) for name in ('Foo', 'Bar')]

for class_ in classes:
    pprint(class_)
content/posts/meditations/Python_objects.ipynb
dm-wyncode/zipped-code
mit
bae3ca9d9da4d4f68cfa5c4e9c3b848b
type accepts a dictionary to define the attributes of the class. So:
classes_with_attributes = Foo, Bar = [type(name, (), namespace)
                                      for name, namespace in zip(
                                          ('Foo', 'Bar'),
                                          (
                                              {'assigned_attr': 'foo_attr'},
                                              {'assigned_attr': 'bar_attr'}
                                          )
                                      )]

for class_ in classes_with_attributes:
    pprint([item for item in vars(class_).items()])
content/posts/meditations/Python_objects.ipynb
dm-wyncode/zipped-code
mit
b62f05baa7f7df3c79b864e500b7c3fe
Eventually you'll want to add methods to your class. Just define a function with the proper signature and assign it as an attribute.
def an_added_function(self):
    return "I am an added function."

Foo.added = an_added_function
foo = Foo()
print(foo.added())
content/posts/meditations/Python_objects.ipynb
dm-wyncode/zipped-code
mit
46a1fbe16b0331ee8f6bc95a4e2488e2
You see where we are going: in Python, classes are objects, and you can create a class on the fly, dynamically.
%%HTML <p style="color:red;font-size: 150%;">[Creating a class on the fly, dynamically] is what Python does when you use the keyword class, and it does so by using a metaclass.</p> %%HTML <p style="color:red;font-size: 150%;">Metaclasses are the 'stuff' that creates classes.</p>
content/posts/meditations/Python_objects.ipynb
dm-wyncode/zipped-code
mit
e3af60965cd1393dd8f54c035c829d6e
You define classes in order to create objects, right? But we learned that Python classes are objects.
%%HTML <p style="color:red;font-size: 150%;">Well, metaclasses are what create these objects. They are the classes' classes.</p> %%HTML <p style="color:red;font-size: 150%;">Everything, and I mean everything, is an object in Python. That includes ints, strings, functions and classes. All of them are objects. And all of them have been created from a class (which is also an object).</p>
content/posts/meditations/Python_objects.ipynb
dm-wyncode/zipped-code
mit
49826a0320e51271e3a01a25fd299572
Changing to the blog post entitled Python 3 OOP Part 5 - Metaclasses. The phrase "object, which inherits from nothing" reminds me of Eastern teachings of 'sunyata': emptiness, voidness, openness, nonexistence, thusness, etc.
```python
>>> a = 5
>>> type(a)
<class 'int'>
>>> a.__class__
<class 'int'>
>>> a.__class__.__bases__
(<class 'object'>,)
>>> object.__bases__
()   # object, which inherits from nothing.
>>> type(a)
<class 'int'>
>>> type(int)
<class 'type'>
>>> type(float)
<class 'type'>
>>> type(dict)
<class 'type'>
>>> type(object)
<class 'type'>
>>> type.__bases__
(<class 'object'>,)
```
When you think you grasped the type/object matter read this and start thinking again
```python
>>> type(type)
<class 'type'>
```
class MyType(type):
    pass

class MySpecialClass(metaclass=MyType):
    pass

msp = MySpecialClass()
type(msp)
type(MySpecialClass)
type(MyType)
content/posts/meditations/Python_objects.ipynb
dm-wyncode/zipped-code
mit
686fe6fe08f5b7c1773126ee8e28e9de
Metaclasses are a very advanced topic in Python, but they have many practical uses. For example, by means of a custom metaclass you may log any time a class is instanced, which can be important for applications that shall keep a low memory usage or have to monitor it.
%%HTML <p style="color:red;font-size: 150%;">"Build a class"? This is a task for metaclasses. The following implementation comes from Python 3 Patterns, Recipes and Idioms.</p> class Singleton(type): instance = None def __call__(cls, *args, **kwargs): if not cls.instance: cls.instance = super(Singleton, cls).__call__(*args, **kwargs) return cls.instance class ASingleton(metaclass=Singleton): pass a = ASingleton() b = ASingleton() print(a is b) print(hex(id(a))) print(hex(id(b)))
content/posts/meditations/Python_objects.ipynb
dm-wyncode/zipped-code
mit
47626ce590bc1bb3358be3e243911df6
The constructor mechanism in Python is, on the contrary, very important, and it is implemented by two methods instead of just one: __new__() and __init__().
%%HTML <p style="color:red;font-size: 150%;">The tasks of the two methods are very clear and distinct: __new__() shall perform actions needed when creating a new instance while __init__ deals with object initialization.</p> class MyClass: def __new__(cls, *args, **kwargs): obj = super().__new__(cls, *args, **kwargs) # do something here obj.one = 1 return obj # instance of the container class, so __init__ is called %%HTML <p style="color:red;font-size: 150%;"> Anyway, __init__() will be called only if you return an instance of the container class. </p> my_class = MyClass() my_class.one class MyInt: def __new__(cls, *args, **kwargs): obj = super().__new__(cls, *args, **kwargs) obj.join = ':'.join return obj mi = MyInt() print(mi.join(str(n) for n in range(10)))
content/posts/meditations/Python_objects.ipynb
dm-wyncode/zipped-code
mit
c1080d1d32d97ecf6517afe3cca3b564
Subclassing int Object creation is behaviour. For most classes it is enough to provide a different __init__ method, but for immutable classes one often has to provide a different __new__ method. In this subsection, as preparation for enumerated integers, we will start to code a subclass of int that behaves like bool. We will start with the string representation, which is fairly easy.
class MyBool(int):
    def __repr__(self):
        return 'MyBool.' + ['False', 'True'][self]

t = MyBool(1)
t

bool(2) == 1
MyBool(2) == 1

%%HTML
<p style="color:red;font-size: 150%;">In many classes we use __init__ to mutate the newly constructed object, typically by storing or otherwise using the arguments to __init__. But we can't do this with a subclass of int (or any other immutable) because they are immutable.</p>
content/posts/meditations/Python_objects.ipynb
dm-wyncode/zipped-code
mit
bd733091c0705cc65f87f3382d6ed632
The solution to the problem is to use __new__. Here we will show that it works, and elsewhere we will explain exactly what happens.
bool.__doc__

class NewBool(int):
    def __new__(cls, value):
        # bool
        return int.__new__(cls, bool(value))

y = NewBool(56)
y == 1
content/posts/meditations/Python_objects.ipynb
dm-wyncode/zipped-code
mit
938f9bfbe539d6de6c393596160c36be
<b>Question 4 Do multiple languages influence the reviews of apps?</b>
multi_language = app.loc[app['multiple languages'] == 'Y']
sin_language = app.loc[app['multiple languages'] == 'N']

multi_language['overall rating'].plot(kind = "density")
sin_language['overall rating'].plot(kind = "density")
plt.xlabel('Overall Rating')
plt.legend(labels = ['multiple languages','single language'], loc='upper right')
plt.title('Distribution of overall rating among apps with multiple/single languages')
plt.show()
notebooks/Multiple Languages Effects Analysis (Q4).ipynb
jpzhangvincent/MobileAppMarketAnalysis
mit
7e32d3b69c66003b58ddbca92fc64b12
<p>First, the data set is split into two parts: apps with multiple languages and apps with a single language. Then density plots are made for the two subsets, and from the plots we can see that the overall rating of apps with multiple languages is generally higher than that of apps with a single language. Formal tests still need to be performed.</p>
import scipy.stats

multi_language = list(multi_language['overall rating'])
sin_language = list(sin_language['overall rating'])

multiple = []
single = []
for each in multi_language:
    if each > 0:
        multiple.append(each)
for each in sin_language:
    if each > 0:
        single.append(each)

print(np.mean(multiple))
print(np.mean(single))

scipy.stats.ttest_ind(multiple, single, equal_var = False)
notebooks/Multiple Languages Effects Analysis (Q4).ipynb
jpzhangvincent/MobileAppMarketAnalysis
mit
519cc0aa2d4736da505ef9d30c91f1b5
<p>I perform a t-test here. We have two samples: apps with multiple languages and apps with a single language. I want to test whether the mean overall ratings of these two samples differ.</p> <p>The null hypothesis is that the mean overall rating for apps with multiple languages and the mean overall rating for apps with a single language are the same, and the alternative hypothesis is that they are not.</p> <p>From the result we can see that the p-value is 1.7812330368645647e-26, which is smaller than 0.05, so we reject the null hypothesis at significance level 0.05; that is, we conclude that the mean overall ratings of the two samples are not the same and that supporting multiple languages does influence the rating of an app.</p>
scipy.stats.f_oneway(multiple, single)
notebooks/Multiple Languages Effects Analysis (Q4).ipynb
jpzhangvincent/MobileAppMarketAnalysis
mit
665b5eb914c4608592024388b73627a7
<p>I also perform a one-way ANOVA test here.</p> <p>The null hypothesis is that the mean overall rating for apps with multiple languages and the mean overall rating for apps with a single language are the same, and the alternative hypothesis is that they are not.</p> <p>From the result we can see that the p-value is 3.0259308024434954e-26, which is smaller than 0.05, so we reject the null hypothesis at significance level 0.05; that is, we conclude that the mean overall ratings of the two samples are not the same and that supporting multiple languages does influence the rating of an app.</p>
scipy.stats.kruskal(multiple, single)
notebooks/Multiple Languages Effects Analysis (Q4).ipynb
jpzhangvincent/MobileAppMarketAnalysis
mit
2718d7bb04dfeac4d8783e8cad7084ed
<span> Let's parse </span>
from hit.process.processor import ATTMatrixHitProcessor
from hit.process.processor import ATTPlainHitProcessor

plainProcessor = ATTPlainHitProcessor()
matProcessor = ATTMatrixHitProcessor()
notebooks/Hit Processor.ipynb
Centre-Alt-Rendiment-Esportiu/att
gpl-3.0
1faaacdac956bca3e6f2595f22139fef
<span> Parse a Hit with Plain Processor </span>
plainHit = plainProcessor.parse_hit("hit: {0:25 1549:4 2757:4 1392:4 2264:7 1764:7 1942:5 2984:5 r}")
print plainHit
notebooks/Hit Processor.ipynb
Centre-Alt-Rendiment-Esportiu/att
gpl-3.0
51a83719ed1571fdbd4905bfe01dc8df
<span> Compute diffs: </span>
plainDiffs = plainProcessor.hit_diffs(plainHit["sensor_timings"])
print plainDiffs
notebooks/Hit Processor.ipynb
Centre-Alt-Rendiment-Esportiu/att
gpl-3.0
d70f8172710ad2a1fb21cc6bf12a7947
<span> Parse a Hit with Matrix Processor </span>
matHit = matProcessor.parse_hit("hit: {0:25 1549:4 2757:4 1392:4 2264:7 1764:7 1942:5 2984:5 r}")
print matHit
notebooks/Hit Processor.ipynb
Centre-Alt-Rendiment-Esportiu/att
gpl-3.0
cfd3302e85ab494dc3f240472318dace
<span> Compute diffs: </span>
matDiffs = matProcessor.hit_diffs((matHit["sensor_timings"]))
print matDiffs
matDiffs
notebooks/Hit Processor.ipynb
Centre-Alt-Rendiment-Esportiu/att
gpl-3.0
b624d4290a88dec94e7d588ae1513b6c
Tensor multiplication with transpose in numpy and einsum
w = np.arange(6).reshape(2,3).astype(np.float32)
x = np.ones((1,3), dtype=np.float32)
print("w:\n", w)
print("x:\n", x)

y = np.matmul(w, np.transpose(x))
print("y:\n", y)

y = einsum('ij,kj->ik', w, x)
print("y:\n", y)
versions/2022/tools/python/einsum_demo.ipynb
roatienza/Deep-Learning-Experiments
mit
f4b2f322cda34de45f3c7c506cfd3f01
Properties of square matrices in numpy and einsum We demonstrate diagonal.
w = np.arange(9).reshape(3,3).astype(np.float32)
d = np.diag(w)
print("w:\n", w)
print("d:\n", d)
d = einsum('ii->i', w)
print("d:\n", d)
versions/2022/tools/python/einsum_demo.ipynb
roatienza/Deep-Learning-Experiments
mit
019ab1867f001899c1645601d73950cf
Trace.
t = np.trace(w)
print("t:\n", t)
t = einsum('ii->', w)
print("t:\n", t)
versions/2022/tools/python/einsum_demo.ipynb
roatienza/Deep-Learning-Experiments
mit
8de3bd0945585b8d789dc1086d3c2119
Sum along an axis.
s = np.sum(w, axis=0)
print("s:\n", s)
s = einsum('ij->j', w)
print("s:\n", s)
versions/2022/tools/python/einsum_demo.ipynb
roatienza/Deep-Learning-Experiments
mit
47c0ba9c419deb81b965a4045ec0974f
Let us demonstrate tensor transpose. We can also use w.T to transpose w in numpy.
t = np.transpose(w)
print("t:\n", t)
t = einsum("ij->ji", w)
print("t:\n", t)
versions/2022/tools/python/einsum_demo.ipynb
roatienza/Deep-Learning-Experiments
mit
fddb9bd19b9f7a1f7c1fdbf4f2b179b9
Dot, inner and outer products in numpy and einsum.
a = np.ones((3,), dtype=np.float32)
b = np.ones((3,), dtype=np.float32) * 2
print("a:\n", a)
print("b:\n", b)
d = np.dot(a,b)
print("d:\n", d)
d = einsum("i,i->", a, b)
print("d:\n", d)
i = np.inner(a, b)
print("i:\n", i)
i = einsum("i,i->", a, b)
print("i:\n", i)
o = np.outer(a,b)
print("o:\n", o)
o = einsum("i,j->ij", a, b)
print("o:\n", o)
versions/2022/tools/python/einsum_demo.ipynb
roatienza/Deep-Learning-Experiments
mit
30ea59eb0e7a1eef720456d744a5690c
Inheritance

Inheritance is an OOP practice where a class (called the subclass or child class) inherits the properties, namely the data and behaviour, of another class (called the superclass or parent class). Let us see this through an example.
# BITSian class
class BITSian():
    def __init__(self, name, id_no, hostel):
        self.name = name
        self.id_no = id_no
        self.hostel = hostel

    def get_name(self):
        return self.name

    def get_id(self):
        return self.id_no

    def get_hostel(self):
        return self.hostel


# IITian class
class IITian():
    def __init__(self, name, id_no, hall):
        self.name = name
        self.id_no = id_no
        self.hall = hall

    def get_name(self):
        return self.name

    def get_id(self):
        return self.id_no

    def get_hall(self):
        return self.hall
Week 4/Lecture_9_Inheritance_Overloading_Overidding.ipynb
bpgc-cte/python2017
mit
4e35c52d527898ec951906fba4ab4248
While writing code you must always make sure that you keep it as concise as possible and avoid any sort of repetition. Now, we can clearly see the commonalities between the BITSian and IITian classes. It would be natural to assume that every college student, whether from BITS or IIT or pretty much any other institution in the world, will have a name and a unique ID number. Such a degree of commonality means that there could be a higher level of abstraction to describe both BITSian and IITian to a decent extent.
class CollegeStudent():
    def __init__(self, name, id_no):
        self.name = name
        self.id_no = id_no

    def get_name(self):
        return self.name

    def get_id(self):
        return self.id_no


# BITSian class
class BITSian(CollegeStudent):
    def __init__(self, name, id_no, hostel):
        self.name = name
        self.id_no = id_no
        self.hostel = hostel

    def get_hostel(self):
        return self.hostel


# IITian class
class IITian(CollegeStudent):
    def __init__(self, name, id_no, hall):
        self.name = name
        self.id_no = id_no
        self.hall = hall

    def get_hall(self):
        return self.hall


a = BITSian("Arif", "2015B4A70370G", "AH-5")
b = IITian("Abhishek", "2213civil32K", "Hall-10")

print(a.get_name())
print(b.get_name())
print(a.get_hostel())
print(b.get_hall())
Week 4/Lecture_9_Inheritance_Overloading_Overidding.ipynb
bpgc-cte/python2017
mit
9bf11fa27b7fa279bb8075a371099c4b
So, the class definition is written as: class SubClassName(SuperClassName):

Using super()

The main usage of super() in Python is to refer to the parent class without naming it explicitly. This becomes really useful in multiple inheritance, where you won't have to worry about the parent class name.
class Student():
    def __init__(self, name):
        self.name = name

    def get_name(self):
        return self.name


class CollegeStudent(Student):
    def __init__(self, name, id_no):
        super().__init__(name)
        self.id_no = id_no

    def get_id(self):
        return self.id_no


# BITSian class
class BITSian(CollegeStudent):
    def __init__(self, name, id_no, hostel):
        super().__init__(name, id_no)
        self.hostel = hostel

    def get_hostel(self):
        return self.hostel


# IITian class
class IITian(CollegeStudent):
    def __init__(self, name, id_no, hall):
        super().__init__(name, id_no)
        self.hall = hall

    def get_hall(self):
        return self.hall


a = BITSian("Arif", "2015B4A70370G", "AH-5")
b = IITian("Abhishek", "2213civil32K", "Hall-10")

print(a.get_name())
print(b.get_name())
print(a.get_hostel())
print(b.get_hall())
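The benefit in multiple inheritance comes from the method resolution order (MRO) that super() follows. A minimal sketch with hypothetical classes of my own (not from the lecture) to illustrate the idea:

```python
class GradedMixin:
    def tag(self):
        return "graded " + super().tag()


class Member:
    def tag(self):
        return "member"


# super() follows the MRO of the instance's class, so GradedMixin.tag chains into
# Member.tag without GradedMixin ever naming Member explicitly.
class GradedMember(GradedMixin, Member):
    pass


g = GradedMember()
print(g.tag())                                     # graded member
print([c.__name__ for c in GradedMember.__mro__])  # ['GradedMember', 'GradedMixin', 'Member', 'object']
```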
Week 4/Lecture_9_Inheritance_Overloading_Overidding.ipynb
bpgc-cte/python2017
mit
f75063e4457e7da0cb7603bb5b52fe6a
You may come across the following constructor call for a superclass on the net: super(self.__class__, self).__init__(). Please do not do this; it can lead to infinite recursion. Go through this link for more clarification: Understanding Python Super with init methods

Method Overriding

This is a phenomenon where a subclass method with a given name is executed in preference to the superclass method of the same name.
class Student():
    def __init__(self, name):
        self.name = name

    def get_name(self):
        return "Student : " + self.name


class CollegeStudent(Student):
    def __init__(self, name, id_no):
        super().__init__(name)
        self.id_no = id_no

    def get_id(self):
        return self.id_no

    def get_name(self):
        return "College Student : " + self.name


class BITSian(CollegeStudent):
    def __init__(self, name, id_no, hostel):
        super().__init__(name, id_no)
        self.hostel = hostel

    def get_hostel(self):
        return self.hostel

    def get_name(self):
        return "Gen BITSian --> " + self.name


class IITian(CollegeStudent):
    def __init__(self, name, id_no, hall):
        super().__init__(name, id_no)
        self.hall = hall

    def get_hall(self):
        return self.hall

    def get_name(self):
        return "IITian --> " + self.name


a = BITSian("Arif", "2015B4A70370G", "AH-5")
b = IITian("Abhishek", "2213civil32K", "Hall-10")

print(a.get_name())
print(b.get_name())
print()
print(super(BITSian, a).get_name())
print(super(IITian, b).get_name())
print(super(CollegeStudent, a).get_name())
Week 4/Lecture_9_Inheritance_Overloading_Overidding.ipynb
bpgc-cte/python2017
mit
09fd57c39be7da236874a86df2b97a83
In my experience it's more convenient to build the model with a log-softmax output using nn.LogSoftmax or F.log_softmax (documentation). Then you can get the actual probabilities by taking the exponential, torch.exp(output). With a log-softmax output, you want to use the negative log likelihood loss, nn.NLLLoss (documentation).

Exercise: Build a model that returns the log-softmax as the output and calculate the loss using the negative log likelihood loss.
## Solution

# Build a feed-forward network
model = nn.Sequential(nn.Linear(784, 128),
                      nn.ReLU(),
                      nn.Linear(128, 64),
                      nn.ReLU(),
                      nn.Linear(64, 10),
                      nn.LogSoftmax(dim=1))

# Define the loss
criterion = nn.NLLLoss()

# Get our data
images, labels = next(iter(trainloader))
# Flatten images
images = images.view(images.shape[0], -1)

# Forward pass, get our log-probabilities
logps = model(images)
# Calculate the loss with the logps and the labels
loss = criterion(logps, labels)

print(loss)
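As a quick sanity check (my own aside, not part of the original solution), exponentiating the log-softmax output recovers proper probabilities, with each row summing to roughly 1:

```python
# logps comes from the cell above; exp turns log-probabilities into probabilities
ps = torch.exp(logps)
print(ps.shape)           # e.g. torch.Size([64, 10]) for a batch of 64 images
print(ps.sum(dim=1)[:5])  # each entry is (numerically) close to 1.0
```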
DEEP LEARNING/Pytorch from scratch/MLP/Part 3 - Training Neural Networks (Solution).ipynb
Diyago/Machine-Learning-scripts
apache-2.0
d6facf227404f259cb24853945682fd0
Autograd

Now that we know how to calculate a loss, how do we use it to perform backpropagation? Torch provides a module, autograd, for automatically calculating the gradients of tensors. We can use it to calculate the gradients of all our parameters with respect to the loss. Autograd works by keeping track of operations performed on tensors, then going backwards through those operations, calculating gradients along the way. To make sure PyTorch keeps track of operations on a tensor and calculates the gradients, you need to set requires_grad = True on a tensor. You can do this at creation with the requires_grad keyword, or at any time with x.requires_grad_(True).

You can turn off gradients for a block of code with the torch.no_grad() context:
```python
x = torch.zeros(1, requires_grad=True)
>>> with torch.no_grad():
...     y = x * 2
>>> y.requires_grad
False
```

Also, you can turn gradients on or off altogether with torch.set_grad_enabled(True|False).

The gradients are computed with respect to some variable z with z.backward(). This does a backward pass through the operations that created z.
x = torch.randn(2,2, requires_grad=True)
print(x)

y = x**2
print(y)
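To tie this back to the explanation above, a small continuation (a sketch, not part of the original cell): reduce y to a scalar, call .backward(), and inspect the gradient that autograd accumulated on x. For z = mean(x**2) the analytic gradient is x/2.

```python
# Continue from the cell above: backward() populates x.grad
z = y.mean()
z.backward()
print(x.grad)            # should equal x/2
print(x.grad == x / 2)   # elementwise check
```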
DEEP LEARNING/Pytorch from scratch/MLP/Part 3 - Training Neural Networks (Solution).ipynb
Diyago/Machine-Learning-scripts
apache-2.0
f8eff0b74e393b98b194a3c381e36a94
These gradient calculations are particularly useful for neural networks. For training we need the gradients of the weights with respect to the cost. With PyTorch, we run data forward through the network to calculate the loss, then go backwards to calculate the gradients with respect to the loss. Once we have the gradients we can make a gradient descent step.

Loss and Autograd together

When we create a network with PyTorch, all of the parameters are initialized with requires_grad = True. This means that when we calculate the loss and call loss.backward(), the gradients for the parameters are calculated. These gradients are used to update the weights with gradient descent. Below you can see an example of calculating the gradients using a backwards pass.
# Build a feed-forward network
model = nn.Sequential(nn.Linear(784, 128),
                      nn.ReLU(),
                      nn.Linear(128, 64),
                      nn.ReLU(),
                      nn.Linear(64, 10),
                      nn.LogSoftmax(dim=1))

criterion = nn.NLLLoss()
images, labels = next(iter(trainloader))
images = images.view(images.shape[0], -1)

logps = model(images)
loss = criterion(logps, labels)

print('Before backward pass: \n', model[0].weight.grad)

loss.backward()

print('After backward pass: \n', model[0].weight.grad)
DEEP LEARNING/Pytorch from scratch/MLP/Part 3 - Training Neural Networks (Solution).ipynb
Diyago/Machine-Learning-scripts
apache-2.0
e820d04afa29bcf0a2c9751e91a4d4c4
Training the network!

There's one last piece we need to start training: an optimizer that we'll use to update the weights with the gradients. We get these from PyTorch's optim package. For example, we can use stochastic gradient descent with optim.SGD. You can see how to define an optimizer below.
from torch import optim

# Optimizers require the parameters to optimize and a learning rate
optimizer = optim.SGD(model.parameters(), lr=0.01)
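As an aside (not part of the lesson), every optimizer in torch.optim uses the same constructor pattern, so trying a different one is a one-line change; I bind it to a separate, hypothetical name here so the SGD optimizer above is left in place:

```python
# Adam: same interface as SGD, with adaptive per-parameter learning rates
adam_optimizer = optim.Adam(model.parameters(), lr=0.001)
```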
DEEP LEARNING/Pytorch from scratch/MLP/Part 3 - Training Neural Networks (Solution).ipynb
Diyago/Machine-Learning-scripts
apache-2.0
754ed5ef799f52d419d2a9357c5f443d
Now we know how to use all the individual parts so it's time to see how they work together. Let's consider just one learning step before looping through all the data. The general process with PyTorch:

* Make a forward pass through the network
* Use the network output to calculate the loss
* Perform a backward pass through the network with loss.backward() to calculate the gradients
* Take a step with the optimizer to update the weights

Below I'll go through one training step and print out the weights and gradients so you can see how it changes. Note that I have a line of code optimizer.zero_grad(). When you do multiple backwards passes with the same parameters, the gradients are accumulated. This means that you need to zero the gradients on each training pass or you'll retain gradients from previous training batches.
print('Initial weights - ', model[0].weight)

images, labels = next(iter(trainloader))
images.resize_(64, 784)

# Clear the gradients, do this because gradients are accumulated
optimizer.zero_grad()

# Forward pass, then backward pass, then update weights
output = model(images)
loss = criterion(output, labels)
loss.backward()
print('Gradient -', model[0].weight.grad)

# Take an update step and view the new weights
optimizer.step()
print('Updated weights - ', model[0].weight)
DEEP LEARNING/Pytorch from scratch/MLP/Part 3 - Training Neural Networks (Solution).ipynb
Diyago/Machine-Learning-scripts
apache-2.0
51ce9e98a3f7052fa0ac787c8390163f
Training for real

Now we'll put this algorithm into a loop so we can go through all the images. Some nomenclature: one pass through the entire dataset is called an epoch. So here we're going to loop through trainloader to get our training batches. For each batch, we'll do a training pass where we calculate the loss, do a backwards pass, and update the weights.

Exercise: Implement the training pass for our network. If you implemented it correctly, you should see the training loss drop with each epoch.
model = nn.Sequential(nn.Linear(784, 128),
                      nn.ReLU(),
                      nn.Linear(128, 64),
                      nn.ReLU(),
                      nn.Linear(64, 10),
                      nn.LogSoftmax(dim=1))

criterion = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.003)

epochs = 5
for e in range(epochs):
    running_loss = 0
    for images, labels in trainloader:
        # Flatten MNIST images into a 784 long vector
        images = images.view(images.shape[0], -1)

        # TODO: Training pass
        optimizer.zero_grad()

        output = model(images)
        loss = criterion(output, labels)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
    else:
        print(f"Training loss: {running_loss/len(trainloader)}")
DEEP LEARNING/Pytorch from scratch/MLP/Part 3 - Training Neural Networks (Solution).ipynb
Diyago/Machine-Learning-scripts
apache-2.0
ff749e44f419eda49f9e082fd564f9b8
With the network trained, we can check out its predictions.
%matplotlib inline
import helper

images, labels = next(iter(trainloader))

img = images[0].view(1, 784)
# Turn off gradients to speed up this part
with torch.no_grad():
    logps = model(img)

# Output of the network are log-probabilities, need to take exponential for probabilities
ps = torch.exp(logps)
helper.view_classify(img.view(1, 28, 28), ps)
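If you don't have the course's helper module available, a rough matplotlib-only substitute (my own sketch, not from the original notebook) that shows the digit next to its class probabilities could look like this:

```python
import matplotlib.pyplot as plt

# img is (1, 784) and ps is (1, 10) from the cell above
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
ax1.imshow(img.view(28, 28).numpy(), cmap='gray')
ax1.axis('off')
ax2.barh(range(10), ps.view(-1).numpy())
ax2.set_yticks(range(10))
ax2.set_xlabel('class probability')
plt.tight_layout()
plt.show()
```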
DEEP LEARNING/Pytorch from scratch/MLP/Part 3 - Training Neural Networks (Solution).ipynb
Diyago/Machine-Learning-scripts
apache-2.0
cbd95b79def701acd35a43024232b17b
Build LSI Model
model_tfidf = models.TfidfModel(corpus)
corpus_tfidf = model_tfidf[corpus]
Gensim - Word2Vec.ipynb
banyh/ShareIPythonNotebook
gpl-3.0
f39b27817dbf704ec3eebc865b95cd58
Parameters of LsiModel

* num_topics=200: the number of dimensions to keep after the SVD decomposition
* id2word: the dictionary of the corpus, for conveniently converting ids back to words
* chunksize=20000: how much data is processed in memory at once; larger values use more memory but also process faster
* decay=1.0: because the data is processed in chunks, it is split into old and new parts; when a new chunk arrives, decay is the weight given to the old chunks, and setting a value below 1.0 makes old data gradually "forgotten"
* distributed=False: whether to enable distributed computation, with each core handling one chunk
* onepass=True: set to False to force the multi-pass stochastic algorithm
* power_iters=2: the number of power iterations in multi-pass mode; larger values give higher accuracy but take longer

Let $X$ be the TF-IDF matrix of the corpus. After the SVD decomposition we obtain the left matrix lsi.projection.u and the singular values lsi.projection.s.

$X = USV^T$, where $U \in \mathbb{R}^{|V|\times m}$, $S \in \mathbb{R}^{m\times m}$, $V \in \mathbb{R}^{m\times |D|}$

lsi[X] is equivalent to $U^{-1}X=VS$. Therefore, to obtain the value of $V$, we can compute $S^{-1}U^{-1}X$, i.e. divide lsi[X] by $S$.

Because lsi[X] has no concrete values by itself (it is just a generator), we first convert it to a numpy array with gensim.matutils.corpus2dense and then divide by lsi.projection.s.
model_lsi = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=200)
corpus_lsi = model_lsi[corpus_tfidf]

# Compute V, which can be used as the document vectors
docvec_lsi = gensim.matutils.corpus2dense(corpus_lsi, len(model_lsi.projection.s)).T / model_lsi.projection.s

# For word vectors, use the column vectors of U directly
wordsim_lsi = similarities.MatrixSimilarity(model_lsi.projection.u, num_features=model_lsi.projection.u.shape[1])
# Second version: use U*S as the word vectors
wordsim_lsi2 = similarities.MatrixSimilarity(model_lsi.projection.u * model_lsi.projection.s, num_features=model_lsi.projection.u.shape[1])

def lsi_query(query, use_ver2=False):
    qvec = model_lsi[model_tfidf[dictionary.doc2bow(query.split())]]
    if use_ver2:
        s = wordsim_lsi2[qvec]
    else:
        s = wordsim_lsi[qvec]
    return [dictionary[i] for i in s.argsort()[-10:]]

print(lsi_query('energy'))
print(lsi_query('energy', True))
Gensim - Word2Vec.ipynb
banyh/ShareIPythonNotebook
gpl-3.0
45e13283e8744919b035f66802d20135
Build Word2Vec Model Word2Vec的參數 sentences: 用來訓練的list of list of words,但不是必要的,因為可以先建好model,再慢慢丟資料訓練 size=100: vector的維度 alpha=0.025: 初始的學習速度 window=5: context window的大小 min_count=5: 出現次數小於min_count的單字直接忽略 max_vocab_size: 限制vocabulary的大小,如果單字太多,就忽略最少見的單字,預設為無限制 sample=0.001: subsampling,隨機刪除機率小於0.001的單字,兼具擴大context windows與減少stopword的功能 seed=1: 隨機產生器的random seed workers=3: 在多核心的系統上,要用幾個核心來train min_alpha=0.0001: 學習速度最後收斂的最小值 sg=0: 0表示用CBOW,1表示用skip-gram hs=0: 1表示用hierarchical soft-max,0表示用negative sampling negative=5: 表示使用幾組negative sample來訓練 cbow_mean=1: 在使用CBOW的前提下,0表示使用sum作為hidden layer,1表示使用mean作為hidden layer hashfxn=&lt;build-in hash function&gt;: 隨機初始化weights使用的hash function iter=5: 整個corpus要訓練幾次 trim_rule: None表示小於min_count的單字會被忽略,也可以指定一個function(word, count, min_count),這個function的傳回值有三種,util.RULE_DISCARD、util.RULE_KEEP、util.RULE_DEFAULT。這個參數會影響dictionary的生成 sorted_vocab=1: 1表示在指定word index前,先按照頻率將單字排序 batch_words=10000: 要傳給worker的單字長度 訓練方法 先產生一個空的model model_w2v = models.Word2Vec(size=200, sg=1) 傳入一個list of words更新vocabulary sent = [['first','sent'], ['second','sent']] model_w2v.build_vocab(sent) 傳入一個list of words更新model model_w2v.train(sent)
all_text = [doc.split() for doc in documents]

model_w2v = models.Word2Vec(size=200, sg=1)
%timeit model_w2v.build_vocab(all_text)
%timeit model_w2v.train(all_text)

model_w2v.most_similar_cosmul(['deep','learning'])
Gensim - Word2Vec.ipynb
banyh/ShareIPythonNotebook
gpl-3.0
78713f0ec606946f866fce54b2279c73
Build Doc2Vec Model Doc2Vec的參數 documents=None: 用來訓練的document,可以是list of TaggedDocument,或TaggedDocument generator size=300: vector的維度 alpha=0.025: 初始的學習速度 window=8: context window的大小 min_count=5: 出現次數小於min_count的單字直接忽略 max_vocab_size=None: 限制vocabulary的大小,如果單字太多,就忽略最少見的單字,預設為無限制 sample=0: subsampling,隨機刪除機率小於sample的單字,兼具擴大context windows與減少stopword的功能 seed=1: 隨機產生器的random seed workers=1: 在多核心的系統上,要用幾個核心來train min_alpha=0.0001: 學習速度最後收斂的最小值 hs=1: 1表示用hierarchical soft-max,0表示用negative sampling negative=0: 表示使用幾組negative sample來訓練 dbow_words=0: 1表示同時訓練出word-vector(用skip-gram)及doc-vector(用DBOW),0表示只訓練doc-vector dm=1: 1表示用distributed memory(PV-DM)來訓練,0表示用distributed bag-of-word(PV-DBOW)來訓練 dm_concat=0: 1表示不要sum/average而用concatenation of context vectors,0表示用sum/average。使用concatenation會產生較大的model,而且輸入的vector長度會變長 dm_mean=0: 在使用DBOW而且dm_concat=0的前提下,0表示使用sum作為hidden layer,1表示使用mean作為hidden layer dm_tag_count=1: 當dm_concat=1時,預期每個document有幾個document tags trim_rule=None: None表示小於min_count的單字會被忽略,也可以指定一個function(word, count, min_count),這個function的傳回值有三種,util.RULE_DISCARD、util.RULE_KEEP、util.RULE_DEFAULT。這個參數會影響dictionary的生成
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

class PatentDocGenerator(object):
    def __init__(self, filename):
        self.filename = filename

    def __iter__(self):
        f = codecs.open(self.filename, 'r', 'UTF-8')
        for line in f:
            text, appnum = docs_out(line)
            yield TaggedDocument(text.split(), appnum.split())

doc = PatentDocGenerator('/share/USPatentData/tokenized_appDate_2013/2013USPTOPatents_by_skip_1.txt.tokenized')
%timeit model_d2v = Doc2Vec(doc, size=200, window=8, sample=1e-5, hs=0, negative=5)

doc = PatentDocGenerator('/share/USPatentData/tokenized_appDate_2013/2013USPTOPatents_by_skip_1.txt.tokenized')
model_d2v = Doc2Vec(doc, size=200, window=8, sample=1e-5, hs=0, negative=5)
model_d2v.docvecs.most_similar(['20140187118'])

m = Doc2Vec(size=200, window=8, sample=1e-5, hs=0, negative=5)
m.build_vocab(doc)
m.train(doc)
m.docvecs.most_similar(['20140187118'])
Gensim - Word2Vec.ipynb
banyh/ShareIPythonNotebook
gpl-3.0
8cedc5327cd0d58cfb92915f9b9300a1
Build Doc2Vec Model from 2013 USPTO Patents
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

class PatentDocGenerator(object):
    def __init__(self, filename):
        self.filename = filename

    def __iter__(self):
        f = codecs.open(self.filename, 'r', 'UTF-8')
        for line in f:
            text, appnum = docs_out(line)
            yield TaggedDocument(text.split(), appnum.split())

model_d2v = Doc2Vec(size=200, window=8, sample=1e-5, hs=0, negative=5)
root = '/share/USPatentData/tokenized_appDate_2013/'

for fn in sorted(listdir(root)):
    doc = PatentDocGenerator(os.path.join(root, fn))
    start = dt.now()
    model_d2v.build_vocab(doc)
    model_d2v.train(doc)
    logging.info('{} training time: {}'.format(fn, str(dt.now() - start)))

model_d2v.save("doc2vec_uspto_2013.model")
Gensim - Word2Vec.ipynb
banyh/ShareIPythonNotebook
gpl-3.0
52c8ad4c77e37747ebd63e9698ba219e
To start we'll need some basic libraries. First numpy will be needed for basic array manipulation. Since we will be visualising the results we will need matplotlib and seaborn. Finally we will need umap for doing the dimension reduction itself.
!pip install numpy matplotlib seaborn umap-learn
notebooks/AnimatingUMAP.ipynb
lmcinnes/umap
bsd-3-clause
717d0097e68c191c1ed9b9b99219f6b6
To start let's load everything we'll need
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
from matplotlib import animation
from IPython.display import HTML
import seaborn as sns
import itertools

sns.set(style='white', rc={'figure.figsize':(14, 12), 'animation.html': 'html5'})

# Ignore UserWarnings
import warnings
warnings.simplefilter('ignore', UserWarning)

from sklearn.datasets import load_digits
from umap import UMAP
notebooks/AnimatingUMAP.ipynb
lmcinnes/umap
bsd-3-clause
d00367c3ba28c68334fb02e136929cc5
To try this out we'll need a reasonably small dataset (so embedding runs don't take too long, since we'll be doing a lot of them). For ease of reproducibility for everyone else I'll use the digits dataset from sklearn. If you want to try other datasets just drop them in here; COIL20 might be interesting, or you might have your own data.
digits = load_digits()
data = digits.data
data
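For orientation (an aside, not in the original notebook), the digits dataset is 1797 samples of 64 features, the 8x8 pixel images flattened, with 10 target classes:

```python
# Quick look at the dataset dimensions and label set
print(data.shape)                # (1797, 64)
print(digits.target.shape)       # (1797,)
print(np.unique(digits.target))  # [0 1 2 3 4 5 6 7 8 9]
```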
notebooks/AnimatingUMAP.ipynb
lmcinnes/umap
bsd-3-clause
be5ec579fbad1016ffb0d35c8dbfd406
We need to move the points in between the embeddings given by different parameter values. There are potentially fancy ways to do this (something using rotation and reflection to get an initial alignment might be interesting), but we'll use straightforward linear interpolation between the two embeddings. To do this we'll need a simple function that can turn out intermediate embeddings for the in-between frames of the animation.
def tween(e1, e2, n_frames=20):
    for i in range(5):
        yield e1
    for i in range(n_frames):
        alpha = i / float(n_frames - 1)
        yield (1 - alpha) * e1 + alpha * e2
    for i in range(5):
        yield e2
    return
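A tiny usage check of tween (my own illustration, not from the original notebook): with two toy embeddings of two points each, it yields 5 hold frames, 20 interpolated frames, and 5 more hold frames, 30 in total.

```python
# Two fake 2D embeddings of the same two points
e1 = np.array([[0.0, 0.0], [1.0, 1.0]])
e2 = np.array([[2.0, 0.0], [3.0, 1.0]])

frames = list(tween(e1, e2))
print(len(frames))   # 30 = 5 + 20 + 5
print(frames[0])     # starts at e1
print(frames[-1])    # ends at e2
```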
notebooks/AnimatingUMAP.ipynb
lmcinnes/umap
bsd-3-clause
4ef097d4a455e7b94a72fbb82d48873e
Now that we can fill in the intermediate frames we just need to generate all the embeddings. We'll create a function that takes an argument name and a set of parameter values and then generates all the embeddings, including the in-between frames.
def generate_frame_data(data, arg_name='n_neighbors', arg_list=[]):
    result = []
    es = []
    for arg in arg_list:
        kwargs = {arg_name: arg}
        if len(es) > 0:
            es.append(UMAP(init=es[-1], negative_sample_rate=3, **kwargs).fit_transform(data))
        else:
            es.append(UMAP(negative_sample_rate=3, **kwargs).fit_transform(data))

    for e1, e2 in zip(es[:-1], es[1:]):
        result.extend(list(tween(e1, e2)))

    return result
notebooks/AnimatingUMAP.ipynb
lmcinnes/umap
bsd-3-clause
b9d182c28852775af8b099cd0a36f965
Next we just need to create a function to actually generate the animation given a list of embeddings (one for each frame). This is really just a matter of working through the details of how matplotlib generates animations -- I would refer you again to Jake's tutorial if you are interested in the detailed mechanics of this.
def create_animation(frame_data, arg_name='n_neighbors', arg_list=[]):
    fig, ax = plt.subplots()
    all_data = np.vstack(frame_data)
    frame_bounds = (all_data[:, 0].min() * 1.1,
                    all_data[:, 0].max() * 1.1,
                    all_data[:, 1].min() * 1.1,
                    all_data[:, 1].max() * 1.1)
    ax.set_xlim(frame_bounds[0], frame_bounds[1])
    ax.set_ylim(frame_bounds[2], frame_bounds[3])
    points = ax.scatter(frame_data[0][:, 0], frame_data[0][:, 1],
                        s=5, c=digits.target, cmap='Spectral', animated=True)
    title = ax.set_title('', fontsize=24)
    ax.set_xticks([])
    ax.set_yticks([])
    cbar = fig.colorbar(
        points,
        cax=make_axes_locatable(ax).append_axes("right", size="5%", pad=0.05),
        orientation="vertical",
        values=np.arange(10),
        boundaries=np.arange(11) - 0.5,
        ticks=np.arange(10),
        drawedges=True,
    )
    cbar.ax.yaxis.set_ticklabels(np.arange(10), fontsize=18)

    def init():
        points.set_offsets(frame_data[0])
        arg = arg_list[0]
        arg_str = f'{arg:.3f}' if isinstance(arg, float) else f'{arg}'
        title.set_text(f'UMAP with {arg_name}={arg_str}')
        return (points,)

    def animate(i):
        points.set_offsets(frame_data[i])
        if (i + 15) % 30 == 0:
            arg = arg_list[(i + 15) // 30]
            arg_str = f'{arg:.3f}' if isinstance(arg, float) else f'{arg}'
            title.set_text(f'UMAP with {arg_name}={arg_str}')
        return (points,)

    anim = animation.FuncAnimation(fig, animate, init_func=init,
                                   frames=len(frame_data), interval=20, blit=True)
    plt.close()
    return anim
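Putting the two helpers together would look roughly like this (a sketch; the particular n_neighbors values are my own choice, not taken from the original notebook, and rendering the video requires ffmpeg):

```python
# Sweep n_neighbors and render the resulting animation inline
n_neighbors_values = [3, 5, 10, 20, 50]
frame_data = generate_frame_data(data, 'n_neighbors', n_neighbors_values)
anim = create_animation(frame_data, 'n_neighbors', n_neighbors_values)
HTML(anim.to_html5_video())
```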
notebooks/AnimatingUMAP.ipynb
lmcinnes/umap
bsd-3-clause
11ab210e4319a665e11ec6b428233744